Astrophysics team lights the way for more accurate model of the universe

Abell 370 is a galaxy cluster about 4 billion light-years from Earth in which astronomers observe the phenomenon of gravitational lensing, the warping of space-time by the cluster's gravitational field that distorts the light from galaxies lying far behind it. This manifests as arcs and streaks in the picture, which are the stretched images of background galaxies. Credit: NASA/Space Telescope Science Institute

Light from distant galaxies reveals important information about the nature of the universe and allows scientists to develop high-precision models of the history, evolution and structure of the cosmos. The gravity associated with massive pockets of dark matter that lie between Earth and these galaxies, however, plays havoc with those galactic light signals. Gravity distorts the galaxies' light, a process called gravitational lensing, and also slightly aligns the galaxies physically, resulting in additional lensing-like signals that contaminate the true data.

In a study first published Aug. 5 in The Astrophysical Journal Letters, University of Texas at Dallas scientists demonstrated the first use of a method called self-calibration to remove contamination from gravitational lensing signals. The results should lead to more accurate cosmological models of the universe, said Dr. Mustapha Ishak-Boushaki, professor of physics in the School of Natural Sciences and Mathematics and the corresponding author of the study.

"The self-calibration method is something others proposed about 10 years ago; many thought it was just a theoretical method and moved away from it," Ishak-Boushaki said. "But I intuitively felt the promise. After eight years of persistent investigation maturing the method itself, and then the last two years applying it to the data, it bore fruit with important consequences for cosmological studies."
A lens on the universe

Gravitational lensing is one of the most promising methods in cosmology to provide information on the parameters that underlie the current model of the universe. "It can help us map the distribution of dark matter and discover information about the structure of the universe. But the measurement of such cosmological parameters can be off by as much as 30% if we do not extract the contamination in the gravitational lensing signal," Ishak-Boushaki said.

Due to the way distant galaxies form and the environment they form in, they are slightly physically aligned with the dark matter close to them. This intrinsic alignment generates additional spurious lensing signals, or a bias, which contaminate the data from the galaxies and thus skew the measurement of key cosmological parameters, including those that describe the amount of dark matter and dark energy in the universe and how fast galaxies move away from each other.

To complicate matters further, there are two types of intrinsic alignment that require different methods of mitigation. In their study, the research team used the self-calibration method to extract the nuisance signals from a type of alignment called intrinsic shape-gravitational shear, which is the most critical component.

"Our work significantly increases the chances of success to measure the properties of dark energy in an accurate way, which will allow us to understand what is causing cosmic acceleration," Ishak-Boushaki said. "Another impact will be to determine accurately whether Einstein's general theory of relativity holds at very large scales in the universe. These are very important questions."

Impact on cosmology

Several large scientific surveys aimed at better understanding the universe are in the works, and they will gather gravitational lensing data. These include the Vera C.
Rubin Observatory's Legacy Survey of Space and Time (LSST), the European Space Agency's Euclid mission and NASA's Nancy Grace Roman Space Telescope.

"The big winner here will be these upcoming surveys of gravitational lensing. We will really be able to get the full potential from them to understand our universe," said Ishak-Boushaki, who is a member and a convener of the LSST's Dark Energy Science Collaboration.

The self-calibration method to remove contaminated signals was first proposed by Dr. Pengjie Zhang, a professor of astronomy at Shanghai Jiao Tong University and a co-author of the current study. Ishak-Boushaki further developed the method and introduced it to the realm of cosmological observations, along with one of his former students, Michael Troxel MS'11, PhD'14, now an assistant professor of physics at Duke University. Since 2012 the research has been supported by two grants to Ishak-Boushaki from the National Science Foundation (NSF).

"Not everyone was sure that self-calibration would lead to such an important result. Some colleagues were encouraging; some were skeptical," Ishak-Boushaki said. "I've learned that it pays not to give up. My intuition was that if it was done right, it would work, and I'm grateful to the NSF for seeing the promise of this work."

More information: Eske M. Pedersen et al., "First Detection of the GI-type of Intrinsic Alignments of Galaxies Using the Self-calibration Method in a Photometric Galaxy Survey," The Astrophysical Journal Letters (2020). DOI: 10.3847/2041-8213/aba51b

Citation: Astrophysics team lights the way for more accurate model of the universe (2020, October 15) retrieved 15 October 2020 from https://phys.org/news/2020-10-astrophysics-team-accurate-universe.html
ABSTRACT: We present herein anionic borate-based bis-mesoionic carbene compounds of the 1,2,3-triazol-4-ylidene type that undergo C-N isomerization reactions. The isomerized compounds are excellent ligands for CoII centers. Strong agostic interactions with the C-H groups of the cyclohexyl substituents result in an unusual low-spin square planar CoII complex, which is unreactive towards external substrates. Such agostic interactions are absent in the complex with phenyl substituents on the borate backbone. This complex displays a high-spin tetrahedral CoII center, which is reactive towards external substrates including dioxygen. To the best of our knowledge, this is also the first investigation of agostic interactions through single-crystal EPR spectroscopy. We conclusively show here that the structure and properties of these CoII complexes can be strongly influenced through interactions in the secondary coordination sphere. Additionally, we unravel a unique ligand rearrangement for these classes of anionic mesoionic carbene-based ligands.
https://www.omicsdi.org/dataset/biostudies/S-EPMC7839553
Grief is often associated with the death of a loved one, but following a significant disaster in which an individual experiences a great loss of some kind, it is also a natural and necessary part of the healing process. Disasters such as hurricanes, earthquakes, accidents or wildfires are often immediate and sudden. When faced with such an event, the emotional toll can become overwhelming, which is why many people find themselves trying to regain some kind of normality by addressing the practical day-to-day aspects and suppressing the emotional turmoil they may be feeling.

Emotional Symptoms After a Disaster

The common feelings people experience after a disaster are shock, disorientation and an inability to process distressing information. As these feelings subside, people's thoughts and behaviours start to surface in a variety of emotionally expressive ways:

- Intense or unpredictable feelings, including anxiety, irritability and moodiness.
- Changes to thought and behaviour patterns, which often manifest as repeated and vivid memories of the event, loss of concentration or a change in eating and sleeping patterns.
- Sensitivity to environmental factors, including sounds, smells and other sensations.
- Strained interpersonal relationships and conflicts.
- Stress-related physical symptoms, including headaches, nausea and chest pain.

Finding Ways to Cope With Grief

When tragedy strikes, the aftermath can affect every aspect of our lives, including relationships, our daily routines and even our ability to function effectively at work. Although research shows that most people gain emotional resilience over time after a disaster, the lingering effects of a great loss can turn into deep grief. This process needs to be experienced in whatever natural form it takes. One man who learned that grief was, in fact, a natural part of the healing process was Darin Olien.
Darin is the founder and visionary behind SuperLife and is a globally renowned exotic-superfoods hunter, supplement formulator and environmental activist. During an exploratory expedition in the Amazon, Darin received word that a devastating wildfire had ravaged multiple homes in Malibu, and his was amongst the ashes. He lost everything, and the harsh reality set in: there was nothing to come home to. While many would wallow in their grief, Darin eventually embraced his new reality, which led to a newly kindled clarity and perspective.

Witness Your Grief

Seeing the reality and aftermath of a disaster can be confronting, but it is necessary. Psychologically, while it is emotionally testing, it can bring a tangible conclusion to the forefront, allowing you to understand that events are now out of your control and the grieving process needs to begin. "There wasn't a bug. There wasn't a bird. It was fucking smoke and black-and-white soot. Everything. Everything," Darin explained. "Seeing it is a visceral part of grieving, really seeing that, oh, nothing survived, not even that barn or that bridge." When you begin to acknowledge and recognise that a disaster has taken place, the next step is to allow your pain and emotions to manifest in whatever way they need to appear.

Find New Opportunity Within Your Grief

As strange as it sounds, there is a lot of personal opportunity that can come with grief and drastic change. It can give you the space to re-evaluate your motives, values and core beliefs. In your grief, allow yourself the chance to open up and channel your negative energy towards more creative work. Pursuing creative outlets has been shown to help those suffering from loss or trauma navigate and understand their emotions. "It's weird to have excitement along with loss," says Darin. "The result of this is my life had changed." While those around him were also grieving, his acceptance of his new circumstances had given way to new ideas.
"I wouldn't take it back now because of the depth of who I am, the depth of what I perceive, the depth of my commitment to this planet and to people and health. It has only exponentially gotten stronger, so I would never want that to be taken away," he explained. "I'd burn my house down again to be able to feel how immensely connected I am to myself and to what I'm committed to doing, probably for the rest of my life."

Go Through the Emotions

There is no avoiding fear, pain, anger and grief in this life. They are just part of our story and help to form who we are as individuals. In times of heightened negative emotion and grief, there can often seem to be no light on the horizon, no morning after that will bring you joy, happiness or laughter, but you should trust that it will pass and that whatever emotions you are feeling must be felt. "You're going to deal with all of the emotions. When you have to shut down a business or you failed at something, you have to deal with the emotions, and it's not living in the emotions, it's allowing the emotions to move through without judgment." "Anyone grieving anything, I would just say don't try to push it away. Just face it straight on." Avoiding or bypassing these feelings just gives you permission to carry on as normal, and often you retain your pain, anger and resentment, which can emerge later in an unhealthy and damaging way. When left unaddressed, grief can become ingrained, leaving you perpetually centred within it. As Darin explains, "if you're not willing to face the challenges, the pain, the fear, the anger, the resentment, if you're not willing to face it, this is as good as it gets."

The Psychology of Entrepreneurship

For more insight into navigating grief after a disaster, check out volume 16 of the Psychology of Entrepreneurship podcast, hosted by Ronsley Vaz with special guest Darin Olien.

Author: Ronsley Vaz

Ronsley is the founder & chief day dreamer at AMPLIFY.
He is an author, speaker & serial entrepreneur. He has a Master's degree in Software Engineering and an MBA in Psychology and Leadership. He is known as the creator of We Are Podcast – the first podcasting conference in the Southern Hemisphere – and the host of The Bond Appetit Podcast and Should I Start a Podcast. He has an audience of over 3 million in 133 countries.
https://mustamplify.com/how-to-recover-emotionally-from-a-disaster/
Construction Dispute Law in California

When a property owner and a general contractor contract for a construction project on some property in Richmond, California, whether it's a house, some landscaping, or a remodeling project, there is always a gamble that something can go wrong. In fact, at least a very minor setback may be more likely than not. Most often, owners and contractors can end disputes before they get too serious, eliminating the need for litigation. Most contracts governing construction projects have built-in remedies for the most common problems, typically requiring the party that causes a delay or other problem to pay the other party a set fee. Even if the parties can't easily resolve their disputes and someone else needs to intervene, that somebody doesn't always need to be a judge or jury. Many construction disputes call for mediation, during which a neutral third party helps the parties to the dispute negotiate a settlement. They might also go through arbitration, during which a third party renders a binding decision. Litigating a construction dispute in Richmond, California is definitely not something that anybody likes doing. However, it is sometimes necessary as a last resort.

Examples of Construction Disputes That Might Lead to Litigation in Richmond, California

Delays: Some minor delays in a construction project are all but guaranteed to occur. Typically, if a contract requires a specific completion date, the contractor will give itself longer than the project would take under ideal circumstances, to account for possible delays. Moreover, construction contracts usually attempt to insure against delays, such as by imposing fees on contractors if the project is delayed beyond a certain point. If no such clause is included in a contract, a court will usually award the client damages that could have been reasonably anticipated at the time the contract was entered into.
Owner's refusal to pay: If the contractor finishes a project to specifications and the owner of the property doesn't pay, the contractor will most likely file a lawsuit to recover the agreed-upon price. In such cases, the owner will typically argue that the contractor's work wasn't of acceptable quality, and the court must decide who first breached the contract; the party who did not breach first wins the lawsuit. If the court finds that the contractor breached the contract through sub-quality work, the owner will not be responsible for payment (though he may have to pay for materials and labor), and if the court finds that the construction was acceptable, the owner has to pay, because he is the party in breach.

Subcontractors: With big construction projects, contractors typically hire other, smaller contractors to do some of the work for them. This is typically work of a specialized nature which the general contractor isn't equipped to handle (such as plumbing or electrical wiring). The general contractor is the one responsible for the satisfactory completion of the project. If a subcontractor makes a mistake or causes a delay, the general contractor is ultimately liable to the person who hired them. However, if the general contractor is sued for the mistakes of a subcontractor and loses, it can then sue the subcontractor to recover its losses.

Mechanic's lien: If the contractor wins a lawsuit against the landowner and the court orders the owner to pay for services rendered, the contractor needs a way to secure payment if the owner refuses. In some cases, a mechanic's lien permits the contractor to force the sale of the land, and any improvements to it, in order to secure payment for the services it provided.

Can a Richmond, California Attorney Help?

Disputes over construction delays or defects can be extremely taxing.
Therefore, retaining an efficient Richmond, California real estate attorney might mean the difference between success and failure in your business ventures.
https://realestatelawyers.legalmatch.com/CA/Richmond/construction-disputes.html
The new Wisconsin landlord-tenant law, criminal activity and leases

A reader of the ApartmentAssoc Yahoo Group asks how the new landlord-tenant bill affects leases that attempt to deal with criminal activity. The new law is:

704.44 (9) Allows the landlord to terminate the tenancy of a tenant based solely on the commission of a crime in or on the rental property if the tenant, or someone who lawfully resides with the tenant, is the victim, as defined in s. 950.02 (4), of that crime.

704.44 (9) is a prohibited lease provision. Therefore a lease is void if it allows you to evict the tenant solely because of a crime of which the tenant or an authorized co-occupant was the victim. The provision was meant to be a protection for domestic violence victims. As I read it, an example would be if the tenant's door is broken:

- If the damage was caused by the tenant and you use your lease to evict them for the damage, all is good.
- If the damage occurred during a burglary and you attempt to evict them based on a lease provision, you would fail and possibly be in trouble. It would be wrong even without this law to evict a victim of a crime. 'Oh, you were robbed of your TV and radio – well, now you are losing your home too, because having your door broken during a robbery is a lease violation.' – bad landlord.
- If the door was broken by an ex-boyfriend against whom the tenant had a restraining order, you would most certainly be in trouble. But that is already the current law under 106.50 (5m) (dm).
- On the other hand, if the door was broken during a robbery committed by a rival drug dealer and the police report indicated that, then a lease provision such as a crime-free addendum would be fine.

I agree it is not the perfect language. However, this is far better than the prior prohibition against a lease that permitted you to evict the tenant for a crime they could not have reasonably prevented.
Under the new wording you can evict a tenant if they fail to exercise reasonable control of the property. An example is if the tenant's grandson, who does not live at the property, comes over and sells drugs out of grandma's unit. Under the current law you would have a hard time crafting a lease that would be legal and permit you to evict as a means of stopping the drug traffic, because it is very likely that granny could not reasonably prevent the activity. The only answer would be to wait until the police send an angry letter about the nuisance activity. Under the new law your lease will be able to address this, because granny is not a victim.

His second question is: What happens if a tenant gets a disorderly conduct ticket, or a ticket for possession of marijuana, or a ticket for… Was there a commission of a crime?

You do not have to differentiate between citations and crimes in the above. The tenant was not a victim; they were the perpetrator of the act, and therefore a lease provision that permits eviction would be valid under both the old and new law. Plus, the law acknowledges that criminals are not victims of their own crimes: §950.02 (4) (b) "Victim" does not include the person charged with or alleged to have committed the crime.

However, if the tenant was the victim of what could be considered a criminal act, I think you must treat the tenant as a victim regardless of whether the perpetrator was given a citation or a state charge. So, in the earlier example of the door broken by a burglar, you would be wrong to evict the tenant even if the guy who did it only received a municipal citation.

There will be more on this in a discussion of how to make a crime-free lease or addendum work within this new law.
http://justalandlord.com/the-new-wisconsin-landlord-tenant-law-criminal-activity-and-leases/
The present invention relates to a piezoelectric ceramic composition for actuators. More particularly, it relates to a piezoelectric ceramic composition for actuators which is useful as a material for piezoelectric actuators, particularly one suitable for operation at a high frequency of a level of a few kHz to 100 kHz, among a wide range of applications of piezoelectric actuators, as will be described hereinafter.

Actuators are designed to employ the piezoelectric reverse effect, i.e. an action to convert electrical energy to mechanical energy, to perform a fine displacement of a micron or submicron order by application of a voltage. They have been developed rapidly in recent years for application to e.g. precise control of a sound for e.g. a buzzer, or of a flow rate for e.g. a pump or a valve, precise positioning for e.g. a stepper or an apparatus for the production of semiconductors, and the ultrasonic motor, which has attracted attention as a small-size motor of the next generation.

Heretofore, as a piezoelectric material for actuators, a lead zirconate titanate ceramic composition (PZT) has been known as having excellent piezoelectric characteristics, and various improvements have been made thereon depending upon its particular applications. For example, improvements of the characteristics of the PZT-type piezoelectric material for actuators have been made by e.g. a method of substituting a part of the lead zirconate titanate by e.g. Ba2+, Sr2+ or Ca2+, a method of preparing a solid solution with a composite perovskite such as Pb(Co1/3Ta2/3)O3 or Pb(Ni1/3Nb2/3)O3, or a method of adding an oxide such as WO3, Fe2O3 or NiO.
When a piezoelectric actuator device is operated at a resonant frequency of a level of a few kHz to 100 kHz, as in the case of the ultrasonic motor developed in recent years, the piezoelectric material is required to have a high mechanical quality coefficient (for example, Qm ≥ 1,000) in order to have a large vibrational amplitude in a resonance state and to control heat generation. However, when a conventional high-piezoelectric-strain-constant material (so-called soft material) for actuators is employed, the mechanical quality coefficient (Qm) is low, at a level of a few tens to a hundred, and the loss at the resonance point is vigorous, whereby the input energy cannot effectively be converted to mechanical energy; consequently, there have been problems such that the displacement tends to be small and heat generation tends to be vigorous. Further, the soft high-piezoelectric-strain-constant material usually has a low Curie's temperature (Tc), at a level of from 100 to 150°C, and the heat generation tends to reach the Curie's point, whereby there will be a problem that depolarization takes place, or no displacement takes place.

Also, in a case where a piezoelectric actuator device is operated at a high frequency of a level of a few kHz to a few tens of kHz in a non-resonance state, the above-mentioned soft material has a large dielectric constant (ε33T/ε0) and a large dielectric loss (tan δ) (for example, ε33T/ε0 ≈ 5,000, and tan δ ≈ 2 to 4%), whereby heat generation is vigorous, depolarization takes place for the above-mentioned reason, and the desired displacement tends to be hardly obtainable.
On the other hand, when a so-called hard material having a high Curie's temperature (for example, Tc > 300°C) is employed, the dielectric constant (ε33T/ε0) and the dielectric loss (tan δ) become small (for example, ε33T/ε0 ≈ 500 to 1,000, and tan δ ≈ 0.1 to 1%), but the piezoelectric strain constant decreases substantially; for example, the lateral piezoelectric strain constant (d31) decreases to a level of 50 × 10⁻¹² m/V, whereby in order to obtain a desired displacement, a high voltage will be required, and an expensive amplifier is required for operation at a high voltage and high frequency.

As described in the foregoing, when a piezoelectric actuator is operated at a high frequency of a level of a few kHz to a hundred kHz, it is desired to develop a material having excellent characteristics such that the piezoelectric strain constant is large (for example, the lateral-mode piezoelectric strain constant (d31) is at least 100 × 10⁻¹² m/V), the dielectric constant (ε33T/ε0) and the dielectric loss (tan δ) are small (for example, ε33T/ε0 ≈ 1,000 to 3,000, and tan δ ≈ 0.1 to 1%), and the mechanical quality coefficient (Qm) is high (for example, Qm is at least 1,000). The present invention has been accomplished under these circumstances. It is an object of the present invention to provide a piezoelectric ceramic composition for actuators which is very useful as a piezoelectric actuator material suitable for operation at a high frequency of a level of a few kHz to a hundred kHz and which has a high piezoelectric strain constant, a low dielectric constant, a low dielectric loss and a high mechanical quality coefficient.
Thus, the present invention provides a piezoelectric ceramic composition for actuators, composed of lead, lanthanum, zirconium, titanium, magnesium, zinc, niobium, manganese, chromium and oxygen atoms, which contains manganese in an amount of at most 1.5% by weight as calculated as MnO2 and chromium in an amount of at most 0.5% by weight as calculated as Cr2O3, relative to a main component composition of the formula (I), wherein 0 < x ≤ 0.07, 0.40 ≤ y ≤ 0.65, 0 ≤ q ≤ 1, and 0 < z ≤ 0.40.

Namely, the present inventors have conducted detailed studies to accomplish the above object and, as a result, have found that a composition having a specific composition has a high piezoelectric strain constant, a low dielectric constant, a low dielectric loss and a high mechanical quality coefficient at the same time. The present invention has been accomplished on the basis of this discovery. Now, the present invention will be described in detail with reference to the preferred embodiments.

In the following description, the amount of manganese as calculated as MnO2 and the amount of chromium as calculated as Cr2O3, relative to the main component composition of the formula (I), will be referred to simply as "the amount of MnO2" and "the amount of Cr2O3", respectively. The piezoelectric ceramic composition for actuators of the present invention has a high Curie's temperature, a high piezoelectric strain constant and a high mechanical quality coefficient.

With the composition of the present invention, the larger the amount of MnO2, the larger the mechanical quality coefficient (Qm), provided that the rest of the composition is constant. The composition of the present invention can be classified into two groups, with an MnO2 amount of 0.3% by weight being the border line.
Namely, in a case where the amount of MnO2 is at least 0.3% by weight, preferably from 0.4 to 1.2% by weight, the composition of the present invention exhibits a very large mechanical quality coefficient (Qm), and the loss at the resonance point is small. Accordingly, such a composition is suitable as a material for operation at a high frequency utilizing resonance.

On the other hand, in a case where the amount of MnO2 is less than 0.3% by weight, preferably from 0.05 to 0.25% by weight, it is possible to obtain a ceramic composition having a large lateral-mode piezoelectric strain constant (d31) and a very small dielectric loss (tan δ), although the mechanical quality coefficient (Qm) is not so large. Such a composition is suitable as a material for operation at a high frequency in a non-resonance state.

Further, in the composition of the present invention, the content of lanthanum influences the Curie's temperature. Namely, the smaller the amount of lanthanum, the higher the Curie's temperature. Here, the amount of lanthanum is preferably such that x in the formula (I) is at most 0.05. The Curie's temperature is an index for the limit of high-temperature use. The limit for high-temperature use varies depending upon the particular purpose of the piezoelectric ceramic composition for actuators. Therefore, it cannot necessarily be said that the higher the Curie's temperature, the better. However, it is of course preferred that the Curie's temperature be high, so long as other physical properties are the same. Conversely, the physical properties of piezoelectric ceramic compositions for actuators should be compared among those showing a Curie's temperature suitable for the particular purpose. Now, preferred embodiments of the present invention will be described.
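The two-group classification by MnO2 content described above can be sketched as a short check. This is an illustrative sketch, not part of the patent; the function name is invented, and the 0.3 wt% border line and the 1.5 wt% upper limit are taken from the text:

```python
# Illustrative sketch (not from the patent): classify a composition by its
# MnO2 content, per the 0.3 wt% border line described in the text.
def classify_by_mno2(mno2_wt_pct):
    if not 0 < mno2_wt_pct <= 1.5:
        # The text limits the MnO2 amount to at most 1.5% by weight.
        raise ValueError("MnO2 amount must be positive and at most 1.5 wt%")
    if mno2_wt_pct >= 0.3:
        # Very large Qm and small loss at the resonance point:
        # suitable for high-frequency operation utilizing resonance.
        return "resonance"
    # Large d31 and very small tan(delta), but modest Qm:
    # suitable for high-frequency operation in a non-resonance state.
    return "non-resonance"

print(classify_by_mno2(0.5))   # 0.5 wt%, as in Example 1 -> resonance
print(classify_by_mno2(0.15))  # 0.15 wt%, as in Example 9 -> non-resonance
```

The threshold comparison mirrors the text's wording: "at least 0.3% by weight" falls in the resonance group, "less than 0.3% by weight" in the non-resonance group.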
① x = 0.02, y = 0.50, z = 0.10, q = 0.7, the amount of MnO2 = 0.5 wt%, the amount of Cr2O3 = 0.1 wt% (Example 1 given hereinafter)
② x = 0.03, y = 0.50, z = 0.10, q = 1.0, 0.5 or 0.0, the amount of MnO2 = 0.5 wt%, the amount of Cr2O3 = 0.1 wt% (Examples 2, 3 and 4 given hereinafter)
③ x = 0.03, y = 0.50, z = 0.10, q = 0.5, the amount of MnO2 = 0.5 wt%, the amount of Cr2O3 = 0.3 wt% (Example 5 given hereinafter)
④ x = 0.04, y = 0.52, z = 0.10, q = 0.7, the amount of MnO2 = 0.5 wt%, the amount of Cr2O3 = 0.1 wt% (Example 6 given hereinafter)

The ceramic composition of the present invention has a high Curie's temperature, a high piezoelectric strain constant and a high mechanical quality coefficient. Particularly, those having the above compositions ① to ④ in the formula (I) have a Curie's temperature (Tc) of at least 250°C, a lateral-mode piezoelectric strain constant (d31) exceeding 100 × 10⁻¹² m/V and a mechanical quality coefficient (Qm) as high as at least 1,000, and thus they are suitable as materials for high-frequency operation utilizing resonance, such as an ultrasonic motor.

Further, those having the compositions listed below in the formula (I) have a Curie's temperature (Tc) of at least 200°C and a lateral-mode piezoelectric strain constant (d31) exceeding 200 × 10⁻¹² m/V, and thus they are particularly preferred. With a conventional composition having a lateral piezoelectric strain constant (d31) exceeding 200 × 10⁻¹² m/V, the dielectric loss (tan δ) is usually as large as from 2 to 3% (see Comparative Examples 5 to 7 given hereinafter). Whereas, with the products of the present invention (see Examples 9, 10 and 11 given hereinafter), the dielectric loss (tan δ) is as low as from 0.3 to 0.5%, which is from 1/4 to 1/10 of the dielectric loss of the conventional products, and thus they are suitable as materials for actuators for high-frequency operation.
- x = 0.02, y = 0.48, z = 0.28, q = 0.7, amount of MnO2 = 0.15 wt%, amount of Cr2O3 = 0.1 wt% (Example 9 given hereinafter)
- x = 0.03, y = 0.50, z = 0.16, q = 0.15, amount of MnO2 = 0.15 wt%, amount of Cr2O3 = 0.1 wt% (Example 10 given hereinafter)
- x = 0.04, y = 0.52, z = 0.16, q = 0.5, amount of MnO2 = 0.15 wt%, amount of Cr2O3 = 0.1 wt% (Example 11 given hereinafter)

If x in the formula (I) exceeds 0.07, the Curie temperature (Tc) will be at most 150°C, whereby the upper limit of the operational temperature of the device will be at a level of from 60 to 70°C, and thus such a material is not practically useful. Besides, the lateral mode piezoelectric strain constant (d31) is so small as to be undetectable by a resonance-antiresonance method, and thus the material is not suitable for actuators (Comparative Example 8 given hereinafter). If z in the formula (I) is at least 0.40 (Comparative Example 1 given hereinafter), a pyrochlore phase tends to be present in the sintered body in addition to the perovskite phase, whereby the lateral mode piezoelectric strain constant (d31) tends to be low, which is undesirable. Further, if the amount of MnO2 exceeds 1.5% by weight (Comparative Example 2 given hereinafter), abnormal grain growth tends to occur during sintering, the density of the sintered product tends to be low, and dielectric breakdown is likely to result during polarization, which is undesirable. Accordingly, the amount of MnO2 is adjusted to be at most 1.5% by weight, preferably from 0.05 to 1.2% by weight.
When Cr2O3 is not added (Comparative Example 3 given hereinafter), the lateral mode piezoelectric strain constant (d31) and the mechanical quality coefficient (Qm) are small, and when the amount of Cr2O3 exceeds 0.5% by weight (Comparative Example 4 given hereinafter), no effect of the addition of chromium is observed in either the lateral mode piezoelectric strain constant (d31) or the mechanical quality coefficient (Qm). This is apparent from the comparison with Examples 3 and 5, wherein the proportions of x, y, z and q and the amount of MnO2 are the same. Thus, the amount of Cr2O3 should usually be at most 0.5% by weight, preferably from 0.002 to 0.5% by weight. The ceramic composition of the present invention may be prepared, for example, by weighing oxide starting materials of the respective constituent elements to obtain a prescribed blend composition, mixing them in a wet system by e.g. a ball mill, calcining the mixture, followed by pulverization, and sintering the obtained powder at a temperature of from 1,100 to 1,300°C. Now, the present invention will be described in further detail with reference to Examples and Comparative Examples. However, it should be understood that the present invention is by no means restricted to such specific Examples. PbO, La2O3, TiO2, ZrO2, MgO, ZnO, Nb2O5, MnO2 and Cr2O3, high-purity oxide starting materials having a purity of at least 99.9%, were weighed in prescribed proportions and subjected to wet-system mixing for 24 hours by means of a ball mill. The mixture was dried, molded and calcined at 900°C for two hours. Then, it was pulverized in a mortar and again pulverized in a wet system for 24 hours by a ball mill. The obtained powder was hydrostatically press-molded by a rubber press method and then sintered at 1,200°C in a lead atmosphere. Then, the obtained sintered body was processed into a disk and a rod by a slicing machine.
Then, an Ag paste was screen-printed and subjected to electrode baking at 550°C. Polarization treatment was conducted in silicone oil at a temperature of from 80 to 120°C under an electric field intensity of from 2.0 to 4.0 kV/mm for from 5 to 20 minutes. Upon expiration of one day, various piezoelectric properties, such as the dielectric constant at 1 kHz (ε33T/ε0), the dielectric loss at 1 kHz (tan δ), the mechanical quality coefficient (Qm), the lateral mode electromechanical coupling coefficient (k31) and the lateral mode piezoelectric strain constant (d31), were measured by a resonance-antiresonance method by means of a vector impedance analyzer. Further, the temperature characteristic of the relative permittivity was measured, and from the maximum of the relative permittivity, the Curie temperature (Tc) was obtained. The results are shown in Tables 1 to 3. As described in the foregoing, the piezoelectric ceramic composition for actuators according to the present invention simultaneously has a high electromechanical coupling coefficient, a high piezoelectric strain constant, a low dielectric constant, a low dielectric loss, a high mechanical quality coefficient and a high Curie temperature, and it can be used effectively in various piezoelectric materials. Particularly, the piezoelectric ceramic composition for actuators according to the present invention shows excellent properties as a material for piezoelectric actuators to be operated at a high frequency of a few kHz to a hundred kHz, and it is very useful for industrial applications. EXAMPLES 1 TO 13 AND COMPARATIVE EXAMPLES 1 TO 8
Various approaches are currently rethinking the role of human life in the ecosphere, highlighting the interconnectedness between the social sphere and the ecosphere and posing new ethical questions. In our panel, we invite studies on the relation between ecological challenges and social change. Currently, various approaches are engaged in rethinking the role of human life in the ecosphere, indicating the increasing attention being paid by practitioners, researchers, and philosophers to the interconnectedness between the social, the ecological, and the spiritual sphere. At the same time, problems of environmental justice and/or public conservation policies often become contested ethical issues between local communities and other major political forces, shaping both social lives and places through a range of different power relations. Meanwhile, anthropological research on new forms of environmental activism related to concepts such as bioregionalism, permaculture principles, 'degrowth' (Latouche) and 'deep ecology' (Naess) gives new input on global ethical challenges and is progressively gaining visibility within the discipline. In our panel, we invite studies that discuss practices of environmental justice and engage with creative alternatives that rethink the complex relations between social and ecological life. We want to reflect both on the effects, strategies and implications of environmental activism and on cases where there seems to be a clash between what people do with their places and general ecological and ethical concerns. How are conventional conservation policies (such as the management of nature parks) affecting social life? How are they contested? How can environmental justice activism (such as in ecovillages) mobilize new social forces? To what extent can new forms of environmental activism (such as the movement for degrowth) produce social change?
How can scholars position themselves toward environmental activism, caught as they often are in an ambivalent relation of academic distance and more or less openly expressed sympathy for its causes? Street-by-street energy efficiency roll-out schemes: a practical alternative for fighting climate change with social fairness?
https://www.nomadit.co.uk/sief/sief2011/panels.php5?PanelID=778
The last trilogue negotiation with the former Maltese Presidency of the Council, now in the hands of Estonia as of 1 July, took place last week. Commenting on the outcome of the meeting, MEP Roza Thun, Rapporteur for the European Parliament, expressed her disappointment that some Member States in the Council could not agree on certain specific provisions of the draft law. In a blog post that appeared on the website of the EPP Group, MEP Thun declared that EU negotiators agreed to resume the work with the Estonian Presidency, aiming at finding workable solutions, namely on the following aspects:

- Discrimination in digital services, such as e-books, music, video games and software: the Parliament is in favor of including them in the scope if the trader has acquired licenses in the relevant EU countries, while the Council would prefer to leave such services out of the scope.
- Rules prohibiting passive sales restrictions.
- The review of this regulation.

Ecommerce Europe has always had a constructive approach towards the Proposal for a Regulation on geo-blocking. Nevertheless, the European e-commerce association has always called on policymakers to ensure that the Regulation will not put at risk online merchants' right to economic and contractual freedom. In April 2017, the IMCO Committee of the European Parliament, in charge of this proposal, adopted its report on the Regulation. In Ecommerce Europe's opinion, the work done by MEP Thun and the IMCO Committee improved many aspects of the text that had created concerns for online merchants. Nevertheless, Ecommerce Europe believes it is a missed chance that some pragmatic and workable solutions, proposed by the Rapporteur in her draft report, were not retained in the final version of the text adopted by IMCO in April.
Ecommerce Europe analyzed the IMCO Report and listed its positive elements and remaining concerns, namely potential negative impacts on traders if they had to cover additional costs of shipping, and issues regarding the applicable law and rerouting (for more details, please refer to p. 13-14 of the "Ecommerce Europe's Manifesto", available here).

Next steps

Most probably, an agreement on the Geo-blocking Regulation will not be possible before autumn 2017. In the meantime, Ecommerce Europe stays in close contact with EU policymakers in order to ensure that the perspective of online merchants is duly taken into account during the next rounds of negotiations.
https://www.ecommerce-europe.eu/news-item/informal-agreement-geo-blocking-regulation-gets-delayed/
Fisher Wallace Blog: Treating Depression, Anxiety and Insomnia! Do you, or does someone you know, feel "blue" while managing a productive life? It could be high functioning depression. High functioning depression belongs to a group of disorders referred to as "depression" or "depressive disorders". The term is used to describe people who experience consistent symptoms of depression for a long period of time (over 2 years) but continue to complete the activities necessary to lead functional lives. They work and often have thriving careers. They take care of their families, their homes and their appearance while struggling with feelings of negativity, sadness, and despair. The symptoms are examined in more detail below. Why is high functioning depression unique? Depressive disorders are associated with feeling "blue" and a lack of motivation. This can impact lives in many different ways. Sometimes, it can mean that it takes extra effort to complete the tasks of daily living, including preparing and eating healthy meals, maintaining a clean household, maintaining personal hygiene or completing logistical tasks, like paying the bills. For certain individuals, these tasks are consistently passed over. When these tasks are not being completed on a regular basis, it may become apparent to others, who can, in turn, express concern and offer support. On the other hand, individuals experiencing high functioning depression may not shirk their responsibilities, even though their chores may feel like a heavy burden. They push themselves to complete the daily tasks of living while facing other, often devastating, symptoms of depression. This is referred to as "high functioning depression". Individuals grappling with the disorder experience symptoms of depression but continue to lead productive lives. Can I be diagnosed with high functioning depression? High functioning depression is not diagnosed as a distinct type of depression.
It is a lay term for a specific experience of "persistent depressive disorder", which is a relatively new diagnosis. What is now called persistent depressive disorder was formerly two categories: chronic depressive disorder and dysthymia. Although high functioning depression is not a diagnosis, it is a term that can help us better understand the experience of certain individuals. To be diagnosed with persistent depressive disorder, a person must report that they feel "down" most of the time. This report comes either from the person experiencing the symptoms or from their loved ones. In the case of high functioning depression, the person experiencing the symptoms may be able to hide their feelings, making it difficult for others to notice that they are feeling "blue". This adds extra difficulty in identifying individuals who may be in need of help. It is important that people disclosing these types of feelings are offered an empathetic ear and treatment, if necessary, regardless of how "together" they may seem. How common is high functioning depression? Estimates of the incidence of high functioning depression are not currently available. However, according to the American Psychiatric Association, approximately 1 in 200 people may experience persistent depressive disorder within any given 12-month period. It is a common mistake to assume that individuals who are leading productive lives are also happy. This is not always the case. Individuals experiencing high functioning depression may accomplish many things over the years but feel little joy in those accomplishments, regardless of the magnitude or impact of their deeds. It is not necessary for an individual to experience extreme mood swings or thoughts of death and suicide in order to receive treatment for a depressive disorder. People who are experiencing high functioning depression often appear, from the outside, to have happy and productive lives. They may be able to "put on a happy face" while interacting with others.
However, their inner dialogue may be rife with negativity. Problems are perceived as insurmountable when individuals don't believe they have the resources to cope with them. This is a result of the low self-esteem and self-criticism common to depressive disorders. Often, people experiencing high functioning depression feel like they're wasting time: they keep themselves busy but feel hopeless when it comes to the idea of building a happy life. While major depressive disorder can be associated with thoughts of death or suicidal ideation, persistent depressive disorder is associated with a general sense of hopelessness. This doesn't make it any easier. The long-lasting nature of the disorder is relentless. Major depressive disorder can go into remission in between depressive episodes, leaving the individual to experience a symptom-free life for a time, while persistent depressive disorder tends to continue for longer periods, with remission times less than 2 months in duration. The typical healthy adult requires 7-9 hours of sleep nightly (Hirshkowitz et al., 2015), but individuals experiencing depressive symptoms may find it difficult to regulate their sleep. Some may sleep more than this, while others may sleep less. Regardless of sleeping difficulties, people who are depressed often find that they experience fatigue (Demyttenaere, De Fruyt & Stahl, 2005) and have very little motivation to complete any task at all. This is often the reason that so many depressed individuals neglect simple daily tasks. People with high functioning depression are not exempt from this feeling. However, these individuals may continue to complete their tasks in the face of these symptoms. That doesn't mean it's easier for them or that they are less deserving of treatment than those who do not complete these types of tasks. Eating well can be a great pleasure. It can also be a lot of work.
With depression, sometimes individuals will indulge too often and gain weight (Simmons et al., 2016). Alternatively, individuals may lose the motivation to eat and lose weight (Simmons et al., 2016). When supporting an individual experiencing depressive symptoms, it is prudent to encourage healthy eating habits. On occasion, it is the cognitive symptoms associated with depressive disorders that lead individuals to seek help. With persistent depressive disorder, memory can be impaired (Yoon, LeMoult, & Joormann, 2014), such that people may walk into a room to fetch an item but forget which item they were seeking. They may forget where they are headed while driving or fail to remember all of the items on their grocery list. These troubles may seem common and innocuous, but when they happen frequently, they have a significant impact on daily functioning. What can be done about high functioning depression? If you or someone you know is experiencing the symptoms described above, there are many things that can ease them. Before undergoing any of these proven depression-relief methods, please consult with a professional. Also known as talk therapy, psychotherapy helps patients reach awareness of their own thoughts, feelings, behavior, and mood. You will work with a licensed therapist to map out your personal history and identify the foundational issues triggering your depression. There are many kinds of psychotherapy, including one-on-one, group, and family therapy. Cognitive Behavioral Therapy (CBT) aims to alter a patient's negative thinking patterns. The therapist will work with you to disarm recurrent depressive thoughts and feelings before they can take hold. It is a strategy often used to treat drug and alcohol abuse, anxiety, insomnia, and depression. Once your therapist helps you identify your session goals, he or she will work with you for several months to refocus your thought processes, derail bad habits, and approach your emotional problems from new angles.
Simply put, CBT helps the patient gain clarity about what they're feeling emotionally. Ideally, the therapy helps to uproot negative automatic thoughts so that the patient can solve problems without drowning in emotion. Serotonin is a chemical that helps facilitate neurological messages between cells. A stable level of serotonin helps regulate sleep, appetite, memory, learning, and mood/level of perceived happiness. Individuals with lower serotonin levels may experience a higher level of depression, since the neural connections don't have enough messengers, so to speak. So, how can we improve that serotonin level? Handheld pulse generators like the FDA-cleared Fisher Wallace Stimulator® are designed to enhance the production of serotonin and lower the stress hormone cortisol over time. The device is used to treat not only high-functioning depressive symptoms but also insomnia, post-traumatic stress, panic, and anxiety. The Fisher Wallace Stimulator is also FDA-cleared to treat bodily pain. Cranial electrotherapy stimulators are very easy to use and comfortable, and research supports their effectiveness. Consult your doctor if you are interested in trying out this method for enhanced depression relief. In some cases, depressive symptoms are far too rooted and overwhelming to be alleviated by exercise, diet, or good sleep. Different symptoms call for different approved medications: antidepressants, anti-anxiety medication, mood stabilizers, and antipsychotic medication. Though a certain ailment may be incurable, drugs can assist someone experiencing debilitating symptoms. Be aware that a spectrum of side effects may occur with these medications. Often, medication is prescribed to patients with hereditary depression and anxiety. Unfortunately, some people naturally don't produce enough serotonin. Consult your doctor, psychologist, or psychiatrist if you would like to try prescription medication.
It's important to note that, while drug therapy benefits many, you may not be compatible with certain drugs. Each prescription is slightly different and prompts a different physiological response from the user. Often, simple trial-and-error administration is the best course of action for you and your doctor.

References

Demyttenaere, K., De Fruyt, J., & Stahl, S. M. (2005). The many faces of fatigue in major depressive disorder. The International Journal of Neuropsychopharmacology, 8(1), 93-105.
Hirshkowitz, M., Whiton, K., Albert, S. M., Alessi, C., Bruni, O., DonCarlos, L., ... & Neubauer, D. N. (2015). National Sleep Foundation's sleep time duration recommendations: methodology and results summary. Sleep Health, 1(1), 40-43.
Klein, D. N., & Black, S. R. (2013). Persistent depressive disorder. Psychopathology: History, Diagnosis, and Empirical Foundations, 334.
Nemeth, V. L., Csete, G., Drotos, G., Greminger, N., Janka, Z., Vecsei, L., & Must, A. (2016). The effect of emotion and reward contingencies on relational memory in major depression: an eye-movement study with follow-up. Frontiers in Psychology, 7.
Simmons, W. K., Burrows, K., Avery, J. A., Kerr, K. L., Bodurka, J., Savage, C. R., & Drevets, W. C. (2016). Depression-related increases and decreases in appetite: dissociable patterns of aberrant activity in reward and interoceptive neurocircuitry. American Journal of Psychiatry, 173(4), 418-428.
Yoon, K. L., LeMoult, J., & Joormann, J. (2014). Updating emotional content in working memory: A depression-specific deficit? Journal of Behavior Therapy and Experimental Psychiatry, 45(3), 368-374.
https://www.fisherwallace.com.br/blogs/saude-mental/do-i-have-high-functioning-depression
- Pages: 352 pp.
- Size: 6" x 9"

"A large proportion of our present higher animals are nothing other than human beings who were so entangled in their passions that they became hardened and ceased to evolve further. The animals came into being as a consequence of the hardening of human passions. The feelings experienced by an individual who looks about him with real occult understanding are as follows: In the course of becoming a human being, I have passed through what I encounter today in the form of lions and snakes. I have lived in all these forms, because my own inner being has been involved with the traits that are expressed in these animal forms." —Rudolf Steiner (Universe, Earth, and Man, p. 94)

As human beings, what is our true relationship to the animals on earth? What is our responsibility to our fellow creatures? Douglas Sloan explores these and other questions in this important book on the human-animal connection. His explorations are based on personal experience and wide-ranging research into the work of Rudolf Steiner and others, including scientists who study the inner life of animals and committed defenders of animal wellbeing. Rudolf Steiner describes how, from the beginning of creation, humans and animals have been united in deep kinship. A loss of the sense of this human-animal connection has resulted in immense animal suffering the world over. Especially now, in their suffering, animals pose many pressing and perplexing questions for modern humankind; these questions constitute the primary focus of this book, as does Rudolf Steiner's vision of the ultimate redemption of the animals from their suffering. What is the nature of this redemption? What is our responsibility in making it happen?
Exploring these and related questions with the help of Rudolf Steiner's work and that of others on the issue, we begin to see the importance today of relating to animals in a completely new way: a relationship that understands and respects the animals' inner spiritual being, and one that requires a deep grasp of our own spiritual being in relation to theirs. Douglas Sloan helps us toward this new relationship with animals, both conceptually and through our everyday actions.

Contents:
Introduction
Part I: Two Perspectives on Evolution
1. Darwinian Perspective
2. Evolution from the Perspective of Rudolf Steiner
Part II: The Inner Life of Animals
1. Introduction
2. Body, Soul, and Spirit: Human and Animal
3. Animal Group Soul and Individual Souls
4. Animal Play (Again)
5. Animal Emotions beyond Play
6. Group Soul and Group "I"-being
7. Higher Capacities among Certain Animals
8. Difference in Kind and/or Difference in Degree
9. Evil
Part III: Animal Rights
1. Introduction
2. Do Animals Have Rights?
https://steiner.presswarehouse.com/browse/book/9781584201946/The-Redemption-of-the-Animals
Who Is and Who Isn't Getting Vaccinated by Income Nationwide, 207.7 million people, or 62.5% of eligible U.S. residents age 5 or older, were fully vaccinated as of Jan. 9, according to the Centers for Disease Control and Prevention's COVID Data Tracker. More than 75 million, or 36%, had received a booster shot. Although the percentage of people receiving two doses of an mRNA vaccine or one jab of the single-dose vaccine is encouraging in the fight against the viral disease, it still leaves almost 40% of the U.S. population without any protection against COVID-19, or with the reduced protection that only one dose provides. Less encouraging is the relatively low percentage of those receiving the much-recommended booster shot. Numerous public sentiment surveys have tried to uncover the reasons behind the lack of vaccine take-up in a large portion of the population. A recent one pointed to income level as one possible factor in lower vaccination rates. To determine who is and is not getting vaccinated by income, 24/7 Wall St. reviewed the U.S. Census Bureau's Household Pulse Survey. The survey gathered data on the vaccination status of Americans who are 18 and older based on a number of characteristics, including household income. For each income level, the survey reported the total population as well as totals for different vaccination statuses. We calculated the percentages of those unvaccinated, those who received at least one dose and those with three or more doses. The survey highlights the correlation between income and vaccination status: the higher the income level, the more likely a person is to be vaccinated. Conversely, as household incomes decrease, the unvaccinated percentage rises. At the highest income level noted in the survey, $200,000 and above, the vaccination rate stands at 94.2% with at least one dose and 54.3% with three jabs. A scant 5.5% are unvaccinated. The unvaccinated rate rises nearly four-fold to 20.6% for those making less than $25,000.
While nearly 80% of the population at that income level received one dose, less than a quarter had all three jabs, less than half the level at the highest income bracket. Although questions about possible side effects and mistrust of the government as well as the vaccine itself ranked in the top three spots for vaccine hesitancy, 1.9%, or 651,339 respondents, said cost concerns kept them from getting the jab. What's surprising is that the vaccines are free, so in theory, how much a person makes shouldn't prevent them from getting a dose. But there could be other financial reasons beyond just the cost of the shot. People must take time off from work to get the jab, and lower-income workers might not want to lose a day's pay. Could giving lower-wage workers a paid day off for the jab boost the vaccination rate? A survey done over the summer by the Kaiser Family Foundation suggests so: some 73% of workers whose employers support vaccination or offer paid time off reported getting at least one shot. In workplaces less encouraging of vaccines or that don't offer paid time off, the percentages fell to 41% and 51%, respectively.
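The percentage calculation described above (unvaccinated, at least one dose, three or more doses, each as a share of the bracket's total) can be sketched in a few lines of Python. The counts below are illustrative only, chosen to reproduce the article's figures for the top income bracket; they are not the actual Household Pulse Survey totals.

```python
# Sketch of the per-bracket percentage calculation described in the article.
def vaccination_rates(total, unvaccinated, one_plus_doses, three_plus_doses):
    """Return each status count as a percentage of the bracket total,
    rounded to one decimal place."""
    pct = lambda n: round(100 * n / total, 1)
    return {
        "unvaccinated": pct(unvaccinated),
        "at_least_one_dose": pct(one_plus_doses),
        "three_or_more_doses": pct(three_plus_doses),
    }

# Hypothetical bracket of 1,000 respondents: 55 unvaccinated,
# 942 with at least one dose, 543 with three or more doses.
rates = vaccination_rates(1000, 55, 942, 543)
print(rates)
# {'unvaccinated': 5.5, 'at_least_one_dose': 94.2, 'three_or_more_doses': 54.3}
```

With these made-up counts the output matches the 94.2% / 54.3% / 5.5% split the article reports for the $200,000-and-above bracket.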
At DKU tournaments we always let the players play 4 rounds on different courses. If there are more than 300 players, we use 8 courses, but a player only plays on 4 of them. The players are divided into matches of 4 or 5 players, and there is a neutral referee. We start at 9:00 and the first 4 rounds end around 15:30.

Stroke play

As of 2017 and 2018 we are changing the rules. From August 2017 (the World Championships), the first 4 rounds count in stroke play, ladies and gentlemen mixed. No 1-2-3 ladies and 1-2-3 gentlemen will be honored. If there are more than 300 players and 8 courses, we calculate each course's average strokes, and then calculate each player's average relative to the courses they played. If there are 300 players or fewer, we simply count the strokes.

Match play

From August 2018 we take the 36 best ladies and the 36 best gentlemen from stroke play and divide them into 6 semifinals of 6 players for each gender. The 6 winners enter the final, where we find the champion: one for ladies and one for gentlemen.

Pairs

From June 2018 we also change the rules for pairs. Here we divide into 3 groups with 16 pairs in each: one group for ladies only, one for gentlemen only, and one for mixed pairs. In each group there are 4 semifinals with 4 pairs each. The 4 winners enter the final. We end up with 3 winners: ladies, gentlemen, and mixed pairs.
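The course-averaging rule for fields over 300 players is stated only briefly above. One plausible reading, sketched here in Python, is that each round is divided by that course's average stroke count, and a player's result is the mean of those ratios over the 4 courses played, so that playing harder courses does not penalize a player. The function names and data shape are purely illustrative, not part of the official rules.

```python
# One possible interpretation of the DKU course-averaging rule.
# rounds: list of (player, course, strokes) tuples.
def course_averages(rounds):
    """Return a dict mapping each course to its average stroke count."""
    totals, counts = {}, {}
    for _, course, strokes in rounds:
        totals[course] = totals.get(course, 0) + strokes
        counts[course] = counts.get(course, 0) + 1
    return {c: totals[c] / counts[c] for c in totals}

def player_score(player, rounds):
    """Mean of the player's rounds, each normalized by its course average.
    Lower is better; 1.0 means exactly average on every course played."""
    avgs = course_averages(rounds)
    ratios = [s / avgs[c] for p, c, s in rounds if p == player]
    return sum(ratios) / len(ratios)

rounds = [("A", "c1", 30), ("B", "c1", 40), ("A", "c2", 20), ("B", "c2", 20)]
print(player_score("A", rounds))  # below 1.0: A beat the course averages
```

Here course c1 averages 35 strokes and c2 averages 20, so player A's 30 on c1 counts for more than a raw stroke comparison would suggest.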
http://www.krolf.dk/gb/index.php/championship-rules
Between 1950 and 2018, the global urban population grew from 0.8 billion to an estimated 4.2 billion. This unprecedented shift from rural to urban living is expected to continue, with projections estimating that by 2050, 68% of the global population will be living in concentrated urban areas (1). The growth of urban centers comes with significant changes to land cover, as forests and grasslands give way to artificial surfaces that prevent water infiltration, such as pavement, asphalt, and rooftops. Quantifying the current and projected extent of impervious surface coverage is an important metric, as it is a tangible measure of urbanization and a key environmental indicator for many issues in the urban environment, such as urban heat islands, surface runoff and flooding, and air pollution (2). Quantifying impervious surface coverage will allow cities to accurately address these urban environmental issues and contribute to improved land use planning to build a more sustainable and resilient urban ecosystem.

Industry Practicum

As part of the Advanced Diploma in Geographic Information Systems at BCIT, I have started working with the City of Maple Ridge to create a process for quantifying pervious/impervious surface coverage using Esri technology. The City of Maple Ridge is located in the northeastern section of Greater Vancouver, on the unceded territory of the Kwantlen and Katzie First Nations People. Situated between the Fraser River to the south and the Agricultural Land Reserve (ALR) to the north and east, the city is constrained to a narrow development corridor. Despite this, Maple Ridge is one of the fastest growing communities in British Columbia, with a 5-year growth rate of 8.2%, making current and future land use an essential component of city planning and decision making.

Urban Landcover Mapping

Traditionally, reliable mapping of urban impervious surfaces was acquired by ground surveys.
These surveys were time-consuming and labor-intensive, and the inability to supply real-time data restricted their use in city planning and development monitoring. The introduction of remote sensing technology offered a unique and innovative approach to mapping, interpreting, and monitoring urban landscapes. To date, landcover mapping for urban regions generally falls into two categories based on spatial resolution: a moderate-resolution approach and a very-high-resolution approach. The very-high-resolution approach utilizes multispectral and hyperspectral imagery with a spatial resolution <2 m and, more recently, airborne laser scanning such as LiDAR. This approach is continuously improving, and data fusion is becoming commonplace, particularly of spectral, thematic, and 3-D structure data. It is this data fusion approach that will be used in conjunction with Esri's machine learning technology to create an end-to-end land cover classification workflow utilizing WorldView-2 multispectral imagery and LiDAR-derived data.

Machine Learning and AI

Esri has provided machine learning and AI technology within ArcGIS in the past to model spatial relationships, make data-driven decisions, and perform image classification. Recently, however, they have been developing tools and workflows to harness a subset of machine learning, deep learning, which utilizes convolutional neural network (CNN) algorithms to train computers to do tasks and solve problems. Esri allows users to develop and train their own deep learning models, or to use Esri's pretrained models. Esri currently offers 27 different pretrained deep learning models on the ArcGIS Living Atlas. These models are pretrained on large volumes of data and help accelerate workflows designed specifically for image feature extraction, land-cover classification, and detecting objects. As the practicum is still in progress, the model is constantly being developed and fine-tuned for more precise and accurate deliverables.
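Whatever model produces the classified raster, the final quantification step is a pixel count. The minimal sketch below shows that step with NumPy on a toy array; the class codes are hypothetical and not the actual classification schema used in the practicum.

```python
import numpy as np

# Hypothetical class codes for the impervious categories,
# e.g. 3 = pavement/asphalt, 4 = rooftops.
IMPERVIOUS_CLASSES = [3, 4]

def impervious_fraction(classified):
    """classified: 2-D integer array of land-cover class codes.
    Returns the fraction of pixels labeled impervious."""
    mask = np.isin(classified, IMPERVIOUS_CLASSES)
    return mask.sum() / classified.size

# 4x4 toy raster: 6 of its 16 pixels carry an impervious class code.
raster = np.array([
    [1, 1, 3, 3],
    [1, 2, 3, 4],
    [2, 2, 4, 4],
    [1, 2, 2, 1],
])
print(impervious_fraction(raster))  # 0.375
```

On real output this would run on the classified raster exported from ArcGIS (read in as an array), and the fraction times the city's area gives impervious coverage in square kilometres.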
The second blog post in this series will explore the nitty-gritty of developing a deep learning model for land use classification and extracting impervious surfaces using Esri technology, and will explain in depth the technical components, required datasets, detailed workflow, and common mistakes along the way.

References

(1) United Nations, Department of Economic and Social Affairs, Population Division. World Urbanization Prospects: The 2018 Revision (United Nations, 2018).

(2) Luo, H., et al., 2018. An Improved Method for Impervious Surface Mapping Incorporating LiDAR Data and High-Resolution Imagery at Different Acquisition Times. Remote Sensing, 10 (9), 1-27.
https://ecce.esri.ca/bcit-blog/2022/05/14/urban-landcover-mapping-utilizing-deep-learning-to-extract-impervious-surfaces/
Digital Transfer Printing makes it possible to print a full-colour, digital print onto various substrates. A digital print is printed onto a special transfer paper, and the product that is being branded is treated with a chemical before the logo is applied. The transfer paper is then placed onto the product, transferring the ink from the paper onto the product. At this stage the branded product is treated with a sealing chemical to ensure the permanency of the logo. DIGITAL TRANSFER PRINTING BENEFITS:
http://www.chilliprint.co.za/digital-transfer-printing/
Saturday, March 21, 2009 Funeral Foods Around the World • Every culture has different traditions and customs when it comes to funerals. • With those differences also comes an array of foods.

The Greeks • Priests were presented with a dish of corn cooked with sugar (the corn, they say, was for resurrection and the sugar… heavenly bliss) Kollyva Paximadia

The Irish • One of the more eventful wakes, in which prayers are often exchanged for whiskey, snuff, and tobacco. • The Irish Church tried to ban alcohol from wakes but was unsuccessful. • Friends and neighbors bring a cake or a plate of sandwiches.

East Africa • Require lots of food (because of lots of family members) • Funerals a huge expense • Most East Africans make only $1 a day. Funerals are a major financial burden. • Interesting to note, in another community in Central Province, those attending a funeral take food with them and the mourners have to pay to eat. This is seen as a contribution, rather than a financial transaction of buying and selling of goods.

North America Funeral Foods in America: • Southern Funerals: Funeral Fried Chicken and Macaroni and Cheese. • Funeral Potatoes: a cheesy hash brown casserole. Funeral Potatoes are so common at Mormon funerals in Utah that they are commonly called Mormon Potatoes.

17 comments: I think the Irish traditions are the most interesting. Prayers exchanged for whiskey, snuff, and tobacco?? Also, it's almost funny that the Irish church tried to ban alcohol from the wakes but was unsuccessful. It seems like kind of a bad thing to serve at wakes. I would never think about going to a funeral and getting alcohol; they just don't really go together! I think that the Irish traditions were the most interesting as well, but I would have to disagree with you on the fact that you think it's a bad thing to serve at a wake.
Although I do see your point, I think that sad things like this do make people want to drink alcohol, and I think that serving it during a wake can be a comforting thing. Just like at our wakes in the Southern United States, we eat the foods that comfort us, fried chicken and casseroles. For the Irish, alcohol can be so intertwined within their culture and even their church life. Growing up in a fairly strict Protestant family, my husband and I moved to St. Louis (a big Irish and Italian Catholic city). When my son attended a Catholic school, there was a beer garden at the school fair, and at the little league games the parents could order pitchers of beer. It would seem difficult for them in a time of sadness to abstain from alcohol when it is a big part of their culture. I found the Dutch's funeral cake, made from caraway and molasses and with the deceased person's initial on top, very interesting. I would like to know if it tastes good or not. It makes you wonder how they come up with these kinds of traditions. I thought this topic might be a little macabre, but then I realized that food is a comforting and familiar part of the funeral process. It still shocks me to realize how ingrained food is in every part of our culture. This was a very great and unique report. Thanks! I think that the African part of your report was interesting. We see funeral costs as a burden in the US, but it seems that in this culture they could almost work a lifetime just to pay for one's funeral! You know, it seems like the US has very fatty funeral foods compared to the other countries, especially in the South (fried chicken and mac & cheese). I suppose this is how we deal with death: eat some "comfort food" and it's all better. I find the Irish tradition interesting as well, but hey, what's wrong with this? I assume that it would generally be a moderate amount, and comforting to some mourners.
I don't think there's much difference between having such things as alcohol and stuffing yourself with as much fried chicken as possible for comfort. Personally I love funeral food traditions here in the "Bible Belt." When a member of my community passes away, the family doesn't have to worry about feeding their children or putting dinner on the table for at least a week. When a loved one passes on, you can bet that everyone you know is going to pitch in, and churches will organize meals to be brought to you. It has been done for me, and I have cooked meals for others. I love that the Dutch put the initials of the deceased on their bread. That's a wonderful way to honor that person, even with the food! I was also interested to learn that in East Africa, donations for the family are associated with the food. Though I understand donating money, that really does seem like they're paying to eat! I never really thought about people making certain foods for a funeral ceremony, so this paper was extra interesting. I've heard of people cooking for the family in mourning after the funeral and burial, but normally it's just pot luck or something someone picks up at the nearest KFC or K&W Cafeteria. People in my family usually make traditional foods like beans, mac n cheese, potatoes, chicken or roast and a pie and take it to the family. It was really cool learning that certain cultures have certain things they offer. I find it very interesting the different customs present in different countries for a funeral. It seems like each country, or region inside the country, has its own food dishes brought to funerals. I think to most it is a comfort food that they can enjoy in a time of mourning. I think many of us think more about different foods that people may share at their wedding, but we don't often think about foods for funerals.
Wow, you covered a lot of differing cultures; you did great. I really learned a great deal about what is expected and traditional in regards to funerals, and I am Irish; both my grandparents came over from Cork, and we have always had a "few" cocktails at wakes. Irish wakes are infamous, because where there are large families, drinking, and Irishmen... take a seat and enjoy the show. I can't speak for all Irish wakes, but in Boston they are interesting and eventful.
Fitch Ratings warns that Trump's proposed tax cuts alone would not generate enough economic growth to offset the loss in revenue.

Wage Gains Add to U.S. Labor Market Strength
Average hourly earnings increased at an annual pace of 2.8% in October — the fastest since the Great Recession officially ended in June 2009.

Food, Gas Keep Producer Prices Unchanged
While there may be some firming in core pipeline pressures, "the headwinds from energy and food prices remain in place," an economist says.

Producer Prices Drop for 1st Time Since March
The 0.4% decline suggests inflationary pressures remain muted, possibly dampening prospects for an interest rate hike next month.

Consumer Spending Keeps Up Solid Growth
With a 0.4% gain in June, personal consumption has shown strong increases for three straight months and again outpaced personal income growth.

U.S. Consumer Prices Increase 0.2% in June
It was the fourth straight monthly gain, but food prices dropped 0.3% and inflation remains low by historical standards.

Yellen Suggests No Rate Hike in ‘Coming Months’
The Fed chair says signs of weakening jobs growth are "concerning" but she is still optimistic about the labor market and inflation.

U.S. Producer Prices Edge Up 0.2% in April
The increase was less than economists expected but may be a sign that the deflationary effect of the oil price rout is easing.

U.S. Consumers Expect 2.6% Rise in Inflation
A New York Fed survey finds consumers' inflation expectations for the next three years are at their highest level in five months.

U.S. Consumers Expect 2.7% Rise in Inflation
The findings of a New York Fed survey reflect expectations of a sharp rebound in gasoline prices over the next year.

Consumer Spending Increases 0.5% in January
The gain in personal spending is the largest since May and, along with other recent data, suggests the U.S. economy is holding steady in early 2016.
https://www.cfo.com/tag/inflation/page/9/
Commander, Navy Warfare Development Command (COMNAVWARDEVCOM) is directed by Commander, U.S. Fleet Forces Command to conduct the Fleet 360 War Game series for fleet commanders and their MOC staff. The war game objectives, together with a game scenario informed by an actual operation plan selected by the fleet commander, result in a tailored war game design that provides the operational context necessary to address the issues of greatest importance to the fleet commander. The war game is scalable, allowing the fleet commander to tailor MOC participation to be anything from one operational planning team (OPT) within the future operations cross-functional team (CFT) up to the full MOC CFT organization. The incumbent will work as an integral part of the NWDC N2 Red Cell and will work closely with the NWDC Red Cell OPFOR Warfare Area Analysts in developing wargaming products for NWDC’s war games, exercises, and experiments, with a focus on the Fleet 360 war game and related Fleet 360 experiments / deep dives, using processed/fused Intelligence Community based documents, assessments, Intelligence briefings, operational reports and Intelligence databases. The incumbent will take a leading role in the development, coordination and portrayal of OPFOR cyber capabilities and tactics, techniques and procedures during Fleet 360 war games. This senior military analyst will leverage current and timely intelligence reporting from classified and unclassified sources to accurately represent adversary cyber capabilities / TTPs against US Navy operational forces. The incumbent will create scenario material (Intelligence reports, message traffic, assessments, war game handbooks, etc.) to be utilized in Operational Level of War (OLW) exercises and experiments. The incumbent will provide Operational Planning and Intelligence support to the Exercise Control Group/White Cell staff during Fleet 360/Strike Group 360 war games and related experiments.
- Bachelor’s degree in related fields plus 8 years of Cyber Warfare / Threat experience at NSA/USCYBERCOM, 10th Fleet or other service equivalent, and/or combatant command (e.g. EUCOM, CENTCOM). - Experience coordinating with other government and intelligence organizations or Maritime Intelligence Operations Centers/Joint Intelligence Operations Centers, especially in the EUCOM and/or CENTCOM AORs. - Retired O4/5 with experience in OLW command and staff operations. - Experience with wargaming and the Joint exercise planning process. - Candidate must possess an active Top Secret SCI Clearance. Travel is required to support exercise planning.
https://jobs.alionscience.com/job/military-analyst-sr-mv-28266/J3N1526GYFKGMLDT98N/
Poudre Heritage Alliance Awarded Nearly $125,000 in Grants

Poudre Heritage Alliance Awarded Grants for Strategic Interpretive Planning and Oral History Projects within the Cache la Poudre River National Heritage Area

The Poudre Heritage Alliance (PHA), the nonprofit managing entity of the Cache la Poudre River National Heritage Area (CALA), has received two large grants to support its programs that promote historical and cultural opportunities, engage people in the Poudre River corridor, and inspire learning, preservation, and stewardship. Through the Colorado the Beautiful Grant Program, administered by Colorado Parks and Wildlife and Great Outdoors Colorado (GOCO), PHA has been awarded $96,877 to complete a new Strategic Interpretive Plan for the Cache la Poudre River National Heritage Area. Grant funds will be used to:

1) Update the original Feasibility Study and Resource Inventory completed in 1990
2) Develop a Strategic Interpretive Plan to help CALA become more accessible to the public as a whole
3) Provide visitors with a seamlessly integrated experience
4) Build partner capacity

The PHA will collaborate with a variety of stakeholders throughout the strategic planning process, including partner sites, staff, user groups, neighbors, government agencies, towns and municipalities, community decision-makers, and local businesses. The PHA has also been awarded a $25,000 “Women in Parks Innovation and Impact” grant from the National Park Service (NPS) and the National Park Foundation (NPF). The goal of this grant “is to support projects and programs that help the NPS share a more comprehensive American narrative that includes the voices of women.” In particular, the initiative is meant to increase awareness about the 19th Amendment’s centennial and highlight stories of women who continue to shape the world.
Through their project, “Lifting Voices from the Shadows,” the PHA, Colorado State University’s Native American Cultural Center, the National Heritage Areas Program, and the Northern Arapaho tribe will work together to compile stories from Northern Arapaho women that run in parallel with, or counter to, the Suffragette movement and modern society. Grant support from the Women in Parks grant will enable PHA to 1) record women’s oral histories; 2) create educational and interpretive videos; and 3) share relevant content that aligns with 19th Amendment milestones. The Cache la Poudre River National Heritage Area is treasured by a community that values it for a variety of recreational activities and the tranquility of a natural corridor, while also depending on it as a water source for municipal, industrial and agricultural uses. A wide range of cultural perspectives flows from our rich Poudre River heritage. These grant funds will help the PHA present creative and balanced interpretation, representing the variety of cultures that make up our river corridor and helping citizens find a sense of place and continuity in a rapidly changing world.

ABOUT THE CACHE LA POUDRE RIVER NATIONAL HERITAGE AREA AND THE POUDRE HERITAGE ALLIANCE

The Cache la Poudre River National Heritage Area (CALA), a 45-mile stretch of the Lower Poudre River, tells the story of the river where Western Water Law took shape and how the river still informs the use of water throughout the arid West today. CALA’s 501(c)(3) nonprofit managing entity, the Poudre Heritage Alliance, PROMOTES a variety of historical and cultural opportunities; ENGAGES people in their river corridor; and INSPIRES learning, preservation, and stewardship. Find out more at: https://www.poudreheritage.org/
Understanding the impacts of past and contemporary climate change on biodiversity is critical for effective conservation. Amphibians have weak dispersal abilities, putting them at risk from habitat fragmentation and loss. Both climate change and anthropogenic disturbances exacerbate these risks, increasing the likelihood of additional amphibian extinctions in the near future. The giant spiny frog (Quasipaa spinosa), a species endemic to East Asia, has faced a dramatic population decline over the last few decades. Using the giant spiny frog as an indicator to explore how past and future climate changes affect landscape connectivity, we characterized the shifts in the suitable habitat and habitat connectivity of the frog.

Results

We found a clear northward shift and a reduction in the extent of suitable habitat during the Last Glacial Maximum for giant spiny frogs; since that time, there has been an expansion of the available habitat. Our modelling showed that “overwarm” climatic conditions would most likely cause a decrease in the available habitat and an increase in the magnitude of population fragmentation in the future. We found that the habitat connectivity of the studied frogs will decrease by 50–75% under future climate change. Our results strengthen the notion that the mountains in southern China and the Sino-Vietnamese transboundary regions can act as critical refugia and priority areas of conservation planning going forward.

Conclusions

Given that amphibians are highly sensitive to environmental changes, our findings highlight that the responses of habitat suitability and connectivity to climate change can be critical considerations in future conservation measures for species with weak dispersal abilities and should not be neglected, as they all too often are.
https://frontiersinzoology.biomedcentral.com/articles/10.1186/s12983-021-00398-w
When were you first introduced to tea, and what do you remember about the experience? My first introduction to “tea” was with my paternal grandmother. There were many Sundays spent with my grandparents for “dinner” when we were very young. The family would get together at the grandparents’ homes for afternoon dinner. These were formal occasions with everyone seated at the dining room table. Everyone was “dressed”, manners in full force, seated at the table, patiently listening to stories of the past, with a very full menu. After dinner, my grandmother would many times present her famous meringues, or petit fours, and bring out her silver tea service. I do not remember the tea itself, but the memory of beauty, comfort, and friendship that came with the special desserts, the china used for tea, and the silver tea service, all impressed me. My maternal grandfather also introduced me to something special. He was British – he came to this country from England at the age of 11. You would expect him to be the tea drinker. No, coffee was his beverage of choice, a pot always on the stove-top. When I was very young, there were some weekends spent with him and my grandmother. He would bring her breakfast in bed every morning. I would get to sit up in bed with her, she in her silk bed jacket and tea tray on her lap, and he would bring me a separate tray. On my tray he would take a saucer, place a piece of toast, pour a bit of coffee with lots of milk and sugar on it, and the toast would become toast à la café au lait. Though this was not tea, the wonderful feeling of having a hot beverage shared with another, the tradition of it, and the ceremony created, gave me the feeling of warmth and companionship that you find in sharing today’s Afternoon Teas. Why did you decide to start a tea business, and how has selling and blending tea shaped your life? Because tea became my passion. I became a tea drinker in college.
But it wasn’t until my daughter was in grammar school that I discovered my love of everything tea. She and I became friends with a mother and daughter who loved having afternoon tea. We visited tea rooms and tea shops, and had teas in our homes. I began baking scones and experimenting with teas. I gave Mothers’ Valentine’s Day Tea Parties, Birthday Tea Parties, and Afterschool Tea Parties, and with encouragement from friends began to offer catered Afternoon Teas to small businesses, libraries, etc. When my daughter went off to college, it was time for me to “step it up.” I wanted to open a Tea Room, but through research realized it was too much of a risk. So I thought outside of the box and created a Mobile Tea Truck, offering only loose leaf premium teas, tea sandwiches, and desserts. It was very exciting, and customers embraced the uniqueness of it. After several years of wonderful customers, fun experiences, and sore shoulders and knees, it was time to move on. From the Mobile Tea Truck blossomed the Mobile Tea Shoppe. I attended several of the World Tea Conferences and met other passionate tea entrepreneurs. I made connections with like-minded tea vendors using premium, organic, carefully sourced teas. I chose teas that I liked; knowing that my customers had enjoyed my offerings in the past, I was confident in my choices. But I was also aware of the wide variety of tastes that the public possesses, and decided to offer at least 30 different types of tea. My Mobile Tea Shoppe became a moveable tea shop, setting up my tent at craft shows, events, and farmers' markets. Speaking with customers about the teas, offering education on how to prepare the different teas, where they are sourced, etc., is a passion of mine. The connections I have made through tea have shaped my life in many ways. How do you create a blend: does it start with an idea, an aroma, a taste, or something completely different? It really starts with an idea or inspiration.
For example, I am a Downton Abbey fan. I was inspired by the time period. So I researched what tea they were drinking in England at that time and discovered that it was mostly Indian teas. So I created a Downton Abbey blend of a variety of Indian teas. Or the birth of the royal prince and princess inspired me to blend teas in their honor. Or a photograph – I found a beautiful picture of a Blue Jay sitting on top of a tea cup, and created a Blue Jay tea that consisted of black teas, dried blueberries, and lavender. A local tea room asked to have a signature tea blended for them, and I worked with their ideas based on a personal memory shared by the owners. It is very much like being an artist. Inspiration has come from events and location. The birth of Princess Charlotte inspired me to blend a tea in her honor. Location has also inspired me – attending events on Cape Cod inspired me to offer a cranberry blended tea. When someone visits your booth at a farmers' market or craft fair, what will they experience? The first experience can either be visual or scent. Many times someone will stop because of the visual appeal – they first see the curtains on the side of the tent with ribbons, and the distinctly tea room design. Or, they will stop because they can smell the teas. There is a round “sniffing” table at the entrance. Each tea is displayed with a sniff jar in front, so they can sniff the tea itself and see what the teas look like. This can give my customers an idea of what the teas will taste like. The sniff jars have been a big success – customers comment frequently on how much they appreciate this. What are some of your favorite food and tea pairings? That would be cheeses with fruits, and teas. I absolutely LOVE pears and apples with cheeses, and complementing teas. The unique thing with cheese and teas is that many times you want opposites. If a cheese is mild, like a buttery mild brie, then you may want a strong tea like a Keemun.
So much fun and so delicious pairing tea with cheese! I of course love both of these separately and paired. What is your ideal afternoon tea experience? Just spending time with a friend or my daughter, over a pot of tea, sharing a delicious tea that I may have discovered, with one of my home baked scones and clotted cream. Tea and friendship! Advice for anyone thinking about starting a tea business? The obvious jumping-off point is passion. You have to have this!! But the next part has nothing to do with passion. You really have to carefully investigate the business aspect. I was very lucky to have the support of a friend who had the practical business background who assisted me with the business plan, and weekly meetings with the Small Business Administration and banks to discuss business loans. An integral part of starting your business is working with the municipalities that you plan on doing business in. Every town is different, and navigating the process with the towns' management teams, the health departments, and even the building departments can really drive how, and if, you do your business. To be determined. I am beginning to feel the need to re-invent, but what that may be is a mystery at present. More and more tea vendors are popping up, which is wonderful for the business of tea in general. More and more people are drinking tea, wanting tea, and becoming knowledgeable about tea, which is also wonderful. But in order to compete you need to be unique and offer something that no one else does that makes you stand apart. The Mobile Tea Truck was a unique adventure; time for another. Thank you so much, Gay, for answering my questions! I look forward to running into you at a crafts fair or farmers' market soon!
http://www.tea-happiness.com/2015/08/interview-gay-hughes-of-gay-grace-teas.html
Paragliding appeared in Europe in the 1970s and spread to China ten years later. The sport requires no engine power. The pilot can take off with the paraglider from a mountain slope, climb to several thousand meters in the air, and fly a distance of several hundred kilometers. So far, the world's longest gliding time is over 14 hours, and the longest distance covered is 300 kilometers. The sport quickly became popular around the world because the paraglider is simple in structure and easy to operate; it is regarded as the easiest way to take to the air. Several thousand people practice paragliding regularly in China. The sport enables people to take delight in the wilderness and experience the excitement of flying like an eagle. At present, paragliders can be seen soaring high in the sky in all parts of the world.
https://www.lumichat.com/en/news/2017/1024/176147.html
While it is almost the end of summer, it’s never too late to start catching up on a good read. Most people tend to stick to the usual genres: romance, suspense, cops and robbers, or classical works. We’re going to take a look at some business-minded books, however, the kind of stuff that’s interesting to read and highly applicable to your work life. These books are aimed at identifying gaps in workflow management between teams as well as individuals, while also encouraging workers to identify their strengths and weaknesses to help improve their personal development and overall work environment. Learning your team’s dynamic is essential to building a cohesive and productive unit. However, it usually takes time to figure out what makes each person tick as well as what will motivate them to perform at their full potential. Written as a business fable, The Five Dysfunctions of a Team by Patrick Lencioni aims to address why teams become dysfunctional as well as how to acknowledge flaws and inadequacies in order to create a purposeful work environment. Lencioni explains that when teams strive to understand and accept their inadequacies, they can conquer any internal or external strife they may face. Figuring out where your talents lie can be a difficult task, to say the least. However, there are some great tools out there to help people and companies identify what their aptitudes are and how to best utilize them. Now, Discover Your Strengths is a Hall of Famer among books in this genre. The authors’ main goal is to help each person discover their strengths through an online or written test. In contrast to The Five Dysfunctions of a Team, Buckingham and Clifton encourage their readers to focus primarily on the strengths one has rather than struggle to conquer one’s own weaknesses. Buckingham and Clifton argue that when group members focus on their strengths, they can position themselves into roles and tasks they know they would be well-suited for.
In essence, the end result would be developing desired attributes in each member to improve overall team efficiency while also minimizing employee turnover. In The Leadership Challenge, authors James Kouzes and Barry Z. Posner aim to train leaders using a hybrid of the approaches in both The Five Dysfunctions of a Team and StrengthsFinder. To accomplish this, they encourage their readers to develop their “practices of leadership.” They advise leaders to lead by example, develop a core vision everyone can work toward, and inspire members to think creatively to solve problems. Part of this process includes a StrengthsFinder-style quiz called the Leadership Practices Inventory, which helps assess a leader’s strengths, weaknesses and efficiencies. Kouzes and Posner also urge leaders to build a system of confidence in their subordinates that lets them come into their own without micro-managing them. The leadership gurus believe that high-quality leaders must be taught to lead with humility and honesty, rather than rely solely on natural talents. In short, The Five Dysfunctions of a Team, StrengthsFinder and The Leadership Challenge all seek to answer the question of how to minimize dysfunction in the workplace and improve productivity and morale. At the same time, they probe what makes an efficient, respectable leader who is willing to think outside the box to better their employees and company without compromising their convictions. Personal development and work culture go hand in hand. As summer comes to an end, take stock of where you have been and where you need to go as a company. This introspection will help your company or workforce better manage the day-to-day stress as well as encourage all team members to strive to better themselves, no matter what weaknesses or talents they exhibit.
https://www.panelextenders.com/blog/summer-reading-list-three-must-read-business-books-summer/
Much outdoor cooking over charcoal is conventional grilling in which meat, such as ground beef patties, steaks, chicken parts, and pork chops, or fish are placed on a grill directly over hot, glowing charcoal. Satisfactory cooking of ground beef patties, relatively thin cuts of meat, and fish can be carried out in this manner, at relatively high temperatures, in a few minutes to about one-half an hour. Larger cuts of meat, such as beef briskets, pork shoulders, as well as whole chickens and the like, require much longer cooking times, sometimes up to twelve hours or more, depending on a number of factors, including the kind of meat being cooked, the size and weight of the portion being cooked, and its collagen content. Slow cooking of these meats breaks down collagen, making the meat tender, and easy to cut and chew. Because of the long cooking times, cooking must be carried out at relatively low temperatures in order to avoid charring and dehydration. Smokers are used for outdoor cooking of these larger cuts of meat. Most smokers fall into either of two categories. One popular type of smoker is typically in the form of a cylindrical or egg-shaped enclosure symmetrical about a vertical axis. In this type of smoker a charcoal or wood fire is directly underneath the meat rack, but vertically spaced from the meat rack by a large distance, usually with a drip pan interposed between the fire and the meat rack. Hot gas from the fire passes around the edge of the pan and into contact with the meat on the meat rack, and then out through a vent or chimney at the top. Another type of smoker has a cylindrical drum-shaped cooking chamber generally symmetrical about a horizontal axis, and a separate fire box attached to one end of the drum. Smoke from the firebox is directed into the cooking chamber, and, from the cooking chamber through a stack located near the opposite end of the cooking chamber.
In the operation of both types of smokers, the temperature within the cooking chamber is controlled by manual adjustment of air dampers. Indirect heating of larger cuts of meat can also be carried out in a conventional kettle grill, or a drum-type grill, by arranging the charcoal so that it is not directly underneath the meat, and adjusting the air dampers, both below the charcoal and above the meat, in such a way as to avoid excessive temperature. In smokers, and also in kettle grills, it is difficult to maintain a steady, moderate temperature. Depending on conditions, the temperature in the smoker or grill will gradually rise or fall. Controlling the temperature, therefore, requires frequent adjustment of the dampers. If the fire becomes too hot, the meat will be cooked too quickly on the outside and inadequately on the inside. Moreover, if the fire is excessively hot, it will burn too quickly, requiring frequent addition of fuel. Excessive temperature can be avoided by using only a small amount of fuel. However, when a smaller amount of fuel is used, more frequent addition of fuel is required. On the other hand, if the dampers are insufficiently open to maintain combustion, the fire will be extinguished, and must be reignited. In either case, whenever the smoker or grill is opened to add or reignite fuel, the atmosphere inside the cooking chamber cools, and the proper cooking temperature must be reestablished. Another problem encountered in conventional slow cooking is the excessive consumption of fuel. In order to establish a good charcoal fire, the usual practice is to ignite a quantity of charcoal, using lighter fluid, an electric heater, or a propane torch, or to place the charcoal temporarily in a removable chimney, and ignite it by burning paper. When these methods are used, the entire quantity of charcoal is ignited, and before cooking is begun, the charcoal is brought to a condition in which the coals are glowing, with little or no visible flame. 
A large amount of heat, and consequently a large amount of fuel, is wasted in the process of establishing a fire. The fact that the entire mass of charcoal is ignited initially, also means that it will be necessary to replenish the fuel supply from time to time, if cooking is to take place over a long interval. An object of this invention is to make it possible to cook slowly with a solid fuel fire over a long time interval, without the need for constant attention to the fire. Another object of the invention is to conserve solid fuel, and to minimize or avoid the need for replenishment of fuel in slow cooking.
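The temperature-control problem described above is essentially a manual feedback loop: open the dampers when the fire runs cool, close them when it runs hot. A minimal sketch of that rule, with hysteresis around a target temperature (all temperatures and step sizes here are illustrative assumptions, not values from the text):

```python
def adjust_damper(temp_f, damper, target_f=225.0, band_f=10.0, step=0.25):
    """One damper adjustment: hysteresis control around a target temperature.

    damper is the current opening, 0.0 (closed) to 1.0 (fully open).
    """
    if temp_f > target_f + band_f:
        return max(0.0, damper - step)   # choke airflow to cool the fire
    if temp_f < target_f - band_f:
        return min(1.0, damper + step)   # admit more air to feed the fire
    return damper                        # within the band: leave it alone
```

Inside the dead band nothing changes, which is exactly why the cook must keep checking: the fire drifts out of the band over time, and every correction risks overshooting.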
Jamie Hanson, PhD, is lead author of the study "A Family Focused Intervention Influences Hippocampal‐Prefrontal Connectivity Through Gains in Self‐Regulation," published on Oct. 8 in Child Development. Hanson, an assistant professor of psychology and research scientist in the Learning Research and Development Center, provides the following lay summary: "While the stressors associated with poverty can significantly impact mental health, family-centered prevention programs (and other strategies) can aid in achieving more positive outcomes over time. These programs build skills and competencies in youth and families by improving parental emotional support, fostering parent-child communication, helping youth to set goals for the future, etc. The current study explored whether participation in these types of programs can impact the brain. Specifically, we examined whether participation in a family-centered prevention program (the Strong African American Families program, SAAF) at age 11 was related to differences in the brain at age 25. "To address this question, we collected neuroimaging data from a sample of 93 African American young adults who have been participating in a longitudinal study since they were 11 years of age. Neuroimaging data were collected using resting state fMRI (where individuals are lying awake in the MRI scanner and not engaged in a specific task or activity). We focused on brain connectivity (or interactions) between the hippocampus and prefrontal cortex; these brain regions are involved with remembering information and making decisions. "We found three important things. First, we found that adult participants who completed the intervention (as youth) had stronger connections between the hippocampus and prefrontal cortex, compared to adults who did not complete the intervention.
"Second, we found that improvements in self-regulation connected to the intervention (measured right after the program, at age 11) were associated with the connections between the hippocampus and prefrontal cortex. "Third, we found that this brain connectivity was also related to disruptive behavioral problems that people reported as adults. Individuals with higher brain coupling had fewer problems with aggression and reported losing their tempers less. "These results suggest that participation in programs that enhance supportive parenting may be one cost-effective way of addressing social disparities and promoting the well-being of at-risk children."
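The resting-state connectivity measure described here is, at its core, a correlation between two regions' fMRI time series. A toy sketch with made-up signals (real analyses preprocess the BOLD data and often use partial correlations; the numbers below are purely illustrative):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two ROI time series of equal length."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy BOLD-like signals: the second roughly tracks the first,
# so these two "regions" show strong functional coupling.
hippocampus = [0.1, 0.4, 0.3, 0.8, 0.6, 0.9, 0.5, 0.7]
prefrontal  = [0.2, 0.5, 0.2, 0.7, 0.7, 1.0, 0.4, 0.6]
connectivity = pearson_r(hippocampus, prefrontal)
```

Higher values of this coefficient correspond to the "stronger connections" the summary describes; group analyses then compare these per-person values between intervention and control participants.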
http://www.braininstitute.pitt.edu/study-finds-parenting-programs-influence-brain-connectivity-risk-children
National greenhouse gas inventory data for the period 1990–2004 and status of reporting

I. Introduction

A. Mandate

1. The Conference of the Parties (COP), by its decisions 9/CP.2, 3/CP.5 and 18/CP.8, requested that Parties included in Annex I to the Convention (Annex I Parties) submit national inventory data on greenhouse gas (GHG) emissions from sources and removals by sinks by 15 April each year. Decision 19/CP.8 requested the secretariat to prepare annual reports on GHG inventory data submitted by Annex I Parties for consideration by the Subsidiary Body for Implementation (SBI) and the COP.(1) This document is a report on GHG inventory data submitted by Annex I Parties in 2006.

B. Scope of the note

2. This note presents the latest available data on GHG emissions and removals from Annex I Parties for the period 1990–2004, based on the national GHG inventories received by the secretariat by 9 October 2006. The document also shows the status of reporting of annual GHG emission inventories from Annex I Parties, highlighting the timeliness and completeness of reporting.

3. Data are provided for carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), and for hydrofluorocarbons (HFCs), perfluorocarbons (PFCs) and sulphur hexafluoride (SF6) taken together. In addition, total(2) aggregate(3) GHG emissions are presented, both including and excluding net GHG emissions/removals from land use, land-use change and forestry (LULUCF). Data on net emissions/removals from LULUCF (for CO2, CH4 and N2O) are also provided.

C. Possible action by the Conference of the Parties and the Subsidiary Body for Implementation

4. The COP and the SBI may wish to take note of the information contained in this document and provide further guidance to Parties and the secretariat.

II. Status of reporting

A.
Inventory submissions in 2006

5. The UNFCCC reporting guidelines on annual inventories require that Annex I Parties annually submit a national inventory report (NIR) and common reporting format (CRF) data tables covering data from the base year up to two years before the year of submission,(4) i.e., from 1990 up to 2004 in the 2006 submission. Table 1 summarizes the status of reporting for the 2006 submissions. It shows that all 41 Annex I Parties submitted their inventories in 2006 (25 of them by the due date of 15 April). In 2006, a GHG inventory of Turkey was received for the first time,(5) and a GHG inventory of the Russian Federation pursuant to decision 3/CP.5 was received(6) for the first time since 2000.

(1) UNFCCC Guidelines for the technical review of greenhouse gas inventories from Parties included in Annex I to the Convention (FCCC/CP/2002/8), paragraphs 42 and
(2) The term total implies that emissions from sectors of the common reporting format (CRF) are summed up; the inclusion of land use, land-use change and forestry (LULUCF) into the sum is indicated separately.
(3) The term aggregate implies that GHG emissions are calculated as a weighted sum of CO2, CH4, N2O, HFCs, PFCs and SF6; the sum is made using the global warming potentials agreed under the Convention (1 for CO2, 21 for CH4, 310 for N2O, and specific values for individual HFCs, PFCs and SF6).
(4) Guidelines for the preparation of national communications by Parties included in Annex I to the Convention, Part I: UNFCCC reporting guidelines on annual inventories (FCCC/SBSTA/2006/9).
(5) Turkey ratified the Convention on 24 February 2004. Decision 26/CP.7 invited Parties to recognize the special circumstances of Turkey, which place Turkey in a situation different from that of other Annex I Parties.
(6) The Russian Federation provided inventory data informally; the data are still subject to a formal approval procedure in the Russian Federation.

Table 1.
Greenhouse gas inventory submissions from Annex I Parties in 2006

Party | CRF submission date (a) | CRF submission format (b) | Years reported (c) | NIR submitted | LULUCF data reported
Australia | 24 May 2006 | CRF-R | 1990–2004 | yes | yes
Austria | 13 April 2006 | CRF-R | 1990–2004 | yes | yes
Belarus | 14 April 2006 | CRF-R | 1990–2004 | yes | yes
Belgium | 14 April 2006 | CRF-A | 1990–2004 | yes | yes
Bulgaria | 18 April 2006 | CRF-A/CRF-R (d) | 1988, 1990–2004 | yes | yes
Canada | 11 May 2006 | CRF-R | 1990–2004 | yes | yes
Croatia | 31 August 2006 | CRF-R | 1990–2004 | no | yes
Czech Republic | 13 April 2006 | CRF-R | 1990–2004 | yes | yes
Denmark | 12 April 2006 | CRF-R | 1990–2004 | yes | yes
Estonia | 12 April 2006 | CRF-A | 1990–2004 | yes | yes (e)
European Community | 15 April 2006 | CRF-R | 1990–2004 | yes | yes
Finland | 6 April 2006 | CRF-R | 1990–2004 | yes | yes
France | 15 February 2006 | CRF-A/CRF-R (d) | 1990–2004 | yes | yes
Germany | 3 March 2006 | CRF-A | 1990–2004 | yes | yes
Greece | 16 April 2006 | CRF-R | 1990–2004 | yes | yes
Hungary | 19 April 2006 | CRF-R | 1990–2004 | yes | yes
Iceland | 26 July 2006 | CRF-A | 1990–2004 | no | yes
Ireland | 13 April 2006 | CRF-R | 1990–2004 | yes | yes
Italy | 18 April 2006 | CRF-A | 1990–2004 | yes | yes
Japan | 25 May 2006 | CRF-R | 1990–2004 | yes | yes
Latvia | 13 April 2006 | CRF-R | 1990–2004 | yes | yes
Liechtenstein | 30 May 2006 | CRF-R | 1990, 2004 | yes | yes (e)
Lithuania | 15 April 2006 | CRF-R | 2004 | yes | yes (e)
Luxembourg | 6 February 2006 | CRF-A | 1990–2004 | yes | no
Monaco | 16 June 2006 | CRF-A | 1990–2004 | yes | yes
Netherlands | 14 April 2006 | CRF-R | 1990–2004 | yes | yes
New Zealand | 13 April 2006 | CRF-R | 1990–2004 | yes | yes
Norway | 27 May 2006 | CRF-R | 1990–2004 | yes | yes
Poland | 15 April 2006 | CRF-R | 2004 | yes | yes (e)
Portugal | 13 April 2006 | CRF-A | 1990–2004 | yes | yes
Romania | 5 May 2006 | CRF-R | 1990–2004 | yes | yes
Russian Federation | 9 October 2006 (f) | CRF-R | 1990–2004 | no | yes
Slovakia | 13 April 2006 | CRF-R | 1990, 2000–2004 | yes | yes
Slovenia | 26 April 2006 | CRF-R | 1986, 1990–2004 | no | yes (e)
Spain | 12 April 2006 | CRF-R | 1990–2004 | yes | yes
Sweden | 13 April 2006 | CRF-R | 1990–2004 | yes | yes
Switzerland | 13 April 2006 | CRF-A | 1990–2004 | yes | yes (e)
Turkey | 14 April 2006 | CRF-A | 1990–2004 | yes | no
Ukraine | 26 May 2006 | CRF-R | 1990–2004 | yes | yes
United Kingdom of Great Britain and Northern Ireland | 13 April 2006 | CRF-R | 1990–2004 | yes | yes
United States of America | 5 April 2006 | CRF-R | 1990–2004 | yes | yes

(a) Date of submission of common reporting format (CRF) data; the submission date for the national inventory report (NIR) may differ. The dates after 15 April 2006 are shown in italics; the dates after 27 May (six weeks after the submission deadline) are shown in bold.
(b) CRF-R indicates that the Party reported using the CRF Reporter software; CRF-A indicates that the Party reported using the CRF application.
(c) Indicates the years for which complete CRF tables were submitted in 2006; for some Parties, information on emissions was provided in the CRF trend tables, although complete CRF tables were not submitted in 2006 for some years.
(d) The initial submission was with the CRF application, but later a resubmission with the CRF Reporter was made.
(e) Not all years from 1990 to 2004 were covered in the submitted land use, land-use change and forestry (LULUCF) data.
(f) An informal provision of national inventory data.

B. Reporting issues

1. Completeness and timeliness of reporting

6. Figure 1 illustrates the number of inventory submissions since 1998. It shows that 2006 was the first year when all 41 Annex I Parties submitted their inventories. Twenty-five submissions were made by the due date of 15 April and 37 of the submissions included an NIR.

Figure 1. Greenhouse gas inventory submissions from Annex I Parties: CRF and NIR submissions from 1998 to 2006. [Chart: number of submissions per year, showing CRF submissions by 15 April, CRF submissions by 9 October, and NIR submissions by 9 October.] Note: CRF = common reporting format; NIR = national inventory report.

7. According to table 1, 37 Parties reported complete CRF tables for all years from 1990 to 2004, which means a further improvement in the completeness of reporting (in 2005, 29 Parties reported complete CRF tables for all years). Twenty Parties submitted a revised version of their inventory after making the initial submission in order to improve the GHG estimates reported.

8. Some Parties still face problems in reporting complete annual GHG inventories on time. Five Parties (Croatia, Iceland, Liechtenstein, Monaco and the Russian Federation) submitted their CRF tables more than six weeks late, and seven Parties (Italy, Liechtenstein, Lithuania, Monaco, Slovakia, Spain and Turkey) were late, also by more than six weeks, in submitting their NIR. Four Parties (Croatia, Iceland, Slovenia and the Russian Federation) had not submitted their NIRs by the time this document was prepared.
Three reporting Parties have not provided data for some years (Liechtenstein, Lithuania and Poland).(7)

2. Reporting of LULUCF data

9. The COP decided in 2003 (decision 13/CP.9) that Annex I Parties should use the Intergovernmental Panel on Climate Change (IPCC) Good Practice Guidance for Land Use, Land-Use Change and Forestry for preparing annual inventories under the Convention, due in 2005(8) and beyond.

10. The reporting of LULUCF data considerably improved in 2006. In 2005, only 20 Parties reported LULUCF data, whereas in 2006, 39 Parties (all reporting Parties except Luxembourg and Turkey) provided LULUCF data, although some Parties (Estonia, Liechtenstein, Lithuania, Poland, Slovenia and Switzerland) did not provide LULUCF data for some years (table 1).

3. Use of the CRF Reporter software

11. The COP, by its decision 18/CP.8, requested the secretariat to develop new software for reporting in the CRF in order to facilitate Parties' inventory submissions. The Subsidiary Body for Scientific and Technological Advice (SBSTA) invited Annex I Parties to use the new CRF software (CRF Reporter) to report the inventory submissions due in 2005. In 2005, the COP decided (decision 7/CP.11) that Annex I Parties should use the CRF Reporter for the submission of their annual GHG inventories due from April 2006.

12. The number of Annex I Parties using the CRF Reporter increased greatly in 2006, to 31 from only four in 2005.

(7) The Parties that are allowed to use a base year other than 1990 have also provided data for their respective base years as per COP decisions 9/CP.2 and 11/CP.4. These Parties and their base years are Bulgaria (1988), Hungary (average of ), Poland (1988), Romania (1989) and Slovenia (1986).
(8) The year 2005 was a trial period for reporting under decision 13/CP.9.
The ongoing work of the secretariat on the improvement of the CRF Reporter is expected to facilitate the further increase in its use, aiming to ensure that all inventory submissions are made with the CRF Reporter as required by decision 7/CP.11.

C. Recalculations

13. The 2006 submissions confirm that Parties continue to implement recalculations, when required, in order to improve the quality of emission estimates. In 2006, 34 Parties conducted recalculations reflecting changes in activity data, emission factors and the methodologies used (table 2).

14. Many Parties conducted recalculations for all GHGs and all sectors and, as a general rule, for all years in order to ensure the consistency of the time series. The impact of recalculations on GHG emissions varied widely, from very small to sizeable values. For example, for 21 Parties the change in total aggregate GHG emissions without LULUCF in the base year after recalculations was less than 2 per cent, but for 7 Parties the change was above 5 per cent (table 2).

Table 2. Inventory recalculations by Annex I Parties in 2006

Party | Recalculations conducted in 2006 | Impact on base year GHG emissions without LULUCF (%)
Australia | yes | 1.24
Austria | yes | 0.43
Belarus | yes | 2.55
Belgium | yes | 0.7
Bulgaria | yes | 4.38
Canada | yes | 0.51
Croatia | yes | 19.1
Czech Republic | yes | 2.48
Denmark | yes | 1.9
Estonia | – |
European Community | yes | 6.57
Finland | yes | 1.2
France | yes | 0.19
Germany | yes | 1.4 (a)
Greece | yes | 0.62
Hungary | yes | 0.75
Iceland | yes | 1.62
Ireland | yes | 3.38
Italy | yes | 1.65
Japan | yes |
Latvia | yes | 2.14
Liechtenstein | yes |
Lithuania | – |
Luxembourg | – |
Monaco | – |
Netherlands | yes | 0.46
New Zealand | yes | 0.6
Norway | yes | 1.45
Poland | – |
Portugal | yes | 0.98
Romania | yes | 1.7
Russian Federation | – |
Slovakia | yes |
Slovenia | yes | 0.17
Spain | yes | 1.13
Sweden | yes | 0.21
Switzerland | yes | 0.72
Turkey | – |
Ukraine | yes | 6.75
United Kingdom of Great Britain and Northern Ireland | yes | 1.4 (a)
United States of America | yes |

Note 1: The information in this table is based on the latest available inventory submissions.
Note 2: The recalculations for land use, land-use change and forestry (LULUCF) and the impact of recalculations on GHG emissions with LULUCF are not covered in this table because many Parties switched in 2006 from reporting with the LULUCF Excel tables to reporting with the LULUCF tables in the CRF Reporter, and therefore the recalculations were not reflected fully in the corresponding reporting tables.
(a) The Party has not estimated the impact of recalculations on base year emissions, but the recalculated data were provided.

III. Overview of emission trends and sources in Parties included in Annex I to the Convention

A. Total aggregate greenhouse gas emissions

15. From 1990 to 2004, total aggregate GHG emissions without emissions/removals from LULUCF from Annex I Parties taken together decreased by 3.3 per cent, from 18.6 thousand to 17.9 thousand Tg(10) CO2 equivalent (figures 2 and 3).(11) Total aggregate GHG emissions with LULUCF decreased by 4.9 per cent, from 16.5 thousand to 15.7 thousand Tg CO2 equivalent. Since 2000, the emissions without LULUCF have increased somewhat, and the emissions with LULUCF have decreased slightly.

Figure 2. GHG emissions from Annex I Parties, 1990, 2000 and 2004. [Charts: GHG emissions without LULUCF and with LULUCF, in 1,000 Tg CO2 equivalent, for Annex I EIT Parties, Annex I non-EIT Parties and all Annex I Parties.] Note: For greenhouse gas (GHG) emissions with land use, land-use change and forestry (LULUCF), data for Estonia, Lithuania, Luxembourg, Poland, Slovenia, Switzerland and Turkey are not included because of the unavailability or incompleteness of some LULUCF data in the period 1990–2004.

Figure 3.
Changes in GHG emissions from Annex I Parties, 1990–2004. [Charts: change compared with the 1990 level (%), for GHG emissions without LULUCF and with LULUCF, for Annex I EIT Parties, Annex I non-EIT Parties and all Annex I Parties.] Note: For greenhouse gas (GHG) emissions with land use, land-use change and forestry (LULUCF), data for Estonia, Lithuania, Luxembourg, Poland, Slovenia, Switzerland and Turkey are not included because of the unavailability or incompleteness of some LULUCF data in the period 1990–2004.

(9) Unless specified otherwise, here and elsewhere in this document base year data are used in sums and totals instead of 1990 data (as per COP decisions 9/CP.2 and 11/CP.4) for Bulgaria (1988), Hungary (average of ), Poland (1988), Romania (1989) and Slovenia (1986).
(10) One teragram (Tg) equals one million tonnes; one thousand Tg equals one billion tonnes.
(11) In these and other figures, interpolation was used for some Parties to fill in the missing data for some years; this did not have a meaningful impact on the totals and trends.

B. Greenhouse gas emissions by gas

18. Figure 5 shows changes in total emissions (without LULUCF) of individual GHGs from Annex I Parties over the period 1990–2004. CO2 emissions decreased by 0.1 per cent over this period, whereas the emissions of CH4 and N2O decreased by 18.0 and 19.7 per cent, respectively. The emissions of HFCs, PFCs and SF6 taken together increased by 7.9 per cent (mostly because of increases in HFCs).

Figure 5. Annex I Party greenhouse gas emissions by gas, 1990 and 2004. [Charts: GHG emissions in 1,000 Tg CO2 equivalent and change (%) for CO2, CH4, N2O and HFCs+PFCs+SF6.] Note: HFCs = hydrofluorocarbons; PFCs = perfluorocarbons; SF6 = sulphur hexafluoride.

C. Greenhouse gas emissions by sector

19. Figure 6 illustrates trends in aggregate GHG emissions from Annex I Parties by sector.
For all Annex I Parties taken together, sectoral emissions decreased from 1990 to 2004, with the greatest decreases in agriculture (20.0 per cent) and industrial processes (13.1 per cent). The decrease in energy was the smallest (0.4 per cent). Net GHG removals by LULUCF increased by 24.8 per cent.

Figure 6. Annex I Party greenhouse gas emissions/removals by sector, 1990 and 2004. [Charts: GHG emissions/removals in 1,000 Tg CO2 equivalent and change (%) for energy, industrial processes, agriculture, waste and LULUCF.] Note: LULUCF = land use, land-use change and forestry.

20. Within the energy sector (figure 7), an increase in emissions occurred for energy industries and transport, whereas for manufacturing industries and construction, as well as for other sectors and fugitive emissions, the emissions decreased. The greatest increase occurred for transport, 23.9 per cent from 1990 to 2004; the greatest decline occurred for fugitive emissions, 16.9 per cent.

Figure 7. Annex I Party greenhouse gas emissions in the energy sector, 1990 and 2004. [Charts: GHG emissions in 1,000 Tg CO2 equivalent and change (%) for energy industries, manufacturing industries and construction, transport, other sectors and fugitive emissions.] Note: Except for fugitive emissions, data for the Russian Federation are not included here because the emissions from subsectors in the energy sector were reported with notation keys.

21. GHG emissions from fuels sold for use in international aviation increased by 52.0 per cent from 1990 to 2004 (figure 8). The emissions relating to fuels sold for use in international marine transportation increased by 3.4 per cent between 1990 and 2004.

Figure 8.
Annex I Party greenhouse gas emissions from bunker fuels, 1990 and 2004. [Charts: GHG emissions in 1,000 Tg CO2 equivalent and change (%) for aviation bunkers and marine bunkers.]
Note 1: For aviation bunker fuels, data for Estonia, Liechtenstein, Lithuania, Monaco, Poland, Romania, the Russian Federation, Slovakia, Turkey and Ukraine are not included because of their unavailability or incompleteness, or because the emissions were reported with notation keys for some years in the period 1990–2004.
Note 2: For marine bunker fuels, data for Estonia, Lithuania, Luxembourg, Poland, Romania, the Russian Federation and Turkey are not included because of their unavailability or incompleteness, or because the emissions were reported with notation keys for some years in the period 1990–2004.

D. Comparison of emission estimates in the 2005 and 2006 reports

22. In 2005, the UNFCCC secretariat published a similar GHG data report based on the submissions of GHG inventories in 2005. For transparency, table 3 compares the estimates for total aggregate GHG emissions without LULUCF in 1990 provided in that report (FCCC/SBI/2005/17) with the 1990 estimates provided in this report. This comparison shows that although the estimates have changed, there are substantive reasons for these changes.

Table 5.
Total aggregate anthropogenic emissions of CO2, CH4, N2O, HFCs, PFCs and SF6, including emissions/removals from land use, land-use change and forestry, 1990, 1995 and 2000–2004 (Gg CO2 equivalent). [Table listing, for each Annex I Party from Australia to the United States of America (EIT Parties marked *; Turkey marked **), emission totals for 1990, 1995 and 2000–2004; the numerical values are not reproduced here.] Summary: emissions decreased by more than 1 per cent for 18 Parties and increased by more than 1 per cent for 16 Parties. Note: Negative values in Gg mean removals; positive values in Gg mean emissions.

Table 7. Total anthropogenic CO2 emissions including emissions/removals from land use, land-use change and forestry, 1990, 1995 and 2000–2004 (Gg CO2). [Table listing the same Parties; the numerical values are not reproduced here.] Summary: emissions decreased by more than 1 per cent for 16 Parties and increased by more than 1 per cent for 18 Parties. Note: Negative values in Gg mean removals; positive values in Gg mean emissions.
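The arithmetic behind these tables is simple to sketch: per-gas emissions are combined into CO2 equivalent using the global warming potentials agreed under the Convention (1 for CO2, 21 for CH4, 310 for N2O, as the notes to the introduction state), and trends are percentage changes against the base-year value. The inventory numbers below are hypothetical, not values from the report:

```python
# GWPs agreed under the Convention, as cited in the report's notes.
GWP = {"CO2": 1, "CH4": 21, "N2O": 310}

def co2_equivalent(emissions_gg):
    """Weighted sum of per-gas emissions (Gg of each gas) into Gg CO2 equivalent."""
    return sum(GWP[gas] * amount for gas, amount in emissions_gg.items())

def pct_change(base, current):
    """Trend expressed as a percentage change from the base-year value."""
    return (current - base) / base * 100.0

# Hypothetical Party inventory (Gg of each gas), base year vs. latest year:
base = co2_equivalent({"CO2": 50_000, "CH4": 1_000, "N2O": 30})    # 80,300 Gg
latest = co2_equivalent({"CO2": 49_000, "CH4": 900, "N2O": 28})
trend = pct_change(base, latest)   # negative: this Party's emissions declined
```

The same two operations, applied Party by Party and year by year, yield the totals, "with/without LULUCF" variants (by including or excluding the LULUCF net flux in the sum), and percentage-change figures quoted throughout the report.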
http://autodocbox.com/Performance_Vehicles/66390892-National-greenhouse-gas-inventory-data-for-the-period-and-status-of-reporting.html
In 2007, the Canadian Federation of Humane Societies celebrated 50 years of speaking for those who cannot speak for themselves. Our quest to become the national voice of humane societies and SPCAs has not been an easy one. We have, however, overcome many obstacles and worked diligently over the years to earn our solid reputation as a key player in animal welfare issues. Join us as we take a look back at the Federation's beginnings, the pioneers who helped the CFHS become what it is today and some of our achievements. When the Canadian Federation of Humane Societies was established in 1957, it faced many organizational challenges. Societies and individuals from across Canada were coming together for the first time to collectively improve the welfare of animals in our country, and it was no easy task. There were regional biases, with different societies each insisting that their issues were more urgent than the next organization's. There were also great debates over the structure of the Federation. The CFHS faced many difficulties along the way, but it was able to overcome these challenges to become the national voice for animal welfare in Canada. As we celebrate our 50th anniversary, the CFHS still has a variety of members that it must serve, but the diversity of our membership is our strength. One of our challenges as we move forward will be to continue to meet our member societies' needs and to support them in fulfilling their mandates to help animals across Canada. Celebrate our 50th anniversary with us! Click on the pages below to read more about our history, and keep checking in as we continue our crucial work to help Canadians help animals in the future!
http://cfhs.ca/info/50_years_of_animal_welfare
There are over 600 muscles in the human body. Learning the muscular system often involves memorizing details about each muscle, such as where a muscle attaches to bones and how a muscle helps move a joint. In textbooks and lectures these details about muscles are described using specialized vocabulary that is hard to understand. Below is an example: The triceps brachii has three bellies with varying origins (scapula and humerus) and one insertion (ulna). It is a prime mover of elbow extension. The anconeus acts as a synergist in elbow extension. What does all the textbook jargon mean? The triceps brachii has four places where it attaches to the scapula, humerus, and ulna. This muscle plays a big role (that's what prime mover means) in extending the elbow joint from a bent to a straight position. Keep reading to learn what all the other muscle jargon means! 1. Muscles Attach to Bones at Locations Called Origins and Insertions A skeletal muscle attaches to bone (or sometimes other muscles or tissues) at two or more places. If the attachment is on a bone that remains immobile during an action, the attachment is called an origin. If the attachment is on the bone that moves during the action, the attachment is called an insertion. The triceps brachii happens to have four points of attachment: one insertion on the ulna and three origins (two on the humerus and one on the scapula). 2. Muscles Act on Synovial Joints to Move the Body The muscles surrounding synovial joints are responsible for moving the body in space. These muscle actions are often paired, like flexion and extension or abduction and adduction. Below, the common terms are listed and defined, with animations to help you picture the muscles and joints in motion.
Flexion and extension are usually movements forward and backward from the body, such as nodding the head. Flexion: decreasing the angle between two bones (bending). Extension: increasing the angle between two bones (straightening a bend). The triceps brachii and anconeus are muscles that extend the elbow. The biceps brachii, brachialis, and brachioradialis flex the elbow. Abduction and adduction are usually side-to-side movements, such as moving the arm laterally when doing jumping jacks. Abduction: moving away from the body's midline. Adduction: moving toward the body's midline. The gluteus medius, gluteus minimus, tensor fasciae latae, and sartorius are muscles that abduct the hip. The pectineus, adductor longus, adductor brevis, adductor magnus, and gracilis adduct the hip. Pronation and supination: Describing the rotation of the forearm back and forth calls for special terms. Spread your fingers out and look at the palms of your hands, then rotate your palms to look at your nails. Now look at your palms again. That's forearm supination and pronation. Pronation: rotating the forearm so the palm is facing backward or down. Supination: rotating the forearm so the palm is facing forward or up. Elevation and depression are up-and-down movements, such as chewing or shrugging your shoulders. When you move the mandible down to open the mouth, that's mandible depression. Move the mandible back up; that's mandible elevation. Elevation: moving a body part up. Depression: moving a body part down. Protraction and retraction: By moving your jaw back and forth in a jutting motion, you are protracting and retracting your mandible. Protraction: moving a bone forward without changing the angle. Retraction: moving a bone backward without changing the angle.
Inversion and eversion: You invert your foot when you turn it inward to see what is stuck under your shoe. You evert your foot to put the sole of your shoe back on the floor. Inversion: turning the sole of the foot inward. Eversion: turning the sole of the foot outward. Dorsiflexion and plantar flexion: You dorsiflex your feet to walk on your heels, and plantar flex them to tiptoe. Dorsiflexion: bringing your foot upward toward your shin. Plantar flexion: depressing your foot. 3. Muscle Actions Have Prime Movers, Synergists, Stabilizers, and Antagonists While many muscles may be involved in any given action, muscle function terminology allows you to quickly understand the different roles various muscles play in each movement. Prime movers and antagonists: The prime mover, sometimes called the agonist, is the muscle that provides the primary force driving the action. An antagonist muscle is in opposition to a prime mover in that it provides some resistance and/or reverses a given movement. Prime movers and antagonists are often paired up on opposite sides of a joint, with their prime mover/antagonist roles reversing as the movement changes direction. Synergists: One or more synergists are often involved in an action. Synergists are muscles that assist the prime mover in its role. Stabilizers: Stabilizers act to keep bones immobile when needed. Your back muscles, for example, are stabilizers when they are keeping your posture sturdy. External Sources: Muscle Premium by Visible Body offers a comprehensive reference of musculoskeletal structures and function, plus common injuries and conditions.
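The terminology above maps naturally onto a small lookup structure. A toy sketch using the elbow-extension example from the text (the muscle lists are simplified from the article; this is a study aid, not anatomy software):

```python
# Attachments: origins stay fixed during the action, the insertion moves.
triceps_brachii = {
    "origins": ["scapula", "humerus", "humerus"],  # three bellies
    "insertion": "ulna",
}

# Roles in one action: prime mover drives it, synergists assist,
# antagonists (the elbow flexors) resist or reverse it.
elbow_extension = {
    "prime_mover": "triceps brachii",
    "synergists": ["anconeus"],
    "antagonists": ["biceps brachii", "brachialis", "brachioradialis"],
}

def roles(action, muscle):
    """Return the role(s) a muscle plays in a given action."""
    found = [role for role, m in action.items()
             if muscle == m or (isinstance(m, list) and muscle in m)]
    return found or ["not involved"]
```

Querying `roles(elbow_extension, "anconeus")` picks out the synergist role, and flexion would be modeled as a second action dictionary with the prime-mover/antagonist roles swapped, mirroring how the roles reverse when the movement changes direction.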
CROSS REFERENCE TO RELATED APPLICATION

This application is based upon and claims the benefit of Japanese Patent Application No. 2004-078048 filed on Mar. 18, 2004 and No. 2004-298760 filed on Oct. 13, 2004, the contents of which are incorporated herein by reference.

FIELD OF THE INVENTION

The present invention relates to a vehicular brake control apparatus and a vehicular brake control method for causing a brake mechanism that generates braking force on a vehicle by pressing a friction-applying member, such as a brake pad or the like, against a friction-receiving member and thereby generating friction force to promptly generate braking force by applying pressurizing force beforehand to the friction-applying member.

BACKGROUND OF THE INVENTION

In a conventional brake control apparatus, an ineffective stroke in a brake caliper, for example, a gap between a brake pad and a disc rotor, can be eliminated beforehand by applying pressurizing force to the brake pad (e.g., Japanese Patent Application Laid-Open No. HEI 10-157585). In this brake control apparatus, a wheel cylinder (hereinafter referred to as "W/C") provided separately for each tire wheel is pre-charged with a pressure (W/C pressure) in accordance with the releasing speed of the accelerator pedal so as to substantially eliminate the ineffective stroke in the brake caliper prior to engagement of the brake. Therefore, when a brake engaging mode is entered, braking force is promptly generated. However, since the pre-charge of W/C pressure is performed in accordance with the releasing speed of the accelerator pedal, the aforementioned conventional brake control apparatus has the following drawbacks.
That is, if a need for brake engagement suddenly arises, for example, in a case where the accelerator pedal is slowly being released or a case where the accelerator pedal has been in an undepressed state, the ineffective stroke in the brake caliper cannot be eliminated beforehand, and therefore the prompt generation of braking force cannot be achieved. Furthermore, since the conventional brake control apparatus performs the pre-charge of W/C pressure on the basis of the releasing speed of the accelerator pedal alone, the pre-charge is executed irrespective of the surrounding environment; that is, it may be executed even when the pre-charge is not needed. Therefore, an uncomfortable brake feeling may be caused to the driver. Accordingly, an object of the present invention is to provide a vehicular brake control apparatus and a vehicular brake control method capable of precisely performing the pre-charge under a circumstance where the prompt generation of braking force is needed. It is another object of the present invention to provide a vehicular brake control apparatus and a vehicular brake control method capable of preventing or curbing the uncomfortable brake feeling caused to a driver by unnecessary performance of the pre-charge. According to a first aspect of the present invention, risky occasions and locations that may be risky are detected by determining whether the ambient environment detected by a surrounding environment detector device meets a predetermined criterion, and the pre-charge is performed for such locations and the like. Therefore, the pre-charge can be precisely performed under necessary circumstances irrespective of the driver's accelerator operation, so that braking force is promptly generated when the driver depresses the brake pedal at such a location or the like. This makes it possible to prevent accidents or the like.
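The first aspect described above can be sketched as a simple permission check. This is an illustrative sketch only: the function name, the normalized risk value, and the numeric criterion are assumptions, not taken from the patent.

```python
# Hypothetical sketch of the first-aspect pre-charge permission logic:
# pre-charge is permitted when the detected ambient environment meets a
# predetermined criterion, irrespective of the accelerator operation.

RISK_CRITERION = 0.7  # assumed predetermined criterion (normalized risk level)

def pre_charge_permitted(ambient_risk_level: float) -> bool:
    """Return True when the surrounding-environment reading indicates a
    risky occasion or location, i.e. meets the predetermined criterion."""
    return ambient_risk_level >= RISK_CRITERION
```

In the patent's terms, the input would come from the surrounding environment detector device and the result would drive the braking force control device.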
Furthermore, since the vehicular brake control apparatus performs the pre-charge in accordance with the ambient environment, unnecessarily frequent performance of the pre-charge is avoided; that is, the pre-charge is not performed when it is not needed, but is performed only when it is truly needed, for example, on emergency occasions and the like. Therefore, it becomes possible to avoid giving the driver an uncomfortable brake feeling. For example, the surrounding environment detector device may include an infrastructure information input device capable of acquiring infrastructure information as an ambient environment. In this construction, a pre-charge permission determination portion determines whether the infrastructure information detected by the infrastructure information input device meets a predetermined criterion. In accordance with a result of the determination, the pre-charge permission determination portion causes a braking force control device to execute the pre-charge control. Therefore, it becomes possible to perform the pre-charge control based on the infrastructure information, for example, information that cannot be obtained only via the sensors provided in the vehicle. The surrounding environment detector device may include a navigation device that stores a road map and road information regarding roads contained in the road map. In this construction, using the navigation device, information regarding a road that the vehicle follows is detected as the ambient environment. On the basis of the road information and the road map stored in the navigation device, it is determined by the pre-charge permission determination portion whether the present road requires that the driver's attention be called. If it is determined that the road requires that the driver's attention be called, the braking force control device is caused to execute the pre-charge control.
Therefore, if the present road is a road requiring that the driver's attention be called, for example, a road requiring a stop for safety, a road that may possibly be busy with pedestrians and the like, a road used for school commutation, a road that has blind corners and the like so that drivers cannot easily grasp conditions ahead, etc., it is possible to perform the pre-charge control suitable to such a road. In the foregoing construction, the surrounding environment detector device may include a vehicle speed detector device that produces an output corresponding to a vehicle speed of the vehicle. In this construction, the pre-charge permission determination portion may determine whether there is possibility of the vehicle overrunning a stop-requiring position or a vicinity of the stop-requiring position, or whether there is possibility of the vehicle overrunning an intersection that is not equipped with a traffic signal or a vicinity of the intersection, from the present vehicle speed on the basis of the output of the vehicle speed detector device and the road information and the road map stored in the navigation device. If it is determined that there is such possibility, the braking force control device is caused to execute the pre-charge control. Therefore, if it is determined from the present vehicle speed that there is possibility of the vehicle overrunning a stop-requiring position or a vicinity of the stop-requiring position, or that there is possibility of the vehicle overrunning an intersection that is not equipped with a traffic signal or a vicinity of the intersection, it is possible to perform the pre-charge control suitable to such a situation. Furthermore, the surrounding environment detector device may include a right-and-left turn detector device that detects whether the vehicle is about to turn right or left.
In this construction, it is determined by the pre-charge permission determination portion whether there is possibility of the vehicle turning right or left on the basis of an output of the right-and-left turn detector device and the road information and the road map stored in the navigation device. If it is determined that there is possibility of the vehicle turning right or left, the braking force control device is caused to execute the pre-charge control. Therefore, if the vehicle is about to turn right or left, the pre-charge control can be performed in advance. In the foregoing construction, the surrounding environment detector device may include a vehicle speed detector device that produces an output corresponding to a vehicle speed of the vehicle. In this construction, it is determined by the pre-charge permission determination portion whether the vehicle is about to turn right or left in a situation where the vehicle starts to run again after verification of a stop of the vehicle at an intersection on the basis of the output of the right-and-left turn detector device, the output of the vehicle speed detector device, and the road information and the road map stored in the navigation device. If the determination is affirmative, the braking force control device is caused to execute the pre-charge control. Still further, the surrounding environment detector device may include a behavior detector device that produces an output corresponding to a driver's behavior. In this construction, it is determined by the pre-charge permission determination portion whether the driver's behavior corresponds to a road route that the vehicle is to follow on the basis of the output of the behavior detector device and the road information and the road map stored in the navigation device. If the driver's behavior does not correspond to the road route, the braking force control device is caused to execute the pre-charge control. 
Therefore, if the driver's behavior does not correspond to a road route that the vehicle is to follow, the pre-charge control can be performed suitably to such a situation. According to still another form of the present invention, the pre-charge may be ended only when the amount of pressurization generated by operation of a brake operating member becomes greater than the amount of pressurization generated by the pre-charge. Therefore, if the driver operates the brake operating member only slightly during the pre-charge, the pre-charge is not ended. In yet another form of the present invention, the amount of brake fluid used for the pre-charge control may be set at different values for individual brake calipers provided for the wheels, in accordance with the specifications of the brake calipers. By setting the amount of brake fluid needed for the pre-charge at amounts appropriate to the individual brake calipers in the foregoing manner, it becomes possible to execute more suitable pre-charge. In a further form of the present invention, the vehicle speed may be detected by the surrounding environment detector device, and the amount of brake fluid used for executing the pre-charge is controlled to an amount corresponding to the vehicle speed by the pre-charge permission determination portion. If the amount of brake fluid for executing the pre-charge is set in accordance with the vehicle speed, it becomes possible to execute the pre-charge suitably in accordance with the vehicle speed. 
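The pre-charge termination rule and the per-caliper fluid amounts described above can be sketched as follows. The function name, pressure units, wheel labels, and milliliter values are invented placeholders for illustration, not values from the patent.

```python
# Sketch: the pre-charge is ended only when the pressurization generated by
# the driver's operation of the brake operating member exceeds the amount
# generated by the pre-charge; a slight brake operation during the
# pre-charge therefore does not end it.

def pre_charge_should_end(driver_pressure_kpa: float,
                          pre_charge_pressure_kpa: float) -> bool:
    return driver_pressure_kpa > pre_charge_pressure_kpa

# Per-caliper pre-charge fluid amounts, set in accordance with the
# specifications of the individual brake calipers (invented placeholders).
PRE_CHARGE_FLUID_ML = {"FL": 2.5, "FR": 2.5, "RL": 1.8, "RR": 1.8}
```

Keeping the amounts in a per-wheel table mirrors the idea of setting different values for individual brake calipers.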
For example, if the pre-charge permission determination portion determines from the output of the vehicle speed detector device that the vehicle speed is lower than a predetermined vehicle speed, the braking force control device may cause the amount of brake fluid used for executing the pre-charge control to become less than the amount of brake fluid that is used for the pre-charge control when the vehicle speed is higher than or equal to the predetermined vehicle speed, or may omit execution of the pre-charge control. Thus, if the effect of the pre-charge becomes low, as in the case where the vehicle speed is lower than a predetermined vehicle speed, the amount of brake fluid used for the pre-charge may be set at a reduced amount, or performance of the pre-charge may be omitted. Furthermore, the braking force control device may increase the amount of brake fluid used for executing the pre-charge in accordance with increase in the vehicle speed on the basis of the output of the vehicle speed detector device. By increasing the amount of brake fluid used for the pre-charge with increase in the vehicle speed in the aforementioned manner, it becomes possible to execute the pre-charge corresponding to the velocity dependency of the brake pads. In a further form of the present invention, if a braking operation of a preceding vehicle occurs, the pre-charge may be performed, as a risk is assumed in such a situation. Therefore, even if the preceding vehicle decelerates and rapidly approaches, it is possible to correspondingly generate braking force.
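The speed-dependent fluid amount just described can be sketched as a piecewise function: omitted (or reduced) below the predetermined speed, and increasing with speed above it. The threshold and the calibration constants below are invented for illustration only.

```python
# Sketch of the speed-dependent pre-charge amount: below a predetermined
# vehicle speed the pre-charge is omitted; above it, the amount of brake
# fluid grows with vehicle speed, reflecting the velocity dependency of
# the brake pads. All numeric values are assumptions.

LOW_SPEED_THRESHOLD_MPS = 5.0

def pre_charge_fluid_ml(speed_mps: float) -> float:
    if speed_mps < LOW_SPEED_THRESHOLD_MPS:
        return 0.0  # pre-charge omitted at low speed (reduced effect)
    base_ml, gain_ml_per_mps = 2.0, 0.1  # assumed calibration values
    return base_ml + gain_ml_per_mps * speed_mps
```

A real implementation could equally return a reduced, nonzero amount below the threshold; the patent allows either behavior.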
In a preferable form, for example, the pre-charge permission determination portion may determine whether a condition that an inter-vehicle distance to the preceding vehicle detected from an output of the distance detector device is less than a first predetermined value and that a relative speed with respect to the preceding vehicle determined from a rate of change of the inter-vehicle distance is greater than a second predetermined value is met. If this condition is met, the braking force control device executes the pre-charge control. This arrangement avoids the pre-charge in the cases where the inter-vehicle distance to a preceding vehicle is so great that there is substantially no degree of risk, and in the cases where the relative speed to a preceding vehicle is substantially zero or negative and therefore there is substantially no possibility of the host vehicle catching up with the preceding vehicle, since there is no need for the pre-charge in these cases. By avoiding unnecessary performance of the pre-charge in this manner, the driver's brake feeling can be improved. In a further form of the present invention, if a laterally adjacent vehicle is about to cut in front of the host vehicle, the pre-charge may be performed, as a risk is assumed in such a situation. This arrangement allows braking force to be generated in quick response to a laterally adjacent vehicle cutting in front of the host vehicle. In a preferable form, for example, the pre-charge permission determination portion may determine whether a condition that an inter-vehicle distance to the laterally adjacent vehicle detected from an output of the distance detector device is less than a third predetermined value and that a relative speed with respect to the laterally adjacent vehicle is greater than a fourth predetermined value is met. If this condition is met, the braking force control device executes the pre-charge control.
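The preceding-vehicle condition above (distance below a first predetermined value AND closing speed above a second predetermined value) can be written directly as a boolean check. The threshold values are assumptions; the patent leaves them unspecified.

```python
# Sketch of the preceding-vehicle pre-charge condition. A positive closing
# speed means the host vehicle is approaching the preceding vehicle; a zero
# or negative closing speed never triggers, matching the text above.

FIRST_PREDETERMINED_VALUE_M = 30.0    # assumed inter-vehicle distance threshold
SECOND_PREDETERMINED_VALUE_MPS = 2.0  # assumed relative (closing) speed threshold

def preceding_vehicle_pre_charge(distance_m: float,
                                 closing_speed_mps: float) -> bool:
    return (distance_m < FIRST_PREDETERMINED_VALUE_M
            and closing_speed_mps > SECOND_PREDETERMINED_VALUE_MPS)
```

The cut-in case described above has the same shape, with the third and fourth predetermined values substituted for the first and second.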
This arrangement also avoids unnecessary performance of the pre-charge, and therefore can improve the driver's brake feeling. It should be apparent that the techniques of executing the pre-charge in accordance with a predetermined ambient environment are not necessarily limited to substantial apparatuses or the like, but also function in the form of methods and the like. The present invention will be described further with reference to various embodiments in the drawings.

First Embodiment

A block diagram of a vehicular brake control apparatus to which an embodiment of the present invention is applied is shown in FIG. 1. This vehicular brake control apparatus can be installed in practically any vehicle, such as an engine-installed vehicle, an electric vehicle, etc. A construction of the vehicular brake control apparatus will be described hereinafter with reference to FIG. 1. As shown in FIG. 1, the vehicular brake control apparatus includes a pre-charge main switch 1, a surrounding environment detector portion 2, a brake operation detector portion 3, a pre-charge permission determination portion 4, an in-cabin warning portion 5, and a brake actuator 6 corresponding to a braking force control device. The pre-charge main switch 1 is disposed in, for example, an instrument panel in the vehicle cabin, and is provided for the on/off switching operation of a driver. The pre-charge main switch 1 serves as a switch for selecting whether to activate the vehicular brake control apparatus in this embodiment. A signal indicating the state of on/off switching of the pre-charge main switch 1 is input to the pre-charge permission determination portion 4. On the basis of the signal, the pre-charge permission determination portion 4 determines whether to execute pre-charge control processing. The surrounding environment detector portion 2 is provided for detecting the environment around the vehicle and the running state of the host vehicle.
The surrounding environment detector portion 2 outputs an electric signal that serves as a reference for use in determining whether the vehicle is in an environment where the pre-charge needs to be performed, such as a risky location or a situation that may be risky. Examples of the surrounding environment detector portion 2 include an infrastructure information input device, a vehicle speed sensor, a steering angle sensor, a navigation device, an image recognition device, an obstacle recognizing sensor, etc. The infrastructure information input device is a device for acquiring the information that cannot be obtained only via the sensors provided in the vehicle, by using vehicle-to-vehicle communication or road-to-vehicle communication or the like. For example, the device acquires infrastructure information prepared by an AHS system. For example, the infrastructure information input device makes it possible to obtain surrounding circumstance information obtained via cameras disposed at intersections and the like and to output an electric signal indicating information regarding an intersection through which the vehicle is to pass to the pre-charge permission determination portion 4. The vehicle speed sensor is a device for outputting an output signal corresponding to the speed of the vehicle equipped with the vehicular brake control apparatus. Although the vehicle speed sensor is cited herein as an example of a vehicle speed detector device, the vehicle speed sensor may also be replaced by tire wheel speed sensors, which are becoming common equipment of vehicles. In that case, each tire wheel speed sensor outputs a detection signal corresponding to the tire wheel speed. Therefore, the vehicle speed may be determined by the pre-charge permission determination portion 4 on the basis of the detection signals.
If a brake ECU or another ECU determines the vehicle speed from the detection signals from the tire wheel speed sensors, the pre-charge permission determination portion 4 may receive a signal regarding the vehicle speed from the brake ECU or the like. The steering angle sensor outputs, as a detection signal, a signal corresponding to the amount of steering operation performed by a driver. On the basis of the detection signal from the steering angle sensor, the cornering state of the vehicle can be determined. The navigation device stores road maps providing information regarding intersections, curves, etc., and information regarding individual roads within the road maps, for example, information that a road requires a stop for safety, information that a road may possibly be busy with pedestrians and the like, information that a road is used for school commutation, information that a road has blind corners and the like so that drivers cannot easily grasp conditions ahead, etc., that is, various information indicating roads that require the pre-charge. The stored information can be output as electric signals from the navigation device to the pre-charge permission determination portion 4. The image recognition device is capable of capturing conditions present ahead of the vehicle or within the cabin as images, using, for example, a vehicle-installed camera or the like. On the basis of image data of pictures taken by the vehicle-installed camera, it is possible to perform an analysis for a pedestrian or the like ahead of the vehicle and an analysis for the direction of the line of vision of a driver. The image data provided by the image recognition device or the information regarding pedestrians obtained after the analysis of image data can be output as electric signals to the pre-charge permission determination portion 4. The obstacle recognizing sensor is a sensor for detecting conditions ahead of the vehicle or adjacent to the vehicle.
For example, the sensor is designed so as to detect the distance to an obstacle, such as a pedestrian or the like, by using laser, ultrasonic waves, infrared rays or the like. Specific examples of the obstacle recognizing sensor include a laser radar that applies laser beams to an area ahead of the vehicle, receives reflected light therefrom, and computes the distance to a preceding vehicle on the basis of the interval between the laser emission time and the laser reception time, and a night vision device that displays conditions ahead of the vehicle captured via infrared radiation at night, etc. Detection signals from the obstacle recognizing sensor, that is, an electric signal indicating the distance to an obstacle, an electric signal indicating image data regarding conditions ahead of the vehicle or information regarding pedestrians or the like obtained after analysis of image data, etc., can be output to the pre-charge permission determination portion 4. The brake operation detector portion 3 outputs an electric signal corresponding to the operation of the brake pedal (brake operating member) performed by a driver. Examples of the brake operation detector portion 3 include a stroke sensor that outputs an electric signal corresponding to the amount of stroke of the brake pedal, and a depressing force sensor that outputs an electric signal corresponding to the depressing force applied to the brake pedal. On the basis of the electric signals from the brake operation detector portion 3, it is determined whether the brake pedal is operated. The pre-charge permission determination portion 4 is formed by a microcomputer that has a CPU, a ROM, a RAM, an I/O unit, etc. The pre-charge permission determination portion 4 executes a pre-charge determining processing in accordance with programs stored in the ROM.
Specifically, the pre-charge permission determination portion 4 is designed to receive electric signals input from the pre-charge main switch 1 and the surrounding environment detector portion 2. When an electric signal that causes execution of a pre-charge control processing is input from the pre-charge main switch 1, the pre-charge permission determination portion 4 allows execution of the pre-charge control processing on the basis of the electric signal from the surrounding environment detector portion 2. The ROM of the pre-charge permission determination portion 4 stores a predetermined criterion for determining whether to execute the pre-charge. Then, if the environment around the vehicle or the running state of the host vehicle detected on the basis of the electric signal from the surrounding environment detector portion 2 meets the criterion, the pre-charge permission determination portion 4 outputs an electric signal indicating to the in-cabin warning portion 5 that the pre-charge is being executed, and also outputs an electric signal for causing the brake actuator 6 to execute the pre-charge. For example, with reference to the infrastructure information obtained from the infrastructure information input device, it is determined whether a certain environment is prone to corner collisions, or whether a certain environment is an environment where a left-turning vehicle is likely to collide with an oncoming vehicle or the like, or whether a certain environment is an environment where a right- or left-turning vehicle is likely to collide with a pedestrian on a crosswalk, etc., on the basis of whether a predetermined criterion is met.
Furthermore, with reference to information regarding various roads and road maps obtained from the navigation device, it is determined whether a certain area is an area with high possibility of presence of pedestrians and the like, such as an intersection, a residential area, a highway service area, etc., on the basis of a predetermined criterion. As an example, consider a case where the needs for warning on an area-by-area basis, associated with the road maps, are stored in the navigation device in the form of digitized values or flag settings. If a digitized value is greater than or equal to a predetermined threshold value or if a flag is on, it is determined that a predetermined criterion is met, so that an electric signal is output to the in-cabin warning portion 5 and the brake actuator 6 so as to execute the pre-charge before the vehicle runs in that area or the like. Still further, with reference to information regarding various roads and road maps obtained from the navigation device and detection signals from the vehicle speed sensor, it is determined from the present vehicle speed whether there is possibility of the vehicle overrunning a stop line (or a position that requires a stop) or a vicinity thereof, or whether there is possibility of the vehicle overrunning an intersection that is not equipped with a traffic signal or a vicinity of the intersection, etc., on the basis of a predetermined criterion, for example, a braking distance expected from the vehicle speed. Still further, with reference to information regarding various roads and road maps obtained from the navigation device and detection signals from the vehicle speed sensor and the steering angle sensor, it is determined whether there is possibility of the vehicle turning right or left on the basis of a predetermined criterion.
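The stop-line overrun criterion mentioned above, a braking distance expected from the vehicle speed, can be sketched with a constant-deceleration model. The patent does not specify the formula; the model d = v²/(2a) and the deceleration value below are assumptions for illustration.

```python
# Illustrative overrun check: the vehicle may overrun a stop-requiring
# position when the braking distance expected from the present vehicle
# speed exceeds the remaining distance to that position.

def may_overrun(speed_mps: float, distance_to_stop_m: float,
                max_decel_mps2: float = 7.0) -> bool:
    # Constant-deceleration braking-distance model (assumed): d = v^2 / (2a)
    braking_distance_m = speed_mps ** 2 / (2.0 * max_decel_mps2)
    return braking_distance_m > distance_to_stop_m
```

In the described apparatus, the distance to the stop line would come from the navigation device's road map and the speed from the vehicle speed sensor.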
For example, if it is determined that the steering operation performed by the driver is greater than or equal to a predetermined threshold value from the detection signal from the steering angle sensor in a situation where the vehicle starts to run again after verification of a stop of the vehicle at an intersection on the basis of a road map and a detection signal from the vehicle speed sensor, it is assumed that there is possibility of the vehicle turning right or left. Therefore, in this case, too, an electric signal is output to the in-cabin warning portion 5 and the brake actuator 6 so as to execute the pre-charge beforehand. Although the detection signal from the steering angle sensor is cited above as an example, it is also possible to adopt a construction in which a signal indicating a driver's operation of a blinker or direction indicator is input to the pre-charge permission determination portion 4 and, on the basis of the operation signal, it is determined whether the vehicle is to turn left or right, and accordingly the aforementioned electric signal is output. Still further, with reference to information regarding various roads and road maps obtained from the navigation device and to information from the image recognition device, it is determined, for example, whether a driver's behavior corresponds to a route of roads that the vehicle is scheduled to follow, on the basis of a predetermined criterion. Examples of the case where such determination is convenient include a case where a driver has been staring rightward for a predetermined time although the vehicle is scheduled to turn left. In this case, too, an electric signal is output to the in-cabin warning portion 5 and the brake actuator 6 so as to execute the pre-charge beforehand. The in-cabin warning portion 5 is provided for visually or auditorily indicating to the driver that the pre-charge is being executed.
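The turn-possibility determination above can be sketched as a conjunction of three conditions: a verified stop at an intersection, the vehicle starting to run again, and a steering operation meeting a predetermined threshold. The threshold value and function names are assumptions.

```python
# Sketch of the turn-possibility check used to trigger the pre-charge:
# after a verified stop at an intersection, the vehicle moves off and the
# steering angle meets the predetermined threshold (assumed value below).

STEERING_THRESHOLD_DEG = 30.0

def turn_likely(stopped_at_intersection: bool, speed_mps: float,
                steering_angle_deg: float) -> bool:
    starting_off = stopped_at_intersection and speed_mps > 0.0
    return starting_off and abs(steering_angle_deg) >= STEERING_THRESHOLD_DEG
```

As the text notes, a blinker (direction indicator) signal could replace the steering-angle condition in the same check.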
The in-cabin warning portion 5 is formed by, for example, a warning indicator lamp provided in an instrument panel in the vehicle cabin, a warning buzzer provided in the vehicle cabin, a voice/sound producing device such as a speaker provided in an audio device or the navigation device, or the like. The indicator lamp is able to visually indicate to the driver that the pre-charge is being executed. The voice producing device is able to auditorily indicate to the driver that the pre-charge is being executed. Specifically, upon input of the aforementioned electric signal from the pre-charge permission determination portion 4, for example, the in-cabin warning portion 5 indicates the execution of the pre-charge to the driver by lighting up or producing voice or sound. The brake actuator 6 is formed by a brake mechanism that is capable of automatic pressurization. In other words, the brake actuator 6 is designed to be capable of automatically applying the W/C pressure so as to reduce or eliminate the ineffective stroke, that is, a play stroke that occurs before the friction-applying member, such as a brake pad or the like, is pressed against a friction-receiving member such as a disc rotor or the like. FIG. 2 shows an example of the brake actuator 6. As shown in FIG. 2, the brake actuator 6 includes two brake systems (an X-form piping), that is, a first brake system that controls the brake fluid pressure applied to the left front wheel and the right rear wheel, and a second brake system that controls the brake fluid pressure applied to the right front wheel and the left rear wheel. A brake pedal 11, that is, a brake operating member that is depressed by the driver in order to apply braking force to the vehicle, is connected to a booster device 12 and a master cylinder 13, which are brake fluid pressure sources.
When the driver depresses the brake pedal 11, the booster device 12 boosts the depressing force to pressurize the master pistons 13a, 13b disposed in the master cylinder 13. Therefore, the same master cylinder pressure (hereinafter referred to as "M/C pressure") will be produced in the primary chamber 13c and the secondary chamber 13d that are defined by the master pistons 13a, 13b. The master cylinder 13 is equipped with a master reservoir 13e having passageways that are connected to the primary chamber 13c and the secondary chamber 13d. Via the passageways, the master reservoir 13e supplies brake fluid into the master cylinder 13, and stores surplus brake fluid from the master cylinder 13. Each of the passageways has a very small diameter as compared with the diameter of the main conduits that extend from the primary chamber 13c and the secondary chamber 13d. Therefore, these passageways achieve an orifice effect when the brake fluid flows from the primary chamber 13c and the secondary chamber 13d of the master cylinder 13 into the master reservoir 13e. The M/C pressure produced in the master cylinder 13 is conducted to the W/Cs 14, 15, 34, 35 through the first brake system 50a and the second brake system 50b. The brake systems 50a, 50b will be described below. Since the first brake system 50a and the second brake system 50b have substantially the same construction, only the first brake system 50a will be described below. Although the second brake system 50b will not be described, the construction thereof can be understood with reference to the construction of the first brake system 50a. The first brake system 50a is equipped with a conduit A that is a main conduit that conducts the aforementioned M/C pressure to the W/C 14 provided for the left front wheel FL and the W/C 15 provided for the right rear wheel RR. Via the conduit A, the W/C pressure is produced in each W/C 14, 15.
The conduit A is provided with a first differential pressure control valve 16 that is formed by an electromagnetic valve capable of controlling two positions, that is, an opened state and a differential pressure state. In the first differential pressure control valve 16, the opened-state valve position is held during an ordinary braking state. When electric power is supplied to the solenoid coil, the valve position changes to the differential pressure state. While the first differential pressure control valve 16 is at the differential pressure state valve position, the brake fluid is allowed to flow in the direction from the side of the W/Cs 14, 15 to the side of the master cylinder 13 only when the brake fluid pressure of the pair of W/Cs 14, 15 exceeds the M/C pressure by at least a predetermined value. Therefore, the brake fluid pressure is always controlled so that the W/C 14, 15-side pressure does not exceed the master cylinder 13-side pressure by the predetermined pressure value or more. Conduit protection is thus realized. The conduit A branches into two conduits A1, A2 at a downstream point on the W/C 14, 15 side of the first differential pressure control valve 16. One of the two conduits A1, A2 is provided with a first pressure increase control valve 17 that controls the increase of the brake fluid pressure supplied to the W/C 14, and the other one is provided with a second pressure increase control valve 18 that controls the increase of the brake fluid pressure supplied to the W/C 15. The first and second pressure increase control valves 17, 18 are each formed by an electromagnetic valve that is provided as a two-position valve capable of controlling an opened state and a closed state. While the first or second pressure increase control valve 17, 18 is controlled to the opened state, the M/C pressure or the brake fluid pressure produced by discharge of brake fluid from a pump 19 (described below) can be applied to the W/C 14 or 15.
During ordinary braking achieved by the driver's operation of the brake pedal 11, the first differential pressure control valve 16 and the first and second pressure increase control valves 17, 18 are always controlled to the opened state.

The first differential pressure control valve 16 and the first and second pressure increase control valves 17, 18 are provided with safety valves 16a, 17a, 18a, respectively, connected in parallel therewith. The safety valve 16a of the first differential pressure control valve 16 is provided to allow conduction of the M/C pressure to the W/Cs 14, 15 upon the driver's depression of the brake pedal 11 while the first differential pressure control valve 16 is at the differential pressure state valve position. The safety valves 17a, 18a of the pressure increase control valves 17, 18 are provided to allow reduction of the W/C pressure of the left front wheel FL and the right rear wheel RR in response to the driver's release of the brake pedal 11 if the releasing operation is performed while the pressure increase control valves 17, 18 are controlled to the closed state, particularly during ABS control.

Conduits B, which connect the portions of the conduits A between the first and second pressure increase control valves 17, 18 and the W/Cs 14, 15 to a reservoir hole of a reservoir 20, are provided with a first pressure reduction control valve 21 and a second pressure reduction control valve 22, respectively. The valves 21, 22 are each formed by an electromagnetic valve provided as a two-position valve capable of controlling conduction and closed states. During ordinary braking, the first and second pressure reduction control valves 21, 22 are always controlled to the closed state.

A conduit C, a main conduit, connects the reservoir 20 and the conduit A.
The conduit C is provided with a self-priming pump 19 that is actuated by an electric motor 60 so as to suck brake fluid from the reservoir 20 and discharge it toward the master cylinder 13 side or the W/C 14, 15 side. The pump 19 is equipped with safety valves 19a, 19b so as to allow the one-way suction/discharge operation. In order to mitigate the pulsation of the brake fluid discharged by the pump 19, a portion of the conduit C on the discharge side of the pump 19 is provided with a fixed-capacity damper 23.

A conduit D is connected to a portion of the conduit C between the reservoir 20 and the pump 19. The conduit D is connected to the primary chamber 13c of the master cylinder 13 and is provided with a first control valve 24 capable of controlling shut-off and opened states.

During brake assist control, TCS control, ABS control or anti-side-skid control, brake fluid is sucked from the master cylinder 13 through the conduit D by the pump 19 and discharged therefrom to the conduit A so as to supply brake fluid to the W/C 14, 15 side and thereby increase the W/C pressure of the object tire wheels.

The construction of the second brake system 50b is substantially the same as that of the first brake system 50a. Specifically, the first differential pressure control valve 16 corresponds to a second differential pressure control valve 36; the first and second pressure increase control valves 17, 18 correspond to third and fourth pressure increase control valves 37, 38, respectively; the first and second pressure reduction control valves 21, 22 correspond to third and fourth pressure reduction control valves 41, 42, respectively; the first control valve 24 corresponds to a second control valve 44; the pump 19 corresponds to a pump 39; and the conduits A, B, C and D correspond to conduits E, F, G and H, respectively. The brake actuator 6 is constructed as described above.
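The default valve positions described above for ordinary braking can be summarized in a short sketch. The dictionary representation and identifier names are illustrative assumptions, not part of the disclosed construction:

```python
# Sketch of the first brake system's valve positions during ordinary braking,
# per the description above. Names and representation are illustrative only.
ORDINARY_BRAKING_STATES = {
    "differential_pressure_control_valve_16": "open",  # held open until energized
    "pressure_increase_control_valve_17": "open",      # toward W/C 14
    "pressure_increase_control_valve_18": "open",      # toward W/C 15
    "pressure_reduction_control_valve_21": "closed",   # conduit B side
    "pressure_reduction_control_valve_22": "closed",   # conduit B side
}

def is_ordinary_braking(states: dict) -> bool:
    """Return True when every valve sits at its ordinary-braking position."""
    return states == ORDINARY_BRAKING_STATES
```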
In the brake actuator 6 with the configuration described above, the voltage application control of the control valves 16, 17, 18, 21, 22, 24, 36, 37, 38, 41, 42, 44 and of the electric motor 60 for driving the pumps 19, 39 is executed on the basis of the electric signal from the pre-charge permission determination portion 4. In this manner, the W/C pressure generated in the W/Cs 14, 15, 34, 35 is controlled.

In the brake actuator 6, each control valve assumes the valve position indicated in FIG. 2 during normal braking. When M/C pressure is generated in accordance with the amount of depression of the brake pedal 11, the M/C pressure is conducted to the W/Cs 14, 15, 34, 35 so that braking force is generated on each tire wheel.

During traction control, anti-skid control (vehicle stability control) or the like, the first and second differential pressure control valves 16, 36 are controlled to the differential pressure state and the electric motor 60 is energized so as to adjust the braking force on the control-object wheels. The brake fluid suction/discharge operation of the pumps 19, 39 is thereby performed, so that the W/Cs 14, 15, 34, 35 corresponding to the control-object wheels are automatically pressurized via the conduits C, G and the conduits A, E to generate braking force.

The pre-charge control processing executed by the vehicular brake control apparatus constructed as described above will be described with reference to the flowchart shown in FIG. 3. Portions of the processing shown in the drawing correspond to devices, units or the like that execute the various processings.

The vehicular brake control apparatus executes the pre-charge control processing in accordance with the flowchart shown in FIG. 3 when an ignition switch (not shown) provided in the vehicle is turned on.
The pre-charge control processing is executed by the pre-charge permission determination portion 4 of the vehicular brake control apparatus in every predetermined control cycle.

At 100, it is determined whether the pre-charge main switch 1 is on, that is, whether the driver has requested the pre-charge control. This determination is carried out on the basis of an electric signal input to the pre-charge permission determination portion 4 from the pre-charge main switch 1. Specifically, if the pre-charge main switch 1 is on and an electric signal indicating a request for the pre-charge control has been output, the determination is affirmative. If the pre-charge main switch 1 is off and an electric signal indicating the absence of a request for the pre-charge control has been output, or the electric signal indicating a request for the pre-charge control has not been output, the determination is negative.

If the determination is negative, it is considered that the driver has not requested the pre-charge control, and the processing proceeds to 110, at which a pre-charge control ending processing is performed; the pre-charge control processing then ends. The pre-charge control ending processing is executed if the pre-charge, after being started, is to be ended, or if the pre-charge is not performed. Specifically, in this processing, the driving of the electric motor 60 is stopped in order to end the pre-charge, and the first and second differential pressure control valves 16, 36 are set to the opened-state valve position as in a normal braking operation. Furthermore, the warning by the in-cabin warning portion 5 is stopped.

Conversely, if the determination is affirmative, it is considered that the driver has requested the pre-charge control, and the processing proceeds to 120.

At 120, it is determined whether the brake pedal 11 is in an undepressed state.
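The branching of the FIG. 3 flowchart, starting from the switch check at 100, can be sketched as follows. This is a minimal illustration: sensor and switch inputs are passed in as booleans, the function name is an assumption, and the steps referenced in the comments are those detailed in the surrounding paragraphs:

```python
def precharge_control_cycle(main_switch_on: bool,
                            pedal_depressed: bool,
                            risk_detected: bool) -> str:
    """One cycle of the pre-charge control processing (a sketch of FIG. 3).

    Returns the action selected by the flowchart branch taken. The
    pedal-depressed branch leads to the priority check of steps 150/160,
    which is not expanded here.
    """
    if not main_switch_on:        # step 100 negative: no driver request
        return "end_precharge"    # step 110: pre-charge control ending processing
    if pedal_depressed:           # step 120 negative: pedal is depressed
        return "check_priority"   # steps 150/160
    if risk_detected:             # step 130 affirmative: risk in surroundings
        return "start_precharge"  # step 140: pre-charge control starting processing
    return "end_precharge"        # no risk: step 110
```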
This determination is performed on the basis of a detection result provided by the brake operation detector portion 3. If the determination is affirmative, the processing proceeds to 130.

At 130, it is determined whether the surrounding environment involves risk, specifically, whether the surrounding environment requires execution of the pre-charge so that braking force will be promptly generated. This processing is performed by determining whether the information obtained from the electric signals input from the surrounding environment detector portion 2 to the pre-charge permission determination portion 4 meets a predetermined criterion as described above. For example, determinations are made regarding the contents listed below, by the methods described above.

(1) Whether the surrounding environment is prone to corner collisions; whether it is an environment where a left-turning vehicle is likely to collide with an oncoming vehicle or the like; or whether it is an environment where a right- or left-turning vehicle is likely to collide with a pedestrian on a crosswalk, etc.

(2) Whether the surrounding environment includes an area with a high possibility of the presence of pedestrians and the like, such as an intersection, a residential area, a highway service area, etc.

(3) Whether there is a possibility of the vehicle overrunning a stop line or a vicinity thereof, or overrunning an intersection that is not equipped with a traffic signal or a vicinity of such an intersection, etc. The content of the processing in this case is illustrated in the flowchart of FIG. 4.

Firstly, at 131a, the present vehicle speed is computed on the basis of a detection signal input from the vehicle speed sensor. Subsequently, at 131b, the position of the host vehicle is detected on the basis of the input from the navigation device.
Then, at 131c, it is determined from the information regarding various roads and road maps obtained from the navigation device whether there is a possibility of the vehicle overrunning a stop line or a vicinity thereof, or an intersection not equipped with a traffic signal or a vicinity of such an intersection. This determination is performed, for example, on the basis of whether the distance to the stop line or the signal-less intersection is less than the braking distance expected from the vehicle speed.

If the determination at 131c is affirmative, the processing proceeds to 131d, at which a flag indicating that there is a need for the pre-charge is set. Conversely, if the determination at 131c is negative, the processing proceeds to 131e, at which the flag indicating that there is a need for the pre-charge is cleared. In this manner, the determination as to whether there is any risk as indicated in FIG. 3 is performed; the determination at 130 is performed on the basis of the state of the flag mentioned above.

(4) Whether there is a possibility of the vehicle turning right or left. The content of the processing in this case is illustrated in the flowchart of FIG. 5.

Firstly, at 132a and 132b, processing similar to that executed at 131a and 131b shown in FIG. 4 is executed. Subsequently, at 132c, it is determined from the information regarding various roads and road maps obtained from the navigation device and the determined vehicle speed whether the vehicle is in a stopped state at an intersection.

If the determination at 132c is negative, there is no need for warning, and the flag indicating that there is a need for the pre-charge is therefore cleared at 132d. Conversely, if the determination at 132c is affirmative, the processing proceeds to 132e, at which the vehicle speed is computed again on the basis of the detection signal from the vehicle speed sensor.
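The overrun determination at 131c, comparing the distance to a stop line with the braking distance expected from the vehicle speed, can be sketched as follows. The deceleration value is an illustrative assumption; the patent does not specify how the expected braking distance is computed:

```python
def may_overrun_stop_line(speed_mps: float,
                          distance_to_line_m: float,
                          max_decel_mps2: float = 8.0) -> bool:
    """Sketch of the determination at 131c: affirmative when the distance to
    the stop line (or signal-less intersection) is less than the braking
    distance expected from the vehicle speed. The assumed deceleration of
    8.0 m/s^2 is a placeholder, not a value from the disclosure."""
    braking_distance = speed_mps ** 2 / (2.0 * max_decel_mps2)
    return distance_to_line_m < braking_distance
```

At 20 m/s the sketch yields a 25 m braking distance, so a stop line 20 m ahead triggers the flag while one 30 m ahead does not.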
Subsequently, at 132f, it is determined from the re-determined vehicle speed whether the host vehicle has started. If it is determined at 132f that the host vehicle has not started, the processing at 132e and 132f is repeated until the host vehicle starts. If it is determined that the host vehicle has started, the processing proceeds to 132g.

At 132g, the output signal of the steering angle sensor is input. Subsequently, at 132h, it is determined whether the host vehicle is about to turn right or left. This determination is performed on the basis of whether the steering angle determined from the output signal input at 132g is greater than a predetermined threshold value. If the determination at 132h is affirmative, the processing proceeds to 132i, at which the flag indicating that there is a need for the pre-charge is set. If the determination is negative, the processing proceeds to 132d, at which the flag indicating that there is a need for the pre-charge is cleared. In this manner, the determination as to whether there is any risk as indicated in FIG. 3 is executed; the determination at 130 is executed on the basis of the state of the flag mentioned above.

(5) Whether the driver's behavior corresponds to the route of roads that the vehicle is scheduled to follow. The content of the processing in this case is illustrated in the flowchart of FIG. 6.

Firstly, at 133a, the driver's behavior detected by the image recognition device is input. Subsequently, at 133b, processing similar to that executed at 131b is executed.

Subsequently, at 133c, it is determined whether the driver's behavior corresponds to the driving route or is out of correspondence with it. This determination is performed on the basis of a criterion such as, for example, whether the driver has been staring rightward for at least a predetermined time although the vehicle is scheduled to turn left.
If it is determined at 133c that the driver's behavior is out of correspondence with the driving route, the processing proceeds to 133d, at which the flag indicating that there is a need for the pre-charge is set. If it is determined that the driver's behavior corresponds to the driving route, the processing proceeds to 133e, at which the flag indicating that there is a need for the pre-charge is cleared. In this manner, the determination as to whether there is any risk as indicated in FIG. 3 is performed; the determination at 130 is performed on the basis of the state of the flag mentioned above.

If the result of the determination at 130, executed as described above, is that there is a need for the pre-charge, the processing proceeds to 140. If the result is that there is no need for the pre-charge, the processing proceeds to 110, at which the pre-charge control ending processing is executed as described above.

At 140, a pre-charge control starting processing is executed; that is, an electric signal for executing the pre-charge is output to the brake actuator 6, and an electric signal indicating that the pre-charge is being executed is output to the in-cabin warning portion 5.

In the brake actuator 6, the electric motor 60 is therefore driven to cause the pumps 19, 39 to perform the brake fluid suction/discharge operation, and the first and second differential pressure control valves 16, 36 are set to the differential pressure state. Via the primary chamber 13c and the secondary chamber 13d of the M/C 13, the pumps 19, 39 thus suck and discharge brake fluid so that the W/Cs 14, 15, 34, 35 are pressurized, that is, pre-charged, via the conduits C, G and the conduits A, E.

The W/C pressure generated in the W/Cs 14, 15, 34, 35 reduces or eliminates the ineffective stroke between the friction-applying member and the friction-receiving object.
Specifically, if the brake actuator 6 is of a type that incorporates disc brakes for generating braking force, the ineffective stroke between the brake pad and the disc rotor in each brake caliper is reduced or eliminated. If the brake actuator 6 is of a type that incorporates drum brakes for generating braking force, the ineffective stroke between the brake shoe and the drum internal wall surface in each brake drum is reduced or eliminated.

Furthermore, the in-cabin warning portion 5 produces a warning in the form of a visual indication by a lamp or the like, or in the form of voices or sounds, so as to indicate to the driver that the pre-charge is being executed. The driver can therefore recognize that the pre-charge is being executed. Furthermore, since the ineffective stroke between the friction-applying member and the friction-receiving object has been reduced or eliminated, braking force can be promptly generated on the vehicle when the driver depresses the brake pedal in accordance with the need.

Still further, if the processing proceeds to 140, the fact that the pre-charge has been started and is being executed is indicated by, for example, a flag set during the pre-charge.

If the determination at 120 is negative, the processing proceeds to 150, at which it is determined whether the pre-charge is being executed. This determination is performed on the basis of whether the aforementioned under-the-pre-charge flag has been set; that is, it is determined whether the depression of the brake pedal was performed during the pre-charge. If the pre-charge is being executed and the brake pedal is in the depressed state, it must be determined which of the W/C pressure generated by the pre-charge and the W/C pressure generated by the depression of the brake pedal is to be given priority.

Hence, at 150, it is determined whether the pre-charge is being executed.
If the pre-charge is not being executed, the pre-charge control processing is immediately ended. In this case, since the pre-charge has not been executed, a W/C pressure corresponding to the depression of the brake pedal is generated.

If it is determined at 150 that the pre-charge is being executed, the processing proceeds to 160, at which it is determined whether the W/C pressure caused by the depression of the brake pedal is greater than the W/C pressure caused by the pre-charge. If the W/C pressure caused by the depression of the brake pedal is greater, it is considered that the need for the pre-charge no longer exists, and the processing proceeds to 110, at which the pre-charge control ending processing is executed. Therefore, the W/C pressure corresponding to the depression of the brake pedal is generated.

Conversely, if the W/C pressure caused by the depression of the brake pedal 11 is less than the W/C pressure caused by the pre-charge, it is considered that the need for the pre-charge still exists, and the pre-charge control processing is ended without stopping the pre-charge. Therefore, if the driver only slightly depresses the brake pedal 11 during the pre-charge, the pre-charge is not ended. Since the pre-charge is continued in this case, braking force can be promptly generated on the vehicle when the driver depresses the brake pedal in accordance with the need.

As can be understood from the foregoing description, the vehicular brake control apparatus of this embodiment detects risky occasions and locations that may be risky and, for such locations and the like, performs the pre-charge so that braking force will be promptly generated. Therefore, the pre-charge can be precisely performed under the necessary circumstances, irrespective of the driver's accelerator operation. Hence, when the driver depresses the brake pedal at such a location or the like, braking force is promptly generated.
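The priority decision of steps 150 and 160 described above can be sketched as follows. The function name is illustrative and the pressures are in arbitrary units:

```python
def resolve_pedal_during_precharge(precharging: bool,
                                   pedal_wc_pressure: float,
                                   precharge_wc_pressure: float) -> str:
    """Sketch of steps 150/160: when the pedal is depressed, decide whether
    the pre-charge continues or ends."""
    if not precharging:                            # step 150 negative
        return "pedal_pressure"                    # normal braking pressure applies
    if pedal_wc_pressure > precharge_wc_pressure:  # step 160 affirmative
        return "end_precharge"                     # step 110: pedal takes over
    return "continue_precharge"                    # slight depression: keep pre-charge
```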
Thus, accidents and the like can be prevented.

If a navigation device or an infrastructure information input device is used as the surrounding environment detector portion 2, it becomes possible to perform the pre-charge in advance also for locations that cannot be detected by the various sensors mounted on the vehicle for detecting the ambient environment. This arrangement therefore makes it possible to effectively perform the pre-charge not only when the situation is risky but also when it may be risky, and thus contributes to the prevention of accidents and the like.

Furthermore, since the vehicular brake control apparatus of this embodiment performs the pre-charge in accordance with the ambient environment, unnecessarily frequent performance of the pre-charge is avoided; that is, the pre-charge is not performed when it is not needed, but only when it is truly needed, for example, on emergency occasions and the like. It therefore becomes possible to avoid giving the driver a sense of discomfort regarding brake feeling.

Even in a situation where the vehicle starts running again after a temporary stop, the pre-charge is performed if it is determined from the surrounding environment that the pre-charge is needed. In such a case, too, braking force can be promptly generated on the vehicle.

First Modification of First Embodiment

Although in the foregoing first embodiment the determination regarding risk is performed at 130 in FIG. 3, conceivable examples of the determination regarding risk exist besides those indicated in FIGS. 4 to 6. Such an example will be described as a first modification of the first embodiment. In terms of the overall construction of the vehicular brake control apparatus as shown in the block diagram of FIG. 1, this modification is substantially the same as the first embodiment.
This modification differs merely in the processing executed by the pre-charge permission determination portion 4, and only the different features will be described below.

In this modification, the vehicular brake control apparatus detects the performance of a braking operation by a preceding vehicle 200 and correspondingly performs the pre-charge. As indicated in the schematic diagram of FIG. 7 showing running vehicles, if a braking operation of the preceding vehicle 200 occurs, the deceleration of the preceding vehicle 200 may possibly result in a rapid approach of the preceding vehicle 200 to a host vehicle 300. Such a case is considered risky, so the pre-charge is performed.

FIG. 8 is a flowchart of the risk determination processing executed by the pre-charge permission determination portion 4 in this modification.

At 134a, a processing of determining the braking of the preceding vehicle 200 is executed. Specifically, it is determined from information obtained from the surrounding environment detector portion 2 whether a braking operation of the preceding vehicle 200 has occurred. More specifically, since the surrounding environment detector portion 2 is formed by an infrastructure information input device, an image recognition device or the like as mentioned above, the surrounding environment detector portion 2 is able to detect the state of the brake lamps of the preceding vehicle 200 or to acquire information regarding the braking operation of the preceding vehicle 200. The state of the preceding vehicle 200 is thus considered an element of the surrounding environment.
If the image recognition device of the surrounding environment detector portion 2 detects the turning-on of the brake lamps of the preceding vehicle 200, or if the infrastructure information input device of the surrounding environment detector portion 2 receives, via the vehicle-to-vehicle communication or the road-to-vehicle communication, information indicating that a braking operation of the preceding vehicle 200 has been performed, it is determined that the braking of the preceding vehicle 200 has occurred; that is, the determination at 134a is affirmative.

Subsequently, at 134b, a determination regarding the degree of risk is executed. For example, it is determined whether the inter-vehicle distance between the preceding vehicle 200 and the host vehicle 300 is less than a predetermined value (first predetermined value) N1 and whether the relative speed between the preceding vehicle 200 and the host vehicle 300 is greater than a predetermined value (second predetermined value) A1. The inter-vehicle distance and the relative speed between the preceding vehicle 200 and the host vehicle 300 can be acquired via the surrounding environment detector portion 2, since the surrounding environment detector portion 2 includes an obstacle recognizing sensor and the like.

The predetermined value N1 and the predetermined value A1 are set at values that minimize or avoid a change in the driver's brake feeling. That is, the values are set so as to avoid the pre-charge in cases where the inter-vehicle distance is so great that there is substantially no risk, and in cases where the relative speed of the host vehicle 300 with respect to the preceding vehicle 200 is substantially zero or negative, so that there is substantially no possibility of the host vehicle 300 catching up with the preceding vehicle 200.
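The determinations at 134a and 134b described above can be combined into a short sketch. The default values standing in for N1 and A1 are placeholders, since the patent leaves the actual values to tuning:

```python
def preceding_vehicle_risk(braking_detected: bool,
                           inter_vehicle_distance: float,
                           relative_speed: float,
                           n1: float = 30.0,
                           a1: float = 2.0) -> bool:
    """Sketch of 134a/134b: the pre-charge flag is set only when the
    preceding vehicle is braking AND the inter-vehicle distance is below N1
    AND the closing (relative) speed exceeds A1. The defaults for n1 (m)
    and a1 (m/s) are illustrative placeholders, not disclosed values."""
    if not braking_detected:  # 134a negative: no braking operation observed
        return False
    # 134b: degree-of-risk condition on distance and relative speed
    return inter_vehicle_distance < n1 and relative_speed > a1
```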
If an affirmative determination is made at both 134a and 134b, the processing proceeds to 134c, at which a flag indicating that there is a need for the pre-charge is set. If a negative determination is made at 134a or 134b, the processing proceeds to 134d, at which the flag indicating that there is a need for the pre-charge is cleared.

Thus, performing the pre-charge upon occurrence of a braking operation of the preceding vehicle 200 makes it possible to generate braking force in a quick response if the preceding vehicle 200 slows down and comes closer to the host vehicle 300. Furthermore, instead of performing the pre-charge in every case where a braking operation of the preceding vehicle 200 occurs, the vehicular brake control apparatus of this modification performs the pre-charge only when the condition indicated at 134b is met. This restriction prevents the pre-charge from being performed unnecessarily, and therefore improves the precision of the pre-charge.

Second Modification of First Embodiment

Another example of the determination regarding risk described above in conjunction with the first embodiment and shown at 130 in FIG. 3 will be described as a second modification of the first embodiment. In terms of the overall construction of the vehicular brake control apparatus as shown in the block diagram of FIG. 1, this modification is substantially the same as the foregoing first embodiment. This modification differs merely in the processing executed by the pre-charge permission determination portion 4, and only the different features will be described below.

In this modification, the vehicular brake control apparatus detects a vehicle cutting in front of the host vehicle and correspondingly performs the pre-charge. That is, a situation where a vehicle 500 is about to cut in front of a host vehicle 400, as indicated in the schematic diagram of two running vehicles in FIG. 9, is considered risky.
In such a situation, the pre-charge is executed.

FIG. 10 is a flowchart of the risk determination processing executed by the pre-charge permission determination portion 4 in this modification.

At 135a, a cut-in determination processing of determining whether a vehicle is about to cut in front of the host vehicle 400 is executed. Specifically, it is determined from information obtained from the surrounding environment detector portion 2 whether the present situation is one where a vehicle 500 existing at a side of the host vehicle 400 (hereinafter simply referred to as the “laterally adjacent vehicle 500”) is about to cut in front of the host vehicle 400. More specifically, since the surrounding environment detector portion 2 is formed by an infrastructure information input device, an image recognition device or the like as mentioned above, the surrounding environment detector portion 2 is able to detect the state of the direction indicator of the laterally adjacent vehicle 500 or to acquire information regarding the steering of the laterally adjacent vehicle 500. The state of the laterally adjacent vehicle 500 is thus considered an element of the surrounding environment. If the image recognition device of the surrounding environment detector portion 2 detects the flashing of the direction indicator of the laterally adjacent vehicle 500, or if the infrastructure information input device of the surrounding environment detector portion 2 obtains information regarding the steering of the laterally adjacent vehicle 500 via the vehicle-to-vehicle communication or the road-to-vehicle communication and detects from the steering information that the laterally adjacent vehicle 500 is moving into an area forward of the host vehicle 400, it is determined that the laterally adjacent vehicle 500 is about to cut in; that is, the determination at 135a is affirmative.

Subsequently, at 135b, a determination regarding the degree of risk is executed.
For example, it is determined whether the inter-vehicle distance between the laterally adjacent vehicle 500 and the host vehicle 400, acquired via the surrounding environment detector portion 2, is less than a predetermined value (third predetermined value) N2 and whether the relative speed between the laterally adjacent vehicle 500 and the host vehicle 400 is greater than a predetermined value (fourth predetermined value) A2.

The predetermined value N2 and the predetermined value A2 are set at values that minimize or avoid a change in the driver's brake feeling. That is, the values are set so as to avoid the pre-charge in cases where the inter-vehicle distance is so great that there is substantially no risk, and in cases where the relative speed of the host vehicle 400 with respect to the laterally adjacent vehicle 500 is substantially zero or negative, so that there is substantially no possibility of the host vehicle 400 catching up with the vehicle 500 if the vehicle 500 cuts in front of the host vehicle 400.

Furthermore, the running path of the laterally adjacent vehicle 500 may be determined on the basis of the steering angle information acquired by the surrounding environment detector portion 2 via the vehicle-to-vehicle communication. In this case, it is possible to perform the pre-charge only when the running path of the host vehicle 400 does not have a good clearance from the running path of the laterally adjacent vehicle 500.

If an affirmative determination is made at both 135a and 135b, the processing proceeds to 135c, at which a flag indicating that there is a need for the pre-charge is set. If a negative determination is made at 135a or 135b, the processing proceeds to 135d, at which the flag indicating that there is a need for the pre-charge is cleared.
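The cut-in determinations at 135a and 135b, together with the optional running-path clearance check described above, can be sketched as follows. The defaults standing in for N2 and A2 and the path-clearance input are illustrative placeholders:

```python
def cut_in_risk(cut_in_detected: bool,
                inter_vehicle_distance: float,
                relative_speed: float,
                paths_clear: bool = False,
                n2: float = 20.0,
                a2: float = 1.0) -> bool:
    """Sketch of 135a/135b for the cut-in modification: the flag is set when
    a cut-in is detected, the running paths lack a good clearance, and the
    distance/relative-speed condition is met. n2 (m), a2 (m/s) and the
    paths_clear input are placeholders, not disclosed values."""
    if not cut_in_detected:  # 135a negative: no cut-in observed
        return False
    if paths_clear:          # optional check: paths have a good clearance
        return False
    # 135b: degree-of-risk condition on distance and relative speed
    return inter_vehicle_distance < n2 and relative_speed > a2
```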
Thus, if the pre-charge is performed in a situation where the laterally adjacent vehicle 500 is about to cut in front of the host vehicle 400, braking force can be correspondingly quickly generated. Furthermore, instead of executing the pre-charge in every case where a cut-in of the laterally adjacent vehicle 500 occurs, the vehicular brake control apparatus of this modification executes the pre-charge only when the condition indicated at 135b is met. This restriction prevents the pre-charge from being performed unnecessarily, and therefore improves the precision of the pre-charge.

Second Embodiment

A second embodiment of the present invention will be described. This embodiment differs from the first embodiment in that the amount of brake fluid used for the pre-charge is varied between the front and rear wheels. In terms of the overall construction of the vehicular brake control apparatus as shown in the block diagram of FIG. 1 and the like, the second embodiment is substantially the same as the first embodiment. Only the different features of the second embodiment will be described below.

FIG. 11 is a correlation diagram indicating the relationship between the amount of brake fluid consumed (hereinafter referred to as the “amount of consumed fluid”) by a brake caliper A provided for a front wheel and the W/C pressure, and the relationship between the amount of consumed fluid by a brake caliper B provided for a rear wheel and the W/C pressure.

In general, the brake caliper A provided for each front wheel and the brake caliper B provided for each rear wheel have different specifications. Therefore, as indicated in FIG. 11, the amounts of consumed fluid Q1, Q2 of the front and rear wheels differ even if equal W/C pressures PS are to be generated on the front wheel and the rear wheel.
Specifically, since the front wheel brake caliper A is usually designed with a greater capacity than the rear wheel brake caliper B, a greater amount of consumed fluid is needed for the front wheel than for the rear wheel in order to generate a fixed W/C pressure PS. Therefore, in this embodiment, the amount of consumed fluid is varied between the front and rear wheels.

FIG. 12 is a flowchart of an amount-of-flow setting processing of setting the amounts of consumed fluid of the front and rear wheels. This processing is executed when there is a pre-charge request, that is, in the pre-charge control starting processing at 140 in FIG. 3 described above in conjunction with the first embodiment.

Firstly at 141a, a pressurization time setting processing is executed. In this processing, a length of time TA for supplying brake fluid to pressurize the W/Cs 14, 35 corresponding to the front wheels and a length of time TB for supplying brake fluid to pressurize the W/Cs 15, 34 corresponding to the rear wheels are set. Each of the time TA and the time TB corresponds to the time for supplying the amount of consumed fluid Q1 or Q2 needed in order to generate the W/C pressure PS of the pre-charge indicated in FIG. 11 for the W/Cs 14, 15, 34, 35 of the front or rear wheels.

Subsequently at 141b, the pre-charge is executed for all the tire wheels. Specifically, pressurization is performed on the brake calipers A of the W/Cs 14, 35 of the front wheels for the time TA, and pressurization is performed on the brake calipers B of the W/Cs 15, 34 of the rear wheels for the time TB. After that, the W/C pressure of each tire wheel is maintained.

In this manner, the amount of consumed fluid needed for the pre-charge is varied among the brake calipers, that is, the amounts of consumed fluid appropriate to the individual brake calipers are set, so as to allow execution of more suitable pre-charge.

First Modification of Second Embodiment

In the foregoing second embodiment, the amounts of consumed fluid appropriate to the brake calipers are set so that the amount of consumed fluid needed for the pre-charge is variable among the brake calipers, and the calipers are pressurized for the lengths of time TA, TB corresponding to the set amounts of consumed fluid so that each W/C pressure reaches the set value PS. Other methods are also possible. For example, direct detection of the W/C pressure of each brake caliper also makes it possible to bring each W/C pressure to the set value PS. That is, as shown in FIG. 13, pressure sensors 14a, 15a, 34a, 35a are provided corresponding to the W/Cs 14, 15, 34, 35 of the brake actuator 6 of the vehicular brake control apparatus shown in FIG. 2. Each W/C pressure is detected by a corresponding one of the pressure sensors, and is brought to a desired set value PS.

FIG. 14 is a flowchart of a W/C pressure control processing in this modification. As in the foregoing second embodiment, this processing is executed when there is a pre-charge request, that is, in the pre-charge control starting processing at 140 in FIG. 3 described above in conjunction with the first embodiment. This processing is executed for each wheel.

Firstly at 142a, it is determined whether the W/C pressure detected by a pressure sensor 14a, 15a, 34a or 35a has become equal to or greater than the set pressure PS. If the determination at 142a is negative, the processing proceeds to 142b, at which the W/C pressure is raised. If the determination at 142a is affirmative, the processing proceeds to 142c, at which the raising of the W/C pressure is stopped in order to maintain the detected W/C pressure.

Thus, by directly detecting the W/C pressure of each W/C 14, 15, 34, 35 and raising the pressure until it reaches the set value PS, each W/C pressure can be brought to the set value PS even though the amount of consumed fluid needed for the pre-charge varies among the brake calipers.
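The per-wheel control of this modification is a simple threshold feedback: raise the pressure while the sensed value is below PS, then hold. A schematic sketch with a simulated sensor; the target value and the pressure increment are illustrative assumptions:

```python
# Sketch of the per-wheel W/C pressure control (142a/142b/142c):
# raise while below the set value PS, hold once it is reached.

SET_PRESSURE_PS = 0.3  # assumed pre-charge target (illustrative units)

def control_step(sensed_pressure: float) -> str:
    """One pass of the flowchart: 'raise' (142b) or 'hold' (142c)."""
    if sensed_pressure >= SET_PRESSURE_PS:  # determination 142a affirmative
        return "hold"
    return "raise"

# Simulated pressurization: each 'raise' adds a fixed increment.
pressure, actions = 0.0, []
for _ in range(10):
    action = control_step(pressure)
    actions.append(action)
    if action == "raise":
        pressure += 0.1

assert pressure >= SET_PRESSURE_PS
assert actions[-1] == "hold"  # once PS is reached, pressure is only held
```

The same structure applies to each of the four wheels independently, which is why differing caliper fluid consumption no longer matters.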
Therefore, it becomes possible to perform more suitable pre-charge as in the second embodiment. Although in this modification the W/C pressure of each one of the W/Cs 14, 15, 34, 35 is detected, detection of the pressure may also be performed for each brake system.

Second Modification of Second Embodiment

The slip rate of each tire wheel may also be used as a basis for bringing each W/C pressure to the set value PS. In this case, on the basis of detection signals of the tire wheel speed sensors of the surrounding environment detector portion 2, a slip rate is determined from the tire wheel speeds and an estimated vehicle body speed determined from the tire wheel speeds. If the slip rate exceeds a predetermined value N, it is determined that the corresponding W/C pressure has reached the set value PS.

FIG. 15 is a flowchart of a W/C pressure control processing in this modification. As in the foregoing second embodiment, this processing is executed when there is a pre-charge request, that is, in the pre-charge control starting processing at 140 in FIG. 3 described above in conjunction with the first embodiment. This processing is executed for each wheel.

Firstly at 143a, it is determined whether the slip rate of a wheel is greater than the predetermined value N. If the determination at 143a is negative, the processing proceeds to 143b, at which the W/C pressure is raised. If the determination at 143a is affirmative, the processing proceeds to 143c, at which the raising of the W/C pressure is stopped in order to maintain the detected W/C pressure.

The slip rate increases as the tire wheel speed drops below the estimated vehicle body speed in accordance with the W/C pressure. Therefore, it is possible to assume that the W/C pressure has reached the set value PS if the slip rate reaches the predetermined value N. Accordingly, when the slip rate reaches the predetermined value N, the raising of the W/C pressure is stopped.
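The slip-rate variant can be sketched the same way: estimate the body speed from the wheel speeds, compute each wheel's slip, and stop raising the pressure once slip exceeds N. The wheel speeds, the threshold, and the max-wheel-speed body-speed estimate below are illustrative assumptions:

```python
# Sketch of the second modification: slip-rate-based W/C pressure control.

SLIP_THRESHOLD_N = 0.02  # assumed predetermined value N (2% slip)

def slip_rate(wheel_speed: float, body_speed: float) -> float:
    """Slip rate: how far the wheel speed lags the estimated body speed."""
    if body_speed <= 0.0:
        return 0.0
    return (body_speed - wheel_speed) / body_speed

def keep_raising(wheel_speed: float, body_speed: float) -> bool:
    """143a: keep raising W/C pressure (143b) while slip is at most N."""
    return slip_rate(wheel_speed, body_speed) <= SLIP_THRESHOLD_N

# One common estimate of the body speed is the fastest wheel speed.
wheel_speeds = [19.9, 20.0, 19.5, 20.0]  # m/s, per wheel (invented)
body_speed = max(wheel_speeds)

assert keep_raising(19.9, body_speed) is True   # 0.5% slip: still raising
assert keep_raising(19.5, body_speed) is False  # 2.5% slip: hold (143c)
```

Replacing the pressure sensors of the first modification with wheel-speed signals is what allows the component simplification noted below.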
Incidentally, the predetermined value N can be determined as follows. That is, the braking force that is generatable when the W/C pressure reaches the set pressure PS can be found from the W/C pressure-braking force characteristic. Therefore, the predetermined value N is determined by finding a slip rate corresponding to the set pressure PS from a relationship between the road surface friction coefficient μ and the slip rate S (μ-S curve) corresponding to the braking force found as described above. Although the slope of the μ-S curve changes in accordance with the road surface friction coefficient μ, the slip rate corresponding to the set value PS is very small, and is represented by only a portion of the rising segment of the μ-S curve. Therefore, it is not necessary to change the predetermined value N in accordance with the change in the slope of the μ-S curve which occurs in accordance with the road surface friction coefficient μ.

Thus, by detecting the slip rate of each wheel and raising the W/C pressure until the slip rate reaches the predetermined value N corresponding to the set value PS, each W/C pressure can be brought to the set value PS even though the amount of consumed fluid needed for the pre-charge varies among the brake calipers. Therefore, it becomes possible to perform more suitable pre-charge as in the foregoing second embodiment. Furthermore, as compared with the first modification, the second modification does not need pressure sensors, so that corresponding simplification of component elements of the vehicular brake control apparatus can be achieved.

Third Embodiment

A third embodiment of the present invention will be described. This embodiment is distinguished from the first embodiment in that the amount of brake fluid used for the pre-charge is changed in accordance with the vehicle speed, although the third embodiment is substantially the same as the first embodiment in terms of, for example, the overall construction of the vehicular brake control apparatus as shown in the block diagram of FIG. 1.

In the first and second embodiments, the amount of brake fluid used for the pre-charge is not changed in accordance with the vehicle speed. However, when the vehicle speed is very low, the effect of the pre-charge becomes small, and therefore the amount of brake fluid used for the pre-charge may be reduced or the pre-charge may be omitted altogether. Furthermore, if the brake pads have a wheel speed dependency in that, for example, the friction force decreases during a high-speed run, it is preferable that the amount of brake fluid used for the pre-charge be greater during a run at a high speed than during a run at a lower speed.

Therefore, the amount of brake fluid used for the pre-charge is increased with increasing vehicle speed by, for example, determining a set value PS corresponding to the vehicle speed determined via the vehicle speed sensor of the surrounding environment detector portion 2 through the use of a map indicating a relationship between the set value PS and the vehicle speed, and setting an amount of brake fluid corresponding to the set value PS. Thus, it becomes possible to perform suitable pre-charge corresponding to the vehicle speed.

The changing of the amount of brake fluid used for the pre-charge in accordance with the vehicle speed, described above in conjunction with the third embodiment of the present invention, may also be combined with the first or second embodiment or any one of the modifications of the first and second embodiments.
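The speed-dependent set value can be sketched as a map lookup with linear interpolation between calibration points; the map points below are illustrative assumptions, not calibrated values:

```python
# Sketch of the third embodiment: pick the pre-charge set value PS from
# a map of vehicle speed -> PS that increases with speed.

PS_MAP = [            # (vehicle speed km/h, set value PS) - assumed values
    (0.0, 0.0),       # very low speed: pre-charge effect is small -> omit
    (30.0, 0.2),
    (100.0, 0.3),
    (180.0, 0.4),     # high speed: pads may lose friction -> more fluid
]

def set_value_ps(speed_kmh: float) -> float:
    """Linear interpolation over the map, clamped at the end points."""
    if speed_kmh <= PS_MAP[0][0]:
        return PS_MAP[0][1]
    for (s0, p0), (s1, p1) in zip(PS_MAP, PS_MAP[1:]):
        if speed_kmh <= s1:
            return p0 + (p1 - p0) * (speed_kmh - s0) / (s1 - s0)
    return PS_MAP[-1][1]

assert set_value_ps(0.0) == 0.0             # pre-charge effectively omitted
assert abs(set_value_ps(65.0) - 0.25) < 1e-9  # midway between 30 and 100
assert set_value_ps(200.0) == 0.4           # clamped at the top of the map
```

The brake-fluid amount (or pressurization time, in the second-embodiment combination) is then derived from the interpolated PS.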
Although the execution of the pre-charge through the use of information regarding various roads and road maps obtained from the navigation device is described above in conjunction with the embodiments, the execution of the pre-charge may be limited only to, for example, the cases where the setting of a route has been accomplished by using the navigation device.

Furthermore, although in the foregoing embodiments the electric signals from the surrounding environment detector portion 2 are input to the pre-charge permission determination portion 4, this arrangement is not essential. For example, in-vehicle communication networks, termed in-vehicle LAN (so-called CAN), are being developed, and various data can be uploaded to the in-vehicle LAN. Such an in-vehicle LAN can also be used to receive data indicating a result of the detection performed by the surrounding environment detector portion 2. In that case, the results of the detection performed by the surrounding environment detector portion 2 may be uploaded to the in-vehicle LAN as data obtained through various operations of an ECU other than the pre-charge permission determination portion 4 (e.g., an engine ECU, a brake ECU, etc.). The present invention is also applicable to this arrangement or construction.

The above-described brake actuator 6 capable of automatic pressurization and having the brake systems indicated in FIG. 2 is merely an example. The brake actuator 6 may have any structure as long as the brake actuator 6 is capable of automatic pressurization. For example, a hydro booster that uses an electro-hydraulic pump to perform assist may be used as the brake actuator 6. Furthermore, the brake actuator 6 is not limited to a hydraulic brake construction. For example, an electric brake that electrically generates braking force may be used as the brake actuator 6.
Still further, in a case where the vehicle starts after stopping at an intersection, the brake is in a state where braking force has already been generated and the W/C pressure has been raised. Therefore, if the vehicular brake control apparatus is designed to perform the pre-charge only during such a brake state, the maintenance of a portion of the then-existing W/C pressure will suffice, and therefore the brake actuator 6 does not need to have a construction capable of automatic pressurization.

While the above description is of the preferred embodiments of the present invention, it should be appreciated that the invention may be modified, altered, or varied without deviating from the scope and fair meaning of the following claims.

BRIEF DESCRIPTION OF THE DRAWINGS

Other objects, features and advantages of the present invention will be understood more fully from the following detailed description made with reference to the accompanying drawings. In the drawings:

FIG. 1 is a block diagram illustrating a construction of a vehicular brake control apparatus in accordance with a first embodiment of the present invention;

FIG. 2 is a diagram showing an example of the construction of a brake actuator provided in the vehicular brake control apparatus shown in FIG. 1;

FIG. 3 is a flowchart of a pre-charge control processing executed by the vehicular brake control apparatus shown in FIG. 1;

FIG. 4 is a flowchart of an example of the risk determination processing indicated in FIG. 3;

FIG. 5 is a flowchart of another example of the risk determination processing indicated in FIG. 3;

FIG. 6 is a flowchart of still another example of the risk determination processing indicated in FIG. 3;

FIG. 7 is a schematic diagram illustrating a case where, during a run of a host vehicle, there is a preceding vehicle;

FIG. 8 is a flowchart of a risk determination processing in accordance with a first modification of the first embodiment;

FIG. 9 is a schematic diagram illustrating a case where, during a run of a host vehicle, there is a cut-in vehicle;

FIG. 10 is a flowchart of a risk determination processing in accordance with a second modification of the first embodiment;

FIG. 11 is a correlation diagram indicating a relationship between the amount of brake fluid consumed (hereinafter referred to as "amount of consumed fluid") by a brake caliper A provided for a front wheel and the W/C pressure, and a relationship between the amount of brake fluid consumed by a brake caliper B provided for a rear wheel and the W/C pressure;

FIG. 12 is a flowchart of an amount-of-flow setting processing of setting the amounts of consumed fluid of front and rear wheels;

FIG. 13 is a block diagram illustrating a construction of a vehicular brake control apparatus in accordance with a first modification of a second embodiment of the present invention;

FIG. 14 is a flowchart of a W/C pressure control processing in the first modification of the second embodiment; and

FIG. 15 is a flowchart of a W/C pressure control processing in a second modification of the second embodiment.
Metformin, a biguanide derivative, is considered a first-line treatment in patients with type 2 diabetes. In addition to controlling glycemic levels and reducing the risk of diabetes complications, several observational studies have reported that its use is associated with a decreased risk of cancer overall and across several specific cancer sites ([@B1]--[@B7]). With respect to lung cancer, only three observational studies have investigated that outcome ([@B8]--[@B10]). In one study, metformin use was associated with a 45% decreased risk of lung cancer (hazard ratio 0.55 \[95% CI 0.37--0.82\]) ([@B9]), whereas in the two other studies, no statistically significant association was found ([@B8],[@B10]). However, these observational studies had several methodological shortcomings. Lung cancer was a secondary outcome in two of these studies, and thus, the findings may have been partly due to chance as a result of multiple comparisons ([@B8],[@B10]). Furthermore, the method of the cohort selection was biased in one study ([@B10]), and exposure misclassifications led to immortal time bias in another study ([@B9]). Because of these methodological shortcomings, as well as insufficient follow-up in some of these studies, the relationship between metformin and lung cancer incidence remains unclear. Laboratory studies ([@B1],[@B5]--[@B7],[@B11]) have shown cytostatic or cytotoxic effects of biguanides in various models and have provided evidence for biologic plausibility for an antineoplastic effect of metformin. For example, the drug has been shown to act systemically to reduce insulin levels if they are elevated at baseline (as is often the case in type 2 diabetes), and this may reduce proliferation of the subset of cancers that are growth-stimulated by insulin. 
Furthermore, by acting directly on cancer cells or on cells at risk for transformation, metformin and other biguanides can impair mitochondrial ATP production, leading to the activation of liver kinase B1 (LKB1)--AMP-activated protein kinase (AMPK) signaling, resulting in a decrease in protein synthesis and lipid synthesis via inhibition of mammalian target of rapamycin and fatty acid synthase, respectively. Metformin may also have additional proposed mechanisms of action, and there is increasing interest in the hypothesis that metformin has utility in cancer prevention and/or treatment. Indeed, metformin inhibited tobacco carcinogen--induced lung cancer in an animal model ([@B12]), but to date, population-based studies conducted to evaluate the association between metformin and lung cancer have produced conflicting results due to the presence of several important biases.

Given the time-related biases in previous observational studies, we conducted a large population-based study specifically designed to avoid these methodologic shortcomings to investigate the association between the use of metformin and the risk of lung cancer in patients with type 2 diabetes.

RESEARCH DESIGN AND METHODS {#s1}
===========================

Data source {#s2}
-----------

We used the U.K. General Practice Research Database (GPRD), the world's largest computerized database, representing the primary care longitudinal records of more than 11 million patients from across the U.K. The GPRD is representative of the U.K. general population, with age and sex distributions comparable to those reported by the U.K. National Population Census ([@B13]). All information collected in the GPRD has been subjected to validation studies and been proven to contain consistent and high-quality data ([@B14]). The study protocol was approved by the independent scientific advisory committee of the GPRD and the research ethics committee of the Jewish General Hospital, Montreal, Quebec, Canada.
Study cohort {#s3}
------------

Within the GPRD population, we assembled a cohort of all patients, aged at least 40 years, who had received at least one antidiabetic prescription between 1 January 1988 and 31 December 2009. Cohort entry was defined as the date of a first prescription for an oral hypoglycemic agent (OHA) during this period. All patients included in the study were from up-to-standard general medical practices, thus meeting GPRD research quality standards, and were required to have at least 1 year of medical history in the GPRD before their cohort entry. Patients who received insulin as their first antidiabetic treatment were not included because they likely represented patients with type 1 diabetes or patients with advanced type 2 diabetes; however, patients who eventually required insulin during follow-up were retained. Finally, patients diagnosed with lung cancer at any time before cohort entry were excluded. All patients were monitored until a first-ever diagnosis of lung cancer, death from any cause, end of registration with a general practice, or end of the study period (31 December 2009), whichever came first.

Case and control subject selection {#s4}
----------------------------------

A nested case--control analysis was conducted within the defined cohort. All incident cases of lung cancer occurring during follow-up were identified on the basis of Read diagnostic codes, which is the standard clinical terminology system used in general practice in the U.K. ([@B15]). The date of each case subject's lung cancer diagnosis was defined as the index date. For the purposes of the analyses, only case subjects with at least 1 year of follow-up were retained to consider a latency period. Up to 10 control subjects, randomly selected from the case subject's risk set, were matched to each patient on year of birth (age), sex, calendar year of cohort entry, and duration of follow-up.
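The risk-set sampling described above can be sketched with toy records; all patient data below are invented for illustration and this is not the study's actual matching code:

```python
# Toy sketch of risk-set sampling for a nested case-control analysis:
# for each case, eligible controls are cohort members still at risk
# (alive, registered, lung-cancer free) at the case's index date who
# match on sex, year of birth, and year of cohort entry, and therefore
# have equal follow-up before the index date.
import random

cohort = [
    # (id, sex, birth_year, entry_year, end_of_followup, cancer_year)
    (1, "M", 1940, 1995, 2005, 2003),  # case, diagnosed in 2003
    (2, "M", 1940, 1995, 2009, None),
    (3, "M", 1940, 1995, 2002, None),  # left the cohort before 2003
    (4, "F", 1940, 1995, 2009, None),  # wrong sex
    (5, "M", 1940, 1995, 2009, 2007),  # future case: still eligible in 2003
]

def risk_set(case, cohort):
    _, sex, by, ey, _, dx = case
    return [p for p in cohort
            if p[0] != case[0]
            and p[1] == sex and p[2] == by and p[3] == ey
            and p[4] >= dx                       # still followed at index date
            and (p[5] is None or p[5] > dx)]     # not yet diagnosed

case = cohort[0]
eligible = risk_set(case, cohort)
assert [p[0] for p in eligible] == [2, 5]        # 3 left early, 4 wrong sex
controls = random.sample(eligible, k=min(10, len(eligible)))
```

Note that a patient who later becomes a case (id 5) is a valid control at an earlier index date, which is a standard property of risk-set sampling.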
To avoid excluding patients, the matching criteria were relaxed for four lung cancer patients. Three individuals were matched with a control subject who had the same year of cohort entry ± 1 year, and one patient was matched to control subjects with a year of birth ± 3 years and the year of cohort entry ± 2 years. Control subjects were assigned the same index date as the patients, thus ensuring that case subjects and matched control subjects had equal duration of follow-up before the index date. By definition, all control subjects were alive, not previously diagnosed with lung cancer, and were registered with a general practice when matched to a given case subject. Cancer diagnoses, including lung cancer, have shown high validity in the GPRD, with sensitivities and positive predictive values exceeding 90% ([@B16]--[@B19]), resulting in case ascertainment rates comparable to U.K. cancer registries ([@B20]).

Exposure assessment {#s5}
-------------------

For both case and control subjects, we obtained information on all antidiabetic agents prescribed between cohort entry and the index date. Exposures initiated in the year before the index date were excluded from the analysis to account for a latency time window, because these are unlikely to be associated with the outcome. The primary exposure definition was ever use of metformin, defined as receiving at least one prescription between cohort entry and the year before the index date. In secondary exposure definitions, we considered whether a dose--response relationship existed between the use of metformin and lung cancer. Therefore, among patients deemed to have ever used metformin in the primary exposure definition, we investigated whether lung cancer risk varied with the total number of prescriptions received, cumulative duration, and cumulative dose. The total number of metformin prescriptions was tabulated by summing all metformin prescriptions received between cohort entry and index date.
Cumulative duration was calculated by summing the prescribed duration associated with each metformin prescription received between cohort entry and index date, and cumulative dose was computed by multiplying the daily dose of each metformin prescription by its specified prescription duration and adding these prescription-specific values across all prescriptions received by a given patient between cohort entry and index date. All three dose--response variables were categorized in quartiles, based on the distribution of use in the control subjects.

Potential confounders {#s6}
---------------------

The risk estimates were adjusted for comorbid clinical conditions and exposures known to be associated with lung cancer that might also influence the choice of antidiabetic therapy. These conditions and exposures were measured at any time from at least 1 year before cohort entry up to 1 year before the index date. Thus, the following potential confounders were considered: smoking status (ever, never, or unknown), BMI (≥30 vs. \<30 kg/m^2^), excessive alcohol use, last recorded glycated hemoglobin A~1c~ (HbA~1c~) at least 1 year before index date, diabetes duration before cohort selection (defined as a diagnosis of type 2 diabetes or an HbA~1c~ level \>7.0%, whichever appeared first in the medical record), chronic obstructive pulmonary disease and asthma ([@B18]), previous cancer (other than nonmelanoma skin cancer), and ever use of statins, aspirin, nonsteroidal anti-inflammatory drugs, and other antidiabetic agents that were individually adjusted for in the model, including metformin, sulfonylureas, thiazolidinediones, insulins, and others, consisting of meglitinides, dipeptidyl peptidase-4 inhibitors, α-glucosidase inhibitors, glucagon-like peptide analogs, and guar gum.

Statistical analysis {#s7}
--------------------

The characteristics of the case subjects and matched control subjects were summarized using descriptive statistics.
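The exposure metrics defined under Exposure assessment (prescription count, cumulative duration, cumulative dose, then quartiles based on the control distribution) can be sketched with an invented prescription history; the records and quartile cutpoints below are illustrative, not study data:

```python
# Sketch of the secondary exposure metrics for one hypothetical patient:
# sum the metformin prescriptions between cohort entry and index date.
import bisect

prescriptions = [
    # (daily_dose_mg, duration_days) - invented records
    (1000, 28),
    (1000, 28),
    (2000, 84),
]

n_rx = len(prescriptions)
cum_duration_days = sum(days for _, days in prescriptions)
cum_dose_mg = sum(dose * days for dose, days in prescriptions)

assert n_rx == 3
assert cum_duration_days == 140
assert cum_dose_mg == 224_000  # 28 g + 28 g + 168 g, in mg

# Each metric is then cut into quartiles at cutpoints taken from the
# control subjects' distribution (assumed cutpoints, in days):
control_quartile_cuts = [90, 365, 1095]
quartile_index = bisect.bisect_right(control_quartile_cuts, cum_duration_days)
assert quartile_index == 1  # falls in the second quartile (0-indexed)
```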
The overall lung cancer incidence rate with 95% CI based on a Poisson distribution was calculated by dividing the total number of patients with incident lung cancer occurring during follow-up by the total number of person-years of follow-up. Conditional logistic regression was used to estimate rate ratios (RRs) along with 95% CIs of lung cancer associated with the use of metformin. The regression models were conditioned on the four matching factors (age, sex, calendar year of cohort entry, and duration of follow-up) and adjusted for the potential confounders listed above. In the primary analysis, we evaluated whether ever use of metformin, when compared with never use, was associated with a decreased risk of lung cancer. We also conducted two secondary analyses. In the first analysis, we determined whether there was a dose--response relationship between the use of metformin and lung cancer in terms of number of prescriptions, cumulative duration of use, and cumulative dose. Linear trend was assessed by entering these dose--response variables in the conditional logistic models as continuous variables. In the second analysis, we stratified case and control subjects on smoking status to determine whether smoking was an effect modifier of the metformin-lung cancer association. All analyses were conducted with SAS 9.2 software (SAS Institute, Cary, NC).

Sensitivity analyses {#s8}
--------------------

We conducted three sensitivity analyses to assess the robustness of the findings. Initially, all analyses were restricted to case and matched control subjects with at least 1 year of follow-up and excluded antidiabetic medications initiated during the year before the index date to consider a latency time window. Thus, the first sensitivity analysis consisted of repeating the analyses by using latency time windows of 6 months and 2 years.
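The incidence-rate calculation can be reproduced from the totals reported in the Results (1,061 incident lung cancers over 528,356 person-years); the sketch below uses a normal approximation to the Poisson 95% CI, which matches the reported interval at the precision given:

```python
# Incidence rate per 1,000 person-years, with an approximate Poisson CI.
import math

cases = 1_061
person_years = 528_356

rate_per_1000 = cases / person_years * 1000
half_width = 1.96 * math.sqrt(cases) / person_years * 1000

assert round(rate_per_1000, 1) == 2.0               # 2.0/1,000 person-years
assert round(rate_per_1000 - half_width, 1) == 1.9  # lower 95% bound
assert round(rate_per_1000 + half_width, 1) == 2.1  # upper 95% bound
```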
In the second sensitivity analysis, we assessed potential misclassification of exposure by redefining ever use of metformin as receiving at least three prescriptions within a 12-month period, thus minimizing the inclusion of patients who may not have been regular users or who used these drugs sporadically. Finally, in the third sensitivity analysis, to assess the effect of adjusting for variables potentially on the causal pathway, we repeated the analysis, adjusting for the potential confounders measured at cohort entry.

RESULTS {#s9}
=======

A total of 115,923 patients (55.2% men) newly treated with OHAs met the study inclusion criteria ([Fig. 1](#F1){ref-type="fig"}). Mean age was 64.1 (SD 12.0) years, and the median HbA~1c~ was 8.2% at cohort entry. With respect to OHAs received at cohort entry, 67.4% of patients received metformin monotherapy, 29.6% received sulfonylurea monotherapy, 1.3% received other OHAs in monotherapy, and 1.7% were taking a combination of at least two OHAs.

![Flow chart of study subjects.](124fig1){#F1}

The mean follow-up was 5.6 (SD 3.6) years, generating 528,356 person-years of follow-up. During this time, 1,061 patients were diagnosed with lung cancer, resulting in an overall lung cancer rate of 2.0/1,000 person-years (95% CI 1.9--2.1). The nested case--control analysis was restricted to the 808 case subjects and 7,765 matched control subjects with at least 1 year of follow-up. As reported in [Table 1](#T1){ref-type="table"}, case subjects and matched control subjects were similar on several characteristics, such as duration of diabetes before cohort entry, HbA~1c~, BMI, and ever use of nonsteroidal anti-inflammatory drugs. As expected, the prevalence of smoking was higher in case subjects than in matched control subjects (85.2 vs. 60.0%, respectively).
Furthermore, case subjects were more likely than control subjects to have had a history of chronic obstructive pulmonary disease, to have had a history of asthma, to have used alcohol excessively, and to have ever used aspirin and statins ([Table 1](#T1){ref-type="table"}).

###### Characteristics of lung cancer case subjects and matched control subjects at index date

![](124tbl1)

[Table 2](#T2){ref-type="table"} presents the results of the primary analysis. Overall, the use of metformin was not associated with a decreased rate of lung cancer (adjusted RR 0.94 \[95% CI 0.76--1.17\]). In secondary analyses, no dose--response was observed in number of metformin prescriptions received, cumulative duration, and cumulative dose, with all adjusted RRs around the null value ([Table 2](#T2){ref-type="table"}).

###### Crude and adjusted RRs of lung cancer incidence associated with the use of metformin

![](124tbl2)

When case and control subjects were stratified on smoking status, no effect modification was observed with respect to this variable ([Table 3](#T3){ref-type="table"}). In nonsmokers, ever use of metformin was not associated with a decreased rate of lung cancer (adjusted RR 1.19 \[95% CI 0.62--2.26\]). Likewise, no association was observed in smokers (0.90 \[0.70--1.15\]; [Table 3](#T3){ref-type="table"}).

###### Crude and adjusted RRs of lung cancer incidence associated with the use of metformin stratified by smoking status

![](124tbl3)

Sensitivity analyses {#s10}
--------------------

In the first sensitivity analysis, varying the latency time window to 6 months and 2 years yielded results consistent with those of the primary analysis (adjusted RR 1.02 \[95% CI 0.83--1.26\] and 1.02 \[0.80--1.29\], respectively). In the second analysis, we assessed the effect of potential exposure misclassification by redefining ever use of metformin as receiving at least three prescriptions within a 12-month period.
This analysis yielded null results, consistent with those of the primary analysis (0.97 \[0.80--1.17\]). Finally, adjusting for potential confounders at baseline did not materially change the results (0.97 \[0.78--1.20\]).

CONCLUSIONS {#s11}
===========

The results of this large population-based study indicate that the use of metformin is not associated with a decreased risk of lung cancer in patients with type 2 diabetes. These results remained unchanged in secondary analyses, which considered dose--response and smoking status, as well as in several sensitivity analyses. As such, our findings do not support laboratory models that focused on the direct and indirect effect of metformin on lung cancer and tumor proliferation ([@B11],[@B12]). Although laboratory data have provided evidence for plausible mechanisms of action of biguanides that may reduce cancer risk and/or improve cancer prognosis, such plausibility of course does not necessarily demonstrate that metformin has clinical antineoplastic activity. The models do not fully recapitulate the clinical situation in many respects, but one obvious area for future investigation concerns pharmacokinetics and drug exposure levels in lungs clinically as compared with in the models. The findings of this study are comparable to those observed by Ferrara et al. ([@B8]), where ever use of metformin, compared with never use, was not associated with a decreased risk of lung cancer (hazard ratio 1.0 \[95% CI 0.8--1.1\]). However, our results contrast sharply with those published by Lai et al. ([@B9]). In that study, ever use of metformin was associated with a significantly decreased risk of lung cancer (0.55 \[0.37--0.88\]) ([@B9]).
Interestingly, similar risk reductions were observed in that study with other antidiabetic treatments, such as thiazolidinediones (0.55 \[0.32--0.94\]) and α-glucosidase inhibitors (0.61 \[0.38--0.98\]), while null results were observed for insulin (1.00 \[0.68--1.45\]) and sulfonylureas (1.27 \[0.75--2.15\]) ([@B9]). Such impressive risk reductions are likely due to immortal time bias, a bias that is introduced with time-fixed analyses that misclassify unexposed person-time as exposed ([@B21],[@B22]). The current study had a number of strengths. First, our study avoided immortal time bias by using a design and analysis that inherently considered exposure to metformin as time-dependent ([@B21],[@B22]). Second, we were able to assemble a large cohort of patients with type 2 diabetes with a significant number of patients with lung cancer. Third, data are prospectively collected in the GPRD and thus recall bias is avoided. Fourth, the GPRD records information on a number of potential confounders, such as smoking, BMI, and HbA~1c~ levels, which are often absent in administrative databases. Finally, we adjusted the models for HbA~1c~, duration of diabetes before cohort entry (i.e., duration of nontreated diabetes), and matched case and control subjects on duration of follow-up (i.e., duration of treated diabetes). We believe that all efforts went into controlling for the effects of diabetes and its severity, which may be independently associated with an increased risk of lung cancer ([@B18]). This study also has some limitations. Although the GPRD contains information on variables such as smoking, which is perhaps the most important potential confounder in this study, the database lacks information on family history of lung cancer, race, level of physical activity, diet, past lung biopsies, bronchoscopies, computed tomography scans, and other hospital procedures related to lung cancer ([@B23]). 
Thus, residual confounding due to unmeasured or incompletely measured covariates may still be present, although these unmeasured variables are not strongly associated with the outcome and thus are unlikely to have affected the validity of the results ([@B24]). Another limitation of the GPRD is the lack of information on compliance with the prescribed treatment. The GPRD only contains information on prescriptions written by general practitioners, and therefore, whether prescriptions were actually filled or taken as indicated by patients is unknown. Such exposure misclassification would bias the RRs toward the null. However, the results of our sensitivity analysis requiring at least three prescriptions within a 12-month period suggest that this misclassification was likely minimal. Finally, although cancer diagnoses have been shown to be well recorded in the GPRD ([@B16]--[@B19]), the database does not contain specific information on tumor grade and stage, and thus, it was not possible to stratify the patients by using these parameters. In summary, this large population-based study provides evidence that the use of metformin is not associated with a decreased risk of lung cancer in patients with type 2 diabetes. This finding remained consistent after conducting several secondary and sensitivity analyses. Our observations, however, do not detract from the plausibility of the mechanisms of antineoplastic action of biguanides demonstrated by laboratory models ([@B1]--[@B7],[@B25],[@B26]) but suggest that these mechanisms do not operate clinically, at least at the conventional doses used in the treatment of type 2 diabetes. Therefore, further translational research, including careful attention to drug exposure levels in relevant organs, is suggested before launching large-scale randomized controlled trials of metformin for proposed indications in oncology.
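The immortal time bias discussed above can be made concrete with a small simulation. This is purely an illustrative sketch, not part of the study's analysis: the rates, follow-up length, and prescription timing below are invented. A drug with no true effect appears strongly protective under a time-fixed "ever use" classification, because the event-free person-time before the first prescription is misclassified as exposed, while a time-dependent classification recovers the null.

```python
import random

random.seed(1)

RATE = 0.05        # true event rate per person-year -- identical for everyone,
                   # so the true rate ratio is exactly 1 (the drug does nothing)
FOLLOW_UP = 10.0   # years of follow-up (illustrative value)
N = 100_000

tf_py = {True: 0.0, False: 0.0}; tf_ev = {True: 0, False: 0}   # time-fixed
td_py = {True: 0.0, False: 0.0}; td_ev = {True: 0, False: 0}   # time-dependent

for _ in range(N):
    t_event = random.expovariate(RATE)     # event time under the null
    t_rx = random.uniform(0, FOLLOW_UP)    # time of first prescription
    end = min(t_event, FOLLOW_UP)
    had_event = t_event < FOLLOW_UP
    ever = t_rx < end                      # "ever user": prescribed before exit

    # Time-fixed ("ever use"): ALL person-time classified by final status,
    # so the guaranteed event-free time before the first prescription
    # (the immortal time) is wrongly counted as exposed.
    tf_py[ever] += end
    tf_ev[ever] += had_event

    # Time-dependent: person-time before the first prescription stays unexposed.
    if ever:
        td_py[False] += t_rx
        td_py[True] += end - t_rx
        td_ev[True] += had_event           # any event here occurs after t_rx
    else:
        td_py[False] += end
        td_ev[False] += had_event

def rate_ratio(py, ev):
    return (ev[True] / py[True]) / (ev[False] / py[False])

rr_fixed = rate_ratio(tf_py, tf_ev)
rr_td = rate_ratio(td_py, td_ev)
print(f"time-fixed RR:     {rr_fixed:.2f}")   # spuriously protective
print(f"time-dependent RR: {rr_td:.2f}")      # near the true value of 1
```

The size of the spurious protection in this toy setup is exaggerated relative to real cohorts, but the direction of the bias, and the fact that the time-dependent classification removes it, is the point.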
This research was partly funded by an infrastructure grant from the Canadian Institutes of Health Research and the Canadian Foundation for Innovation. S.S. is the recipient of the James McGill Chair, and L.A. is the recipient of a Chercheur-Boursier Award from the Fonds de la recherche en santé du Québec. No potential conflicts of interest relevant to this article were reported. B.B.S. contributed to the study concept and design, to analysis and interpretation of data, to drafting of the manuscript, and to critical revision of the manuscript for important intellectual content. L.A. contributed to the study concept and design, to analysis and interpretation of data, and to critical revision of the manuscript for important intellectual content. H.Y. and M.N.P. contributed to analysis and interpretation of data and to critical revision of the manuscript for important intellectual content. S.S. supervised the study and contributed to the study concept and design, to analysis and interpretation of data, and to critical revision of the manuscript for important intellectual content. S.S. is the guarantor of this work and, as such, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Machine learning, a branch of artificial intelligence, has been put to use in a range of healthcare fields, from identifying promising drugs to recognizing changes in medical images that can catch health issues early. A new project at the College of Public Health aims to apply machine learning techniques to the everyday health of millions of people who have hypertension, a major risk factor for cardiovascular disease (CVD). Gabriel Tajeu, assistant professor of Health Services Administration and Policy, has been awarded a five-year, $748,000 grant from the National Institutes of Health to build a machine learning system to analyze electronic health data, identifying patients who are likely to have persistently uncontrolled blood pressure (BP) so doctors can be more proactive with their interventions. Adults who have controlled BP have a substantially decreased risk of CVD-related mortality compared to those with uncontrolled BP. However, using current guidelines, over 40% of the 100 million U.S. adults with hypertension have uncontrolled BP, making it a significant public health issue and one that is insufficiently kept in check. In treatment, there can be what’s called “clinical inertia,” Tajeu said. “Even if a patient’s blood pressure is uncontrolled, the doctor might take a wait-and-see approach rather than increase the dosage of antihypertensive medications,” he said. “But six months or a year is a long time to be walking around with uncontrolled hypertension.” What Tajeu’s machine-learning algorithm would do is provide the clinician with a prediction, based on a mix of data factors, that a particular patient is more likely to come back to their next appointment still having uncontrolled hypertension. “Hopefully that would give them more of a justification for increasing treatment intensity and lower that clinical inertia,” he said.
Machine learning algorithms have been put to work on electronic health records in applications such as predicting lengths of hospital stays and hospital mortality rates, but analysis of health data to identify trends and make predictions around hypertension is relatively new territory, Tajeu said. A machine-learning algorithm (MLA) needs to be “trained” using a large set of data. In this case it will be anonymized data from the Temple Health System, which contains extensive demographic, clinical, prescribing, and dispensing data. An MLA can recognize patterns in data that may be difficult to detect otherwise and identify variables that are important for prediction of an outcome. Essentially, the system will assess existing data to determine which combinations of factors are most likely to indicate that a patient with uncontrolled BP will later return still having uncontrolled BP. Tajeu will work with graduate students in public health and computer science to build the system. By the end of the research, they hope to have a validated machine-learning algorithm. The next step would be to identify how to best integrate it into clinical management software used in the Temple Health System. That would allow a physician treating a patient to be alerted that a patient is at risk at the point of service, so appropriate measures could be taken.
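To make the idea concrete, here is a minimal sketch of this kind of "train on past visits, score the next visit" pipeline. The article does not specify the algorithm or features Tajeu's team will use, so everything here is hypothetical: the features (last systolic BP, number of drug classes, days since last fill), the synthetic outcome model, and the choice of logistic regression are all stand-ins for illustration.

```python
import math
import random

random.seed(0)

# All feature names and coefficients below are invented for illustration --
# the actual system's variables and model are not described in the article.
def make_patient():
    sbp = random.gauss(145, 15)       # last recorded systolic BP (mmHg)
    n_meds = random.randint(0, 3)     # antihypertensive drug classes prescribed
    gap = random.randint(0, 120)      # days since last prescription fill
    # Synthetic outcome: still uncontrolled at the next visit (simulated risk)
    z = 0.08 * (sbp - 140) - 0.4 * n_meds + 0.01 * gap - 0.5
    y = 1 if random.random() < 1 / (1 + math.exp(-z)) else 0
    # Bias term plus roughly scaled features
    return [1.0, (sbp - 140) / 15, n_meds / 3, gap / 120], y

data = [make_patient() for _ in range(3000)]
train, test = data[:2000], data[2000:]

# Plain full-batch gradient descent for logistic regression (no libraries)
w = [0.0] * 4
for _ in range(300):
    grad = [0.0] * 4
    for x, y in train:
        p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        for j in range(4):
            grad[j] += (p - y) * x[j]
    w = [wi - 2.0 * g / len(train) for wi, g in zip(w, grad)]

def risk(x):
    """Predicted probability that BP is still uncontrolled at the next visit."""
    return 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

# AUC on held-out patients: probability a true case outranks a true non-case
pos = [risk(x) for x, y in test if y == 1]
neg = [risk(x) for x, y in test if y == 0]
auc_val = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))
print(f"held-out AUC: {auc_val:.2f}")
```

In a deployed system, the `risk(x)` score for the patient in front of the clinician is what would surface the point-of-care alert the article describes; validation on held-out patients (as with the AUC here) is what separates a "trained" algorithm from a "validated" one.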
https://cph.temple.edu/about/news-events/news/machine-learning-approach-controlling-blood-pressure
Wave Behavior S8P4a. Identify the characteristics of electromagnetic and mechanical waves. S8P4b. Describe how the behavior of light waves is manipulated causing reflection, refraction, diffraction, and absorption. 2 Reflection Reflection occurs when a wave strikes an object or surface and bounces off. An echo is reflected sound. Sound reflects from all surfaces. You see your face in a mirror or a still pond because of reflection. Light waves produced by a light source such as the Sun or a light bulb bounce off your face, strike the mirror, and reflect back to your eyes. 3 Reflection (continued) When a surface is smooth and even, the reflected image is clear and sharp. When light reflects from an uneven or rough surface, you can’t see a sharp image because the reflected light scatters in many different directions. 4 Refraction Refraction is the bending of a wave as it moves from one medium into another. The speed of the wave can be different in different mediums. For example, light waves travel faster in air than in water. Refraction occurs when the speed of a wave changes as it passes from one substance to another. 5 Refraction (continued) A line that is perpendicular to the water’s surface is called the normal. When a light ray passes from air into water, it slows down and bends toward the normal. 6 Refraction (continued) When the ray passes from water into air, it speeds up and bends away from the normal. The larger the change in speed of the light wave is, the larger the change in direction. 7 Color from Refraction Sunlight contains light of various wavelengths. When sunlight passes through a prism, refraction occurs twice: once when sunlight enters the prism and again when it leaves the prism and returns to the air. Violet light has the shortest wavelength and is bent the most. Red light has the longest wavelength and is bent the least. 8 Color from Refraction (continued) Each color has a different wavelength and is refracted a different amount.
As a result, the colors of sunlight are separated when they emerge from the prism. Rainbows are created when light waves from the Sun pass into and out of raindrops. 9 Color from Refraction (continued) The colors you see in a rainbow are in order of decreasing wavelength: red, orange, yellow, green, blue, indigo, and violet. 10 Diffraction Diffraction is the bending of waves around a barrier. Sound waves diffract more than light waves. You can hear sound around a corner, but you can’t see around a corner. 11 Diffraction and Wavelength Light doesn’t diffract much when passing through an open door because the wavelengths of visible light (400 to 700 billionths of a meter) are much smaller than the width of the door (1 meter). Sound waves that you can hear have wavelengths between a few millimeters and 10 m. A wave is diffracted more when its wavelength is similar in size to the opening. 12 When Waves Meet Waves pass right through each other and continue moving. While two waves overlap, a new wave is formed by adding the two waves together. The ability of two waves to combine and form a new wave when they overlap is called interference. 13 Types of Interference When the crest of one wave overlaps the crest of another wave, it is called constructive interference. The amplitudes of these combining waves add together to make a larger wave while they overlap. Destructive interference occurs when the crest of one wave overlaps the trough of another wave. The amplitudes of the two waves combine to make a wave with a smaller amplitude. 14 Waves and Particles When light waves travel through a small opening, such as a narrow slit, the light spreads out in all directions. If particles were sent through the same slit, they would continue in a straight line. Spreading, or diffraction, is only a property of waves. 15 Waves and Particles (continued) If waves meet, they reinforce or cancel each other, then travel on.
If particles approach each other, they either collide and scatter or miss each other completely. Interference is a property of waves. 16 Summary Reflection: Reflected sound waves can produce echoes. Reflected light rays produce images in a mirror. Refraction: The bending of waves as they pass from one medium to another is refraction. Refraction occurs when the wave’s speed changes. A prism separates sunlight into the colors of the visible spectrum. 17 Summary (continued) Diffraction and Interference: The bending of waves around barriers is diffraction. Interference occurs when waves combine to form a new wave while they overlap. Destructive interference can reduce noise.
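The refraction rule in the slides (bending toward the normal when light slows down, away from the normal when it speeds up) is quantified by Snell's law, n₁ sin θ₁ = n₂ sin θ₂, which the slides describe but do not name. A short worked sketch:

```python
import math

# Indices of refraction: the ratio of light's speed in a vacuum
# to its speed in the medium (approximate textbook values)
N_AIR, N_WATER = 1.00, 1.33

def refract(theta_incident_deg, n1, n2):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2), angles from the normal.

    Returns the refracted angle in degrees, or None when no refracted
    ray exists (total internal reflection).
    """
    s = n1 / n2 * math.sin(math.radians(theta_incident_deg))
    if abs(s) > 1:
        return None
    return math.degrees(math.asin(s))

# Air -> water: the ray slows down and bends TOWARD the normal
print(refract(45, N_AIR, N_WATER))    # about 32.1 degrees (less than 45)

# Water -> air: the ray speeds up and bends AWAY from the normal,
# retracing the path back to about 45 degrees
print(refract(32.12, N_WATER, N_AIR))
```

The same formula also shows why the prism separates colors: each wavelength has a slightly different index of refraction in glass, so each color exits at a slightly different angle.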
http://slideplayer.com/slide/3866714/
By: Craig Garrett BSc MIED IEng How has BIM influenced building and façade design? BIM is a massively disruptive force that is transforming every aspect of the construction industry. But it is not Building Information Modelling (BIM) per se, but rather digital engineering and delivery and the way we approach efficient information management on a project that is changing. This current revolution is long overdue as many traditional design and construction approaches have not changed in hundreds of years. What has changed is the adoption, workflows and processes created to fully utilise 3D technologies, which have allowed the design team, and especially architects, to imagine and create more complex forms than ever before. The façade as the outer face or skin of any building is what is most visible to the outside world, and often a building is better known for its striking façade more than anything else. In addition, the look, feel and geometry of the façade often then drives the entire concept for the building’s interior spaces. Very fortunately, in my 35-year career, of which the last 10-plus years have been in the Middle East, I have had the opportunity to work on many projects with exceptionally complex and iconic façades that have challenged every aspect of digital engineering design and construction. How are Building Information Modelling (BIM) and parametric procedures implemented in the design process? It is important that architects and the design team generally can convey their vision or design intent to all necessary project stakeholders. Traditionally, this would have been done using presentations or sketches or references to real-world items or natural elements. But in this new digital world, this is best achieved using a variety of geometric and parametric modeling tools which allow the creation of a digital representation of the physical and functional building shape, form or surface that will define the façade.
This streamlines the process of design and communicates complicated information accurately to everyone involved, allowing the joint work of architects, clients, builders, engineers, etc. to occur within a single intelligent and shared process, making it a critical workflow to increase efficiency and cross-discipline integration. While inherently parametric, the common approach of utilising BIM is mainly through labor-intensive modeling. The usability of BIM throughout the design process is strengthened by improving the integration of parametric-driven parameters, which automate the modelling workflow, reducing time and therefore cost. Efficient reuse of data or information in various forms across all disciplines is at the heart of what digital engineering is all about, not so much “Building Information Modelling” but rather “Better Information Management”, ensuring efficient reuse of project data. What are the requirements and differences in BIM and parametric procedures? Here it is best to consider that we have two separate approaches or two differing communities. The difference between these approaches is emphasised when focusing on the semantic meaning of created objects. Parametric design is almost exclusively oriented towards creating geometry. While this geometry is more and more tied to external analysis tools, to evaluate different performance criteria such as daylight availability or energy consumption, the process is inherently focused on modeling. As such, the resulting geometry contains little information and is not so different from geometry created in traditional CAD systems: points, curves and surfaces placed on layers. Fueled by the requirement for iterations, optioneering and prototyping, parametric design has allowed engineers and designers to rethink their approach in numerous ways and introduce new and daring approaches which might otherwise have been dismissed. At the other end of the spectrum, BIM relies on a pre-defined semantic structure.
All entities have a clear meaning and function. The culmination of the BIM structure is the Industry Foundation Classes (IFC), which describe all possible and foreseeable building elements in over 800 entities, 350 property sets and over 100 data types. Whether the structural steel members of a façade or curved roof, or the setting out of a ceiling with lighting fixtures or fire detection items, each relies on the rules and definitions of the products being used and the structured profile that they follow. What are the latest advancements and trends? Personally, I find it almost impossible to keep up with technological change in current times; what I will say is that developments that allow complex calculations or data manipulation to be achieved through a more user-friendly interface are number one in my book. I am of course referring to the visual programming environments that have taken over many of our design processes in recent years. These are easy to learn as they provide many built-in objects that are organised in a flowchart-like manner. Basically, a typical text-based programming language means the programmer must think like a computer, while a visual programming language lets the programmer describe the process in terms that make sense to humans. This form of coding has opened a world of possibilities, not just in terms of simplicity of use but also in terms of building complex operations by combining pre-structured modules, many of which are now freely shared online. How do we bridge the gaps between parametric design and BIM? “Parametricism” is a term being used for a new global style of design that covers all industries and disciplines and implies that all elements are parametrically malleable with the right computational design tools and manufacturing methods. Basically, any item can be transformed from its initial state, through a computational process bound by constraints, into a future state.
These skills however do not come naturally, and therefore traditional roles and responsibilities are changing. Whether through education, upskilling or development, new roles are now emerging for computational experts, parametric modelers, data analysts, visual programmers, data miners and the like. However, don’t for a minute think this wave is affecting the whole industry. In my experience many organisations, especially in the Middle East, are still struggling to come to terms with the basic concept of 3D BIM, never mind digital or computational engineering. So, what you are left with is a huge gap, growing by the day, between the international tier 1 organisations who are actively adopting parametric thinking and pushing the industry capability forward and those smaller tier 2 or 3 organisations further down the food chain who are trying desperately to continue for as long as possible in 2D CAD, just like they have always done. While parametric design can process thousands of iterations in minutes, removing tedious drudge work to find the optimum solution, it is still a human decision that ultimately will make the final choice. What are the limitations of BIM/parametric design? Arguably the only limitation is your imagination. However, there is a very steep learning curve involved; like any other skill, parametric design requires hundreds of hours of practice, patience, and perseverance. While the traditional style of designing usually focuses on the 2D plan first, parametric design focuses on extensive data collection and validation and the setting up of relationships between parameters. Plus, all this needs to be set up at the beginning of the project. Additionally, since this method of design heavily relies on the data we provide during the design phase, extensive research and data collection is usually done first. Anything and everything, from site location to available materials, will contribute key data to the process.
Unfortunately, parametric design is not always the one to steal the show. It requires a large investment of time at the start – where various relationships and algorithms are defined. This does not sit well with some organisations with limited resources, on tight budgets or schedules. If the end product is a production-ready model, then the idea is feasible, but for small companies, where the end product is only a 2D drawing and a visualisation of the finished product, the entire scheme might not work out to be cost-effective. In this case the traditional design might still be the best choice. What are the benefits with respect to implementing BIM and parametric processes for façades? The advantage of any computational process is of course the ease with which you can accommodate change. Once the initial formula or program is configured, usually with a set of baseline settings, then any further change is just a simple adjustment to the inputs and a re-run. Façades are often simple geometric shapes or patterns repeated to appear complex. Therefore, a parametric process using a geometric pattern can be generated with ease. Combine this with the fact that most parametric design programming tools accommodate seamless integration with other traditional software, such as environmental analysis, building physics, structural simulation and clash testing, and this solves the need for multiple copies of the data on separate platforms, as all the data is linked and interoperable. The final step is that with every adjustment you make to your parametric design, the 3D model automatically creates the optimised solution and then updates the extracted drawings, BOQ, and schedules. Now you really have a solution that has replaced CAD with CODE. Having worked with and for several consultants and façade manufacturers, I have experienced these processes firsthand.
This has allowed me an insight into the complexities of façade systems and their installation, both traditional and bespoke. However, it is wise to remember that the world of design inside a computer, where your 3D model can be measured to the millimeter, is far removed from a construction site where a build-up in structural steel fabrication tolerances can make a vast difference to the actual final position of the façade panels. Why does BIM appear to be fundamental in current architectural design? How can architects use BIM to streamline complex façade designs? The use of BIM or parametric design is normally associated with complex shapes and curved geometry. But parametric definitions can be used for material and manufacturing optimisation or simply to produce better and faster impressions or iterations when designing traditional rectangular glazing solutions. Parametric design for façades is the application of computational strategies to the design process. While designers traditionally rely on experience and intuition to solve complex design problems, computational design aims to enhance that process by encoding design decisions using computer power and language. The result is a graphic representation of the steps required to achieve the end design. An external architectural façade is one of the most expressive, inspiring, and complex aspects of building design. It is the outermost skin with a multitude of roles, from visual character to weatherproof barrier. Today, the façade, interior skin or cladding can be as complex as the building itself and balances the aesthetics, visual character, structural stability, solar gain, daylight, visibility, thermal comfort, and zoning together in one component. How has BIM changed façade modeling? Any project with a BIM requirement means that design reviews focusing on fully coordinated 3D models now feature the façade like any other discipline. The total façade envelope can be around two meters in section.
This is from the outer skin, through weatherproof paneling and insulation, to primary and secondary structural steel, and then to the interior mounting and final finish of the internal skin. This extensive zone in many cases also needs to accommodate MEP services, therefore creating a complex space that needs to be modelled, coordinated, and understood fully. Add to this mix a façade that may be curving in multiple geometric planes, and you can see that 3D modelling is not just a good idea, it is absolutely essential. I have seen façades represented on a drawing as a single line, because the external shape and form was considered all that was important. But once you fully understand the complex mounting and adjustment that must be anchored to the primary steel, you realise that “the devil is in the detail”, as they say. As an industry we now refer to the “Level of Detail (LOD)” of our 3D models. But in my opinion the façade system has always been a difficult one to define. While I have explained why we should model a façade to understand its complexity, the reality of a fully modelled solution with every fixing, adjustment detail and seal would of course create a model that most normal users could not even open. Here a commonsense approach to what is modelled and what is not must be adopted. How can 3D parametric modelling and automation help meet customer demands? “The customer is always right”, or so the saying goes. Often the vision for the constructed asset originates from the client in some form or another; the challenge then for the architect is to realise that vision with a solution that meets the aesthetic, creative and functional demands, while allowing the engineers to then create a design that can be manufactured and constructed.
The parametric process then allows for accelerated crunching of design ideas and efficient options to find the optimum solution, freeing additional time to be spent on the artistic touch provided by human expert input. Fast and efficient optimisation is a real game changer, whether reducing structural steel sizes, cutting onsite fabrication time, standardising façade panel types and dimensions, or automating space planning. Each utilises a lot of data but is capable of substantial material and time savings. Being able to then visualise these savings in a plain-language format is critical. Several applications are now available to allow a client or customer to manipulate data in such a way that they can themselves “play around with” the controlling parameters to adjust the solution as they see fit. This real-time interactive ability takes customer configuration to the next level. How can technology be used to reduce costs, reduce defects, and improve designs? Becoming efficient at any process, by adopting whatever means possible, means that you can effectively complete that process in less time. Saving time in real terms means saving money, and that is the main driver behind almost everything. The technology itself, whether software or otherwise, is merely an enabler; it simply replaces the use of a paper and pencil. Let’s remember, we humans can do whatever a computer does; it is just that computers can usually do it faster and with fewer errors. The takeaway being, we save time. With tried and tested effective processes and workflows in place, deploying technology to do the work is usually a success. The question of “improving design” in my opinion comes from what we have learned from our past experiences. The construction industry generally is not great at lessons learned or sharing valuable experience for the benefit of everyone.
Being better at managing information intelligently provides the tools to allow us to make better-informed decisions, and that knowledge then leads to better outcomes. Some parametric processes can now even learn from previous iterations, understanding what is good and bad, which then influences the future solutions created. In final summary, smart solutions developed through the use of smart technology, in the hands of suitably smart people, are a force to be reckoned with and will shape the design and construction industry going forward.
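The "adjust the inputs and re-run" workflow described above can be sketched in a few lines. This is a toy illustration only: no specific BIM or visual-programming tool is implied, and the façade dimensions, panel module, and sun-driven shading rule are all invented for the example. The point is that one rule regenerates the entire panel schedule whenever any input changes.

```python
import math

# Hypothetical inputs -- in practice these would come from the BIM model
FACADE_W, FACADE_H = 24.0, 12.0   # façade width and height, metres
PANEL = 1.5                        # square panel module, metres
SUN_POINT = (24.0, 12.0)           # assumed point of peak solar exposure

def panel_grid(width, height, module, attractor):
    """Return one record per panel; the glazed opening shrinks near the attractor."""
    cols, rows = int(width // module), int(height // module)
    max_d = math.hypot(width, height)
    panels = []
    for i in range(cols):
        for j in range(rows):
            cx, cy = (i + 0.5) * module, (j + 0.5) * module   # panel centre
            d = math.hypot(cx - attractor[0], cy - attractor[1])
            # Parametric rule: more shading (smaller opening) near the sun point
            opening = 0.2 + 0.7 * (d / max_d)
            panels.append({"col": i, "row": j, "opening_ratio": round(opening, 3)})
    return panels

panels = panel_grid(FACADE_W, FACADE_H, PANEL, SUN_POINT)
print(len(panels))                  # 16 x 8 = 128 panels
# Changing one input regenerates the whole schedule -- the "adjust and re-run" loop
wider = panel_grid(30.0, FACADE_H, PANEL, SUN_POINT)
print(len(wider))                   # 20 x 8 = 160 panels
```

In a real pipeline the per-panel records would feed the drawings, BOQ and fabrication schedules directly, which is exactly the downstream automation the interview describes.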
https://wfmmedia.com/bim-improves-the-integration-of-parametric-driven-parameters-to-automate-the-modelling-workflow/
By Isharpal Singh, M.S. The Next Gen approach in Immunotherapy The era of modern antibody therapy began with the Nobel Prize-winning report on the basic structure of the immunoglobulin molecule given by Rodney Porter. His findings revealed two identical antigen-binding fragments (Fab) that blocked antigen precipitation by the parent antibody and a third easily crystallizable fragment (Fc) which was found to be inactive. From a historian’s point of view, it is remarkable that almost simultaneously with these first insights into the antibody’s structure, the idea of constructing a bispecific antibody was born.2 Of all causes of death, cancer is perhaps the most devastating, as it currently represents approximately one of every four deaths in the USA. Moreover, it boasts an economic impact overshadowing that of any other disease worldwide.3 Available treatment regimens such as surgical resection, chemotherapy, and radiation therapy have made very little progress in terms of improving prognosis for advanced-stage cancers over the past 40 years.1 Moreover, this therapeutic shortcoming is further compounded by the fact that these conventional approaches tend to have non-specific effects, affecting both normal and cancerous tissues. This has led to an apparent need for safer and more effective approaches to treating cancer. The Bi-specific Explosion Adoptive immunotherapy based upon antigen-specific T cells involving bispecific antibodies has emerged as a rapidly growing area in the field of cancer research, as it attempts to use a patient’s immune system in the treatment of malignancy. It involves harnessing the cytotoxic ability of T cell therapy through the use of bispecific antibodies designed to engage and activate endogenous polyclonal T cell populations via the CD3 complex, directed towards a tumor antigen.
While the field has seen some significant improvement in a subset of cancers, there is still a large unmet need, which necessitates the search for new therapeutic paradigms. Recently, however, a subclass of bispecific single-chain antibodies, the bispecific T cell engager (BiTE), has emerged as a superior system to other formats of bispecific antibodies, as these are relatively stable and easy to produce in mammalian cells, are extremely potent at low doses, and are able to mediate cytotoxicity in the absence of supplemental lymphocyte stimulation. This gives it an upper hand over its predecessors in achieving efficacy in animal models and early clinical trials. How does BiTE work? The bispecific T cell engager (BiTE) displays a very specific design within the group of bispecific antibodies. It is composed of the two binding domains (variable heavy- and light-chain domains) of two different human IgG antibodies flexibly linked by a short non-immunogenic peptide.5 Their effector-binding arms bind specifically to the epsilon subunit of CD3, while their opposing target-binding arms can be directed against any number of epitopes, such as those differentially expressed on the surface of a tumor cell. This provides targeted action with fewer side effects and better efficacy. It’s like the drug is saying to the cancer cell, “we know who you are and we’ve got your address.” Two examples based on this system which are currently in clinical stages are Blinatumomab (completed Phase 2 trials), a murine anti-human CD3 × anti-human CD19, and MT110 (completed Phase 1 trials), an anti-human EpCAM × anti-human CD3 TaFv.
The former targets pan B-cell antigen, CD19 while the latter targets EpCAM on a wide array of solid tumors in patients with lung, gastric, and colorectal cancers.4 Market Scenario Many companies like Amgen, Novartis, Juno Therapeutics, Kite Pharma and others are working on this technology to develop promising immunotherapies for specific indications as it is likely to be the fastest and safest route to approval. There is no doubt that all these companies have promising candidates with very large potential markets, and these candidates will soon be tested in clinical trials against several malignancies. With these developments, it might well be that we are currently experiencing a turning point in the field of bispecific antibody and more generally of cancer immunotherapy. The Future Overall, the BiTE technology confers some distinct advantages in addressing expressed tumor cell surface targets as described here. But till now this system has been used for the “easiest” indications and the issues of stability and cost-effective production persist as obstacles. The conclusions for the design of future bispecific strategies may be drawn accordingly. However, it is worth emphasizing the highly effective cytotoxic activity of T cells that drive their effectiveness. The balance of enhanced efficiency with toxicity is currently being evaluated in the ongoing clinical studies. These ongoing and future studies will expand our understanding of therapeutic frame using this approach by allowing for single agent and combination use with signaling inhibitors and other immunotherapeutics which lack durable responses. It also provides an opportunity to combat the heterogeneous nature of cancer in order to overcome relapse and resistance which is in itself a very exciting prospect. References: - Bailar JC III, Gornik HL. Cancer undefeated. N Engl J Med 1997; 336:1569-74. - G. Riethmüller (2012). Symmetry breaking: bispecific antibodies, the beginnings, and 50 years on. 
Cancer Immunity, vol. 12, p. 12. - Jemal A, Siegel R, Xu J, Ward E. Cancer statistics, 2010. CA Cancer J Clin 2010; 60:277-300. - P. Chames, D. Baty (2009). Bispecific antibodies for cancer therapy. MAbs, vol. 1(6), pp. 539-547. - Rowland-Jones, R. (2009). Aspects of the mode of action of bispecific T cell engager (BiTE) antibodies. Ph.D. thesis, Bayerische Julius-Maximilians-Universität Würzburg.
https://thescrutinizer.org/taking-a-bite-out-of-cancer/
INET Oxford conducts its work in close collaboration with its partner institutions: Institute for New Economic Thinking - INET Oxford's core support comes from the Institute for New Economic Thinking (INET). The Institute is a New York City-based economic research and education foundation designed to broaden and accelerate the development of new economic ideas that will lead to real-world solutions to the great economic and social challenges of the 21st century. Created in response to the global financial crisis of 2008, the Institute is supporting a fundamental shift in economic thinking by funding academic research, building communities of new economic thinkers, and spreading the word about the need for change. INET Oxford is a hub in this global research network. The founding sponsors of INET are George Soros, Jim Balsillie, and William Janeway. Oxford Martin School - INET Oxford is a research institute within the University's Oxford Martin School. The School supports over 300 researchers, drawn from across Oxford and beyond, working to address the most pressing global challenges and opportunities of the 21st century. They are grouped into interdisciplinary programmes, studying subjects ranging from the future of the global food system to rethinking economics, and from the human rights of future generations to innovation in healthcare. The Oxford Martin School was founded in 2005. It was made possible through the vision and generosity of Dr James Martin (1933-2013), who established the school with the largest benefaction made to Oxford in its history. Resolution Foundation - INET Oxford's Employment, Equity and Growth Programme (EEG) on economic inequality was created in partnership with The Resolution Foundation, an independent research and policy organisation, founded by Clive Cowdery in 2005. The goal of the Resolution Foundation is to improve living standards for the 15 million people in Britain on low and middle incomes. 
To achieve this the Foundation conducts rigorous research, analysis and policy development to inform public debates and influence key decision makers in government, the private sector, and civil society. Central European University - INET Oxford has both research and teaching links with Central European University (CEU) in Budapest, Hungary. Founded in 1991 at a time when revolutionary changes were throwing off the rigid orthodoxies imposed on Central and Eastern Europe, CEU is a graduate-level “crossroads” university where faculty and students from more than 100 countries come to engage in interdisciplinary education, pursue advanced scholarship, and address some of society’s most vexing problems. CEU is accredited in both the United States and Hungary, and offers English-language Master's and doctoral programs in the social sciences, the humanities, law, management and public policy.
https://www.inet.ox.ac.uk/partners
Empire of Liberty The Empire of Liberty is a theme developed first by Thomas Jefferson to identify the responsibility of the United States to spread freedom across the world. Jefferson saw the mission of the U.S. in terms of setting an example, expanding into western North America, and intervening abroad. Major exponents of the theme have been James Monroe (Monroe Doctrine), Andrew Jackson and James K. Polk (manifest destiny), Abraham Lincoln (Gettysburg Address), Theodore Roosevelt (Roosevelt Corollary), Woodrow Wilson (Wilsonianism), Franklin D. Roosevelt, Harry Truman (Truman Doctrine), Ronald Reagan (Reagan Doctrine), Bill Clinton, and George W. Bush. In the history of U.S. foreign policy, the Empire of Liberty has provided motivation to fight the Spanish–American War (1898), World War I (1917-18), the later part of World War II (1941–1945), the Cold War (1947–1991), and the War on Terror (2001–present). Thomas Jefferson Jefferson used the phrase "Empire of Liberty" in 1780, while the American Revolution was still being fought. His goal was the creation of an independent American state that would be proactive in its foreign policy while ensuring that American interventionism and expansionism would always be of a benevolent nature: We shall divert through our own Country a branch of commerce which the European States have thought worthy of the most important struggles and sacrifices, and in the event of peace [ending the American Revolution]...we shall form to the American union a barrier against the dangerous extension of the British Province of Canada and add to the Empire of Liberty an extensive and fertile Country thereby converting dangerous Enemies into valuable friends.— Jefferson to George Rogers Clark, 25 December 1780 Jefferson envisaged this "Empire" extending westwards over the American continent, expansion into which he saw as crucial to the American future. 
During his presidency, this was in part achieved by his 1803 purchase of the Louisiana Territory from the French, almost doubling the area of the Republic and removing the main barrier to westward expansion; he stated that "I confess I look to this duplication of area for the extending of a government so free and economical as ours, as a great achievement to the mass of happiness which is to ensue". However, this was not necessarily a politically unified Empire. "Whether we remain in one confederacy, or form Atlantic and Mississippi confederacies, I believe not very important to the happiness of either part." Despite this, Jefferson on other occasions seemed to stress the territorial inviolability of the Union. In 1809 Jefferson wrote to his successor James Madison: we should then have only to include the North [Canada] in our confederacy...and we should have such an empire for liberty as she has never surveyed since the creation: & I am persuaded no constitution was ever before so well calculated as ours for extensive empire & self government.— Jefferson to James Madison, 27 April 1809 Even in his later years, Jefferson saw no limit to the expansion of this Empire, writing "where this progress will stop no-one can say. Barbarism has, in the meantime, been receding before the steady step of amelioration; and will in time, I trust, disappear from the earth". While Jefferson spoke loftily and idealistically about an Empire of Liberty abroad, he also envisioned creating a new form of American imperialism closer to home. The scholar Richard Drinnon observed that Jefferson spoke of establishing more amicable relations with Native Americans on America's western frontier in his second inaugural address, in which, Drinnon notes, Jefferson stated that "humanity enjoins us to teach them (the Native Americans) agriculture and the domestic arts". 
In practice, however, Jefferson's imperial policy and implementation of the ideal of an Empire of Liberty for North America's Native American population was radically different. In Drinnon's view, there was a vast disparity between Jefferson's ideas and his actual actions. According to Drinnon, "Jefferson had initiated the Indian removal policy through his energetic efforts to "obtain from the native proprietors the whole left bank of the Mississippi." One major reason the lands of the aboriginal inhabitants had been so drastically reduced was Jefferson's acquisition of a hundred million acres in treaties shot through with fraud, bribery, and intimidation. And when Indians interfered with white definitions of the national interest, as did the "backward" tribes of the Northwest in 1812, Jefferson's humanitarianism hardened: "These will relapse into barbarism and misery, lose numbers by war and want," he grimly predicted to John Adams, "and we shall be obliged to drive them, with the beasts of the forest into the Stony mountains."" Monroe Doctrine The Monroe Doctrine, a U.S. foreign policy initiative introduced in 1823, stated that efforts by European countries to colonise or interfere with states in the Americas would be viewed as acts of aggression requiring U.S. intervention, while the U.S. promised to refrain from interfering in the affairs of established European colonies and to respect the control of the European nations over their Caribbean colonies. Its justification was to make the "New World" safe for liberty and American-style republicanism, although many Latin Americans viewed the doctrine as simply justification for the United States to establish imperialistic relations with Latin America without having to worry about European interference. The Monroe Doctrine was invoked during the Second French intervention in Mexico and with the German Empire during the Zimmermann Telegram affair in 1917. 
After 1960 the Monroe Doctrine was invoked to roll back Communism from its new base in Castro's Cuba. Ronald Reagan emphasized the need to roll back Communism in Nicaragua and Grenada. Reforming the world American Protestant and Catholic religious activists began missionary work in "pagan" areas from the 1820s, and expanded operations worldwide in the late 19th century. European nations (especially Britain, France and Germany) also had missionary programs, with these focused mostly on subjects within their own empires. Americans went anywhere it was possible, and the Young Men's Christian Association (YMCA) and Young Women's Christian Association (YWCA) were among the many groups involved in missionary work. Others included the Student Volunteer Movement and the King's Daughters. Among Catholics, the three Maryknoll organizations were especially active in China, Africa, and Latin America. Religious reform organizations joined in attempts to spread modernity and worked to fight the corrupting effects of ignorance, disease, drugs and alcohol. For example, the World's Woman's Christian Temperance Union (WWCTU), a spinoff of the WCTU, had both strong religious convictions and a commitment to international efforts to shut down the liquor trade. By the 1930s the more evangelical Protestant groups redoubled their efforts, but the more liberal Protestants had second thoughts about their advocacy, especially after the failure of prohibition at home cast doubt on how easy it might be to reform the world. Other dimensions Economic dimensions of the Empire of Liberty involved dissemination of American management methods (such as Taylorization, Fordism, and the assembly line), technology, and popular culture such as film. In the 1930s, the Congress passed the Neutrality Acts, which attempted to avoid entering into conflicts with other nations. The United States became involved in World War II more than two years after it began. 
Writers on the Left often capitalized on anti-imperialistic ideals by using the label American Empire as a criticism of United States foreign policy as imperialistic. Noam Chomsky and Chalmers Johnson are prominent spokesmen for this position, having long been critical of American imperialism. Their argument is that an imperialistic America represents an evil, and indeed the very thing that the "Empire of Liberty" was conceived to counter: imperialism. They recommend an alternate course of "dismantling the empire", moving United States foreign policy in a different direction. Puerto Rican poet and novelist Giannina Braschi proclaims the collapse of the World Trade Center as the end of the American Empire and its "colonial" hold on Puerto Rico in her post-9/11 work "United States of Banana" (2011). See also - American exceptionalism - American imperialism - Christian mission - Civilizing mission - Liberal internationalism - White man's burden References - ^ Hyland says, "Jefferson's concept of an empire of liberty found an echo in Clinton's enlargement of democracies." William Hyland, Clinton's World: Remaking American Foreign Policy (1999) p. 201 - ^ Dominic Tierney, How We Fight: Crusades, Quagmires, and the American Way of War (2010) p. 91 - ^ Richard H. Immerman, Empire for Liberty (2010) p. 158 - ^ David Reynolds, America, Empire of Liberty (2009) pp. xvii, 304, 458 - ^ See online source Archived 2009-12-07 at the Wayback Machine - ^ Jefferson to Dr. Joseph Priestley, 29 January 1804 - ^ Thomas Jefferson to Dr. Joseph Priestley, 29 January 1804 - ^ Jefferson to William Ludlow, 6 September 1824 - ^ Drinnon, Richard (1975). "The Metaphysics of Empire-Building: American Imperialism in the Age of Jefferson and Monroe". The Massachusetts Review. 16 (4): 666–688. ISSN 0025-4878. - ^ Drinnon, Richard (1975). "The Metaphysics of Empire-Building: American Imperialism in the Age of Jefferson and Monroe". The Massachusetts Review. 16 (4): 666–688. 
ISSN 0025-4878. - ^ Drinnon, Richard (1975). "The Metaphysics of Empire-Building: American Imperialism in the Age of Jefferson and Monroe". The Massachusetts Review. 16 (4): 666–688. ISSN 0025-4878. - ^ Barbara Reeves-Ellington, Kathryn Kish Sklar and Connie A. Shemo, Competing Kingdoms: Women, Mission, Nation, and the American Protestant Empire, 1812–1960 (2010) - ^ Andrew Porter, The Imperial Horizons of British Protestant Missions, 1880–1914 (2003) - ^ Jean-Paul Wiest, Maryknoll in China: A History, 1918–1955 (1997) ISBN 0-87332-418-8 - ^ Ian Tyrrell, Woman's World/Woman's Empire (University of North Carolina Press, 1999 ISBN 0-8078-1950-6) - ^ Ian Tyrrell, Reforming the World: The Creation of America's Moral Empire (Princeton University Press, 2010 ISBN 978-0-691-14521-1) - ^ Richard Pells, From Modernism to the Movies: The Globalization of American Culture in the Twentieth Century (2006) - ^ Another Leftist, Arno J. Mayer, once described the Roman Empire as a "tea party" in comparison to its American counterpart. Gabriele Zamparini; Lorenzo Meccoli (2003). "XXI CENTURY, Part 1: The Dawn". The Cat's Dream. 47:04. It [the American Empire] is an informal empire of the sort that, it seems to me, does not really have a precedent in history. I'm inclined to say that compared to the American Empire, even the Roman Empire may be said to have been something in the nature of a tea party. - ^ Chalmers A. Johnson, Dismantling the Empire: America's Last Best Hope (American Empire Project) (2010) - ^ Madelena Gonzales and Helene Laplace-Claverie, editors, Minority Theatre on the Global Stage: Challenging Paradigms from the Margins, 2012. Further reading - Bacevich, Andrew J. American Empire: The Realities and Consequences of U.S. Diplomacy (2004) by a political scientist - Cogliano, Francis D. Emperor of Liberty: Thomas Jefferson's Foreign Policy (2014) - Ferguson, Niall. 
Colossus: The Rise and Fall of the American Empire (2005), by a conservative historian - Gordon, John Steele. Empire of Wealth: The Epic History of American Economic Power (2005) by a conservative popular historian - Hampf, M. Michaela. Empire of Liberty: Die Vereinigten Staaten von der Reconstruction zum Spanisch-Amerikanischen Krieg, De Gruyter Oldenbourg, 2020 ISBN 978-3-11-065364-9. - Kagan, Robert. Dangerous Nation: America's Place in the World from Its Earliest Days to the Dawn of the Twentieth Century (2006), by a conservative - Nau, Henry R. "Conservative Internationalism," Policy Review #150. 2008. pp. 3+. by a conservative - Reynolds, David. America, Empire of Liberty: A New History (2009); also BBC Radio 4 series - Tucker, Robert W., and David C. Hendrickson. Empire of Liberty: The Statecraft of Thomas Jefferson (1990).
https://wiki.alquds.edu/?query=Empire_of_Liberty
Responding to the reports that 24-year-old Mohamud Mohammed Hassan died shortly after being released from police custody at the weekend, Cardiff Liberal Democrats have joined calls for the Independent Office for Police Conduct (IOPC) to intervene. Welsh Liberal Democrat Senedd candidate for Cardiff Central Rodney Berman said: "This is an incredibly unsettling and troubling incident and our thoughts are with the friends, family, and communities who are hurting and are angry at this time. "We are aware that investigations are ongoing, however we would impress on the Independent Office for Police Conduct the need for an urgent, independent investigation to understand what happened to Mohamud whilst in custody and following his release. "The widespread media and social media response to Mohamud's death calls for nothing less than a rapid, thorough investigation that treats Mohamud, his friends and family with the dignity they deserve. "Cardiff Liberal Democrats stand ready to offer any support to the community while investigations are ongoing."
https://www.cardiffld.org.uk/ipoc_hassan
What is sampling? In survey research, sampling is the process of using a subset of a population to represent the whole population. Let’s say you wanted to do some research on everyone in North America. To ask every person would be almost impossible. Even if everyone said “yes”, carrying out a survey across different states, in different languages and timezones, and then collecting and processing all the results, would take a long time and be very costly. Sampling allows large-scale research to be carried out with a more realistic cost and time-frame because it uses a smaller number of individuals in the population to stand in for the whole. However, when you decide to sample, you take on a new task. You have to decide who is part of your sample and how to choose the people who will best represent the whole population. How you go about that is what the practice of sampling is all about. Sampling definitions: - Population The total number of people or things you are interested in - Sample A smaller number within your population that will represent the whole - Sampling The process and method of selecting your sample Why is sampling important? Although the idea of sampling is easiest to understand when you think about a very large population, it makes sense to use sampling methods in studies of all types and sizes. After all, if you can reduce the effort and cost of doing a study, why wouldn’t you? And because sampling allows you to research larger target populations using the same resources as you would smaller ones, it dramatically opens up the possibilities for research. Sampling is a little like having gears on a car or bicycle. Instead of always turning a set of wheels of a specific size and being constrained by their physical properties, it allows you to translate your effort to the wheels via the different gears, so you’re effectively choosing bigger or smaller wheels depending on the terrain you’re on and how much work you’re able to do. 
Sampling allows you to "gear" your research so you're less limited by the constraints of cost, time, and complexity that come with different population sizes. It allows us to do things like carry out exit polls during elections, map the spread and effects of epidemics across geographical areas, and carry out nationwide census research that provides a snapshot of society and culture. Probability and non-probability sampling Sampling strategies vary widely across different disciplines and research areas, and from study to study. There are two major types of sampling – probability and non-probability sampling. - Probability sampling, also known as random sampling, is a kind of sample selection where randomisation is used instead of deliberate choice. - Non-probability sampling techniques are where the researcher deliberately picks items or individuals for the sample based on their research goals or knowledge. Probability sampling methods There's a wide range of probability sampling methods to explore and consider. Here are some of the best-known options. 1. Simple random sampling With simple random sampling, every element in the population has an equal chance of being selected as part of the sample. It's something like picking a name out of a hat. Simple random sampling can be done by anonymising the population – e.g. by assigning each item or person in the population a number and then picking numbers at random. Simple random sampling is easy to do and cheap, and it removes the risk of selection bias from the sampling process. However, it also offers no control for the researcher and may lead to unrepresentative groupings being picked by chance. 2. Systematic sampling With systematic sampling, the random selection applies only to the first item chosen. A rule then applies so that every nth item or person after that is picked. 
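As a rough sketch (not part of the original article, and not tied to any particular survey platform), the two selection rules described above can be expressed in a few lines of Python; the population here is made up purely for illustration:

```python
import random

def simple_random_sample(population, n, seed=None):
    """Simple random sampling: every member has an equal chance
    of selection, like picking names out of a hat."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def systematic_sample(population, n, seed=None):
    """Systematic sampling: randomness applies only to the first
    pick; every k-th member after that is taken."""
    k = len(population) // n      # the sampling interval
    rng = random.Random(seed)
    start = rng.randrange(k)      # random first item
    return [population[start + i * k] for i in range(n)]

people = [f"person_{i}" for i in range(1000)]
print(simple_random_sample(people, 5, seed=1))
print(systematic_sample(people, 5, seed=1))
```

Note that in the systematic version the researcher controls the interval k, which is what keeps the picks evenly spread rather than accidentally clustered.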
Although there’s randomness involved, the researcher can choose the interval at which items are picked, which allows them to make sure the selections won’t be accidentally clustered together. 3. Stratified sampling Stratified sampling involves random selection within predefined groups. It’s useful when researchers know something about the target population and can decide how to subdivide it (stratify it) in a way that makes sense for the research. For example, if you were researching travel behaviours in a group of people, it might be helpful to separate those who own or have use of a car from those who are dependent on public transport. Stratified sampling has benefits, but it also introduces the question of how to stratify a population, which adds more risk of bias. 4. Cluster sampling With cluster sampling, groups rather than individual units of the target population are selected at random. These might be pre-existing groups, such as people in certain zip codes or students belonging to an academic year. Cluster sampling can be done by selecting the entire cluster, or in the case of two-stage cluster sampling, by randomly selecting the cluster itself, then selecting at random again within the cluster. Non-probability sampling methods Non-probability sampling methods don’t offer the same bias-removal benefits as probability sampling, but there are times when these types of sampling are chosen for expediency or simplicity. Here are some forms of non-probability sampling and how they work. 1. Convenience sampling People or elements in a sample are selected on the basis of their availability. If you are doing a research survey and you work at a university, for example, a convenience sample might consist of students or co-workers who happen to be on campus with free time and are willing to take your questionnaire. This kind of sample can have value, especially if it’s done as an early or preliminary step, but significant bias will be introduced. 2. 
Quota sampling Like the probability-based stratified sampling method, this approach aims to achieve a spread across the target population by specifying who should be recruited for a survey according to certain groups or criteria. For example, your quota might include a certain number of males and a certain number of females, or people in certain age brackets or ethnic groups. Bias may be introduced during the selection itself – for example, volunteer bias might skew the sample towards people with free time who are interested in taking part. Or bias may be part and parcel of the way categories for the quotas are selected by researchers. 3. Purposive sampling Participants for the sample are chosen consciously by researchers based on their knowledge and understanding of the research question at hand or their goals. Also known as judgment sampling, this technique is unlikely to result in a representative sample, but it is a quick and fairly easy way to get a range of results or responses. 4. Snowball or referral sampling With this approach, people recruited to be part of a sample are asked to invite those they know to take part, who are then asked to invite their friends and family and so on. The participation radiates through a community of connected individuals like a snowball rolling downhill. This method can be helpful when the researcher doesn’t know very much about the target population and has no easy way to contact or access them. However, it will introduce bias, for example by missing out isolated members of a community or skewing towards certain age or interest groups who recruit amongst themselves. Avoid or reduce sampling errors and bias Using a sample is a kind of short-cut. If you could ask every single person in a population to take part in your study and have each of them reply, you’d have a highly accurate (and very labor-intensive) project on your hands. 
But since that’s not realistic, sampling offers a “good-enough” solution that sacrifices some accuracy for the sake of practicality and ease. How much accuracy you lose out on depends on how well you control for sampling error, non-sampling error, and bias in your survey design. Our blog post helps you to steer clear of some of these issues. How to choose the correct sample size Finding the best sample size for your target population is something you’ll need to do again and again, as it’s different for every study. To make life easier, we’ve provided a sample size calculator. To use it, you need to know your - Population size - Confidence level - Margin of error (confidence interval) If any of those terms are unfamiliar, have a look at our blog post on determining sample size for details of what they mean and how to find them.
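The calculator itself isn’t reproduced here, but for reference, a minimal sketch of the standard formula most such calculators use (Cochran’s formula with a finite-population correction) follows; the z-scores and the conservative 50% assumed proportion are textbook conventions, not values taken from this article:

```python
import math

# Conventional z-scores for common confidence levels
Z_SCORES = {90: 1.645, 95: 1.96, 99: 2.576}

def sample_size(population, confidence=95, margin=0.05, p=0.5):
    """Cochran's formula plus a finite-population correction.
    margin is the margin of error (confidence interval) as a
    fraction; p=0.5 is the most conservative assumed proportion."""
    z = Z_SCORES[confidence]
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)        # correct for finite population
    return math.ceil(n)

print(sample_size(population=10_000, confidence=95, margin=0.05))  # → 370
```

So for a population of 10,000 at 95% confidence and a ±5% margin of error, roughly 370 responses are needed; note how weakly the required sample grows with population size, which is part of why sampling scales so well.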
https://www.qualtrics.com/au/experience-management/research/sampling-methods/
The Berlin Convention Office of visitBerlin has released six-month figures for conventions and meetings in the German capital. Berlin hosted 62,000 events, an increase of 1.5 per cent on the same period last year, with a total of 4.82 million participants, likewise a 1.5 per cent increase. The number of overnight stays related to these meetings increased by 3 per cent to 3.3 million. Berlin is increasingly establishing itself as a centre for science and medical events. Based on the number of events, conferences and conventions related to medicine, science and research remain the leading category in Berlin (12% of all events, +1% compared to last year). They are followed by IT, electronics & communication and politics & public institutions (11% each). Almost one in five event participants (18%) came from abroad. The main international markets are Europe (15% of all participants) as well as the USA and Canada (8% of all participants). The average event duration was 1.9 days (first half 2014: 1.8 days). With 3.3 million overnight stays, around a quarter of the total 13.8 million hotel stays in Berlin in the first half of the year were generated by conference and meeting participants. Spending by meeting participants who stay overnight in Berlin comes to an average of €232 per day. Berlin's meeting and convention industry generated €2.2 billion in revenues in 2014. For Berlin, the visits to the city generated by tourism and the meetings industry are an enormous factor driving the local economy. In the past decade, the gross revenues generated by visitors to the German capital have grown by 82 per cent to over €10 billion a year. This represents €20 more being spent every day by each visitor to Berlin than in 2004 (+15%). This also benefits the city's labour market. Approximately 240,000 people are estimated to make their living here from tourism, up 70,000 from ten years ago. 
Among the highlights in the second half of 2015 will be the congress of the European Society of Intensive Care Medicine, "ESICM LIVES 2015", bringing some 5,000 participants to the city from 3 to 10 October. This will be followed by the “World Health Summit”, internationally one of the most important strategic forums for global health, bringing more than 1,300 experts from more than 80 countries to Berlin from 11 to 13 October. In the weeks following, 11,000 participants will come to Berlin for the “German Congress of Orthopaedics and Trauma Surgery” from 20 to 23 October, followed by the 2015 congress of the German Society for Psychiatry, Psychosomatics and Psychotherapy from 25 to 28 November, with some 9,000 attendees expected. The second half of the year is also bringing innovative conferences to the German capital such as “NewsXchange” (28 to 29 October) and the “Global Social Business Summit” (4 to 6 November). “NewsXchange” is a platform for the international media industry, where current developments and trends in the media scene and journalism will be discussed. The Global Social Business Summit will bring together experts and decision makers from economics, society, politics and science to further develop the idea of socially responsible and sustainable entrepreneurship.
https://ftnnews.com/mice/28493-berlin-continues-to-be-germany-s-first-place-to-be-for-conventions
The Antibiotics Crisis: How Did We Get Here And Where Do We Go Next? In recent years there has been a lot of news about the impending antibiotics crisis, brought to a head by renewed awareness that we are running out of drugs to treat evolving superbugs, and by the startling revelation following the NDM-1 discovery that microorganisms are also capable of sharing bits of themselves with each other to thwart even our most powerful last-line antibiotics. Is this the beginning of the end of antibiotics, as some scientists are predicting? Are we about to return to a pre-penicillin world where a common bacterial infection could be a death sentence? Or are we just at the cusp of a new wave of inventions that will spur a new generation of drugs to keep us ahead in the evolutionary race against harmful microorganisms? This article does not answer these questions, but attempts to present a digest of key facts and recent developments to illuminate the issues around them. It starts with a summary of what we mean by antibiotics and what they can and cannot treat. It then goes on to explain how antibiotic resistance arises, including the problem of multiple drug resistance, and why many experts say widespread and misguided use is to blame for the accelerated rate at which resistance has become a global problem, as is the dearth of new drug development. It then describes some of the things researchers and organizations say we can do to slow down the development of superbugs, and ends with a round-up of some surprising new directions that could offer alternative solutions. Antibiotics are drugs that kill microorganisms like bacteria, fungi and parasites. They do not work against viruses because viruses are not microorganisms. 
When the press and media talk about antibiotics, they generally mean drugs that kill bacteria, because most of the stories that have been hitting the headlines in recent years are about antibiotic-resistant bacteria or "superbugs" like methicillin-resistant Staphylococcus aureus (MRSA). Bacteria are very small creatures of usually only one cell, comprising internal cell structures but no distinct nucleus, surrounded by a cell wall. They can make their own proteins and reproduce themselves as long as they have a source of food. As far as humans are concerned, some bacteria are friendly and essential to wellbeing: they do helpful things like break down food in our gut, while others are dangerous because they attack our tissue and cells to make their food, or they produce toxins that poison and kill. Some bacteria cause no harm while they live in one part of the body, but then become potentially deadly once they enter the bloodstream. A good example is Escherichia coli (E. coli), which lives in the human gut and helps break down food, but if it enters the bloodstream (e.g. through a perforation in the intestines), it can cause severe cramping, diarrhea, and even death from peritonitis if not treated promptly. Another example is Staphylococcus, which lives harmlessly on human skin or even in our nostrils, but if it enters the bloodstream, it can lead to potentially fatal conditions like toxic shock syndrome. Our immune system has special cells that recognize bacteria as foreign agents and mobilize existing counter-agents or antibodies, or trigger the production of new antibodies, to attack and destroy the bacteria before they get a chance to seize a foothold and start replicating inside us. However, sometimes we lose the fight and succumb to infection, and in some cases, without treatment, the consequences can be very severe and even deadly. 
Antibiotics have made a big difference to mankind's fight against infectious microorganisms and have vastly improved the conditions and chances of success in many fields of medicine all over the world. They work because they block a life-sustaining function in the unwelcome microorganism. Some stop the microorganism from being able to make or maintain a cell wall, while others target a particular protein that is vital for survival or replication. An example of the former is penicillin, the first commercially available antibiotic, which Alexander Fleming discovered in 1928. Penicillin stops bacteria like Strep (Streptococcus, a bacterium that is commonly found on skin or in the throat) from making strong cell walls. Before the introduction of penicillin in World War II, soldiers were more likely to die of bacterial infections than from their wounds. Viruses are not microorganisms and do not appear to be "alive" at all: they are particles consisting of DNA or RNA, some long molecules, and a protein coat. They are much smaller than bacteria, have none of their internal cell machinery, and no cell wall. They cannot replicate on their own: to replicate they have to get inside host cells and hijack their resources. And here lies a clue as to why we have a global problem with antibiotics and antibiotic resistance: too many doctors and healthcare professionals, often encouraged by patient demand, have been prescribing antibiotics to treat viral infections. This leads to imprudent use of antibiotics and greater opportunity for bacteria to mutate into resistant forms. Microorganisms are always evolving. By chance, every now and again, a generation gives rise to offspring with slightly different genes to their forebears, and the ones whose variations confer a survival advantage, eg to make better use of a resource or withstand an environmental stress, get to produce more offspring.
Now add to that scenario the efforts of mankind: the production of antibiotics that are designed to kill off bacteria. From the perspective of microorganisms, this is just another environmental stress, or "selective pressure", that ensures those with the survival advantage get to produce proportionally more offspring next time around. By chance, this survival advantage could be a slightly different protein or cellular mechanism from the one targeted by the antibiotic. Now you have a recipe for breeding resistant mutants, while killing off the ones with no resistance. Eventually, as long as there is enough exposure to the antibiotic, the dominant strain becomes the resistant one. In fact, several mechanisms have evolved in bacteria to make them antibiotic resistant. Some chemically modify the antibiotic, rendering it inactive, some physically expel it from the bacterial cell, and others change the target site so the antibiotic can't find it or latch onto it. This evolutionary process is further boosted by the fact that bacteria also "swap" bits of genetic material, thus increasing the opportunity for bits that confer survival advantage to spread "horizontally" among species and not just "vertically" down generations of the same species. This is known as "horizontal gene transfer", or HGT. An example of HGT that hit the headlines in 2010 is the transfer of a piece of genetic material that codes for NDM-1 (New Delhi metallo-beta-lactamase), an enzyme that destroys antibiotics, even (and this is why NDM-1 is cause for alarm) the super-strong carbapenems, which are generally reserved for use in emergencies and the treatment of infections caused by multiple-drug-resistant bacteria. NDM-1 is most often seen in Klebsiella pneumoniae and E. coli. Many of the antibiotics in use today are chemically synthesized cousins of naturally occurring molecules that evolved in microorganisms over millions of years, as they fought for dominance over limited resources.
They themselves evolved the means to produce, and to overcome, different antibiotic molecules. But the problem we are seeing now, of rising antibiotic resistance, has not taken millions of years but only decades to come about, so what might explain that? When we began to use antibiotic molecules to treat bacterial infections, we exposed far more bacteria to much higher levels of antibiotics than they would come across in the natural world, producing an effect that the British Society for Immunology describes as "evolution in real time". In fact, resistance to antibiotics is not a new thing, and the early signs started quite soon after their introduction. For instance, resistance to streptomycin, chloramphenicol, tetracycline and the sulfonamides was noted in the 1953 Shigella dysentery outbreak in Japan, only a decade after those drugs were introduced. Many experts believe that it is our widespread, and often misguided, use of antibiotics to treat humans and animals that is responsible for the vastly accelerated speed at which antibiotic-resistant microorganisms have evolved. However, while numerous studies have shown there is a dynamic relationship between the prescribing of antibiotics and the levels of antibiotic resistance in populations, too many doctors still prescribe antibiotics to patients to treat viral infections like coughs and colds. Some suggest this habit persists because doctors and patients fail to recognize that a course of antibiotics can result in resistance in a single person: they assume it is a population effect. Others may also not realize the full extent of the risks to health of inappropriate prescribing. In a study published last year in Infection Control and Hospital Epidemiology, US researchers found that giving patients antibiotics for viral infections not only did not benefit them, but may even have harmed them.
For instance, a significant number of the patients they studied developed Clostridium difficile diarrhea, a bacterial condition linked with antibiotic use. The problem of medical over-use is not just confined to the US. For instance, in most European countries, antibiotics are the second most widely used drugs after simple analgesics. Also, prescription drugs are not the only source of antibiotics in the environment to put "selective pressure" on bacteria. Antibiotics are in food and water. In the US, for example, giving antibiotics to animals is routine on large, concentrated farms that breed beef cattle, pigs and poultry for human consumption. The drugs are given not just to cure infection in sick animals, but also to prevent infection and promote faster growth in healthy animals. The antibiotics then find their way via effluent from houses and feedlots into the water systems and contaminate streams and groundwater. Such routine use of antibiotics affects not only the animals and the people who eat them, but also all those who consume the contaminated water. In his comprehensive and highly readable online "Textbook of Bacteriology", Dr Kenneth Todar, an emeritus lecturer in Microbiology at the University of Wisconsin-Madison, calls this a "double hit", because "...we get antibiotics in our food and drinking water, and we meanwhile promote bacterial resistance". For this reason, the European Union and other industrialized nations have banned feeding antibiotics to animals, and recently the US Food and Drug Administration (FDA) started urging farmers to limit their use of antibiotics. In fact, after decades of deliberation, it appears the FDA may be poised to issue its tightest guidelines yet on the use of antibiotics in animals, with the intention of bringing to an end the use of the drugs simply to make animals grow faster.
Todar says that the "non-therapeutic use of antibiotics in livestock production makes up at least 60 per cent of the total antimicrobial production in the United States", so this is not a small thing. Another industry that is starting to be a cause for concern is genetically modified crops, because some have antibiotic-resistant genes inserted as "markers". The marker genes are introduced into the crop plant during the early stages of development for scientific reasons (eg to help detect herbicide-resistant genes), but then serve no further purpose and are left in the final product. Some people have criticized this approach because they say it could be a way for microorganisms in the environment to acquire the antibiotic-resistant genes. Todar says that in some cases, these "marker genes confer resistance to front-line antibiotics such as the beta-lactams and aminoglycosides". As bacteria have evolved and acquired resistance to antibiotics, we have tried to stay one step ahead by developing new drugs and adopting a protocol of first, second and last-line treatment. Last-line treatment drugs are reserved for patients whose bacterial infection is resistant to first and second-line treatments. But we are now seeing more and more multiple-drug-resistant (MDR) bacteria that are able to resist even last-line treatments. In December 2010, the journal Infection Control and Hospital Epidemiology published a study that reported a three-fold increase in cases involving drug-resistant strains of Acinetobacter in US hospitals from 1999 to 2006. This dangerous bacterium strikes patients in Intensive Care Units (ICUs), often causing severe pneumonia or bloodstream infections, some of which are now resistant to imipenem, a last-line treatment antibiotic.
The researchers said that a lot of attention was being paid to MRSA, but we should also be worried about other bacteria like Acinetobacter because there are even fewer drugs in the development pipeline and we are running out of treatment options. As well as affecting ICU and other patients, Acinetobacter infections are arising in soldiers returning from the war in Iraq. It would appear that a contributing factor to the surge in MDR bacteria, or "superbugs", is that they spread from patient to patient in hospitals and long-term care facilities like nursing homes. A study published in the journal Clinical Infectious Diseases in June 2005 found that living in a long-term care facility, being 65 or older, or taking antibiotics for two or more weeks were all factors that increased patients' likelihood of carrying MDR bacteria upon admission to a hospital. Also, more recent research suggests that the problem of MDR may be more than just genetic. In a study published online in January 2011 in the Journal of Medical Microbiology, researchers proposed that a non-genetic mechanism called "persistence" makes bacteria temporarily hyper-resistant to all antibiotics at once. They found that "persister" cells of Pseudomonas aeruginosa, an opportunistic human pathogen and a significant cause of hospital-acquired infections, were able to survive normally lethal levels of antibiotics without being genetically resistant to the drugs. One of the reasons why the threat of antibiotic resistance, despite being around for decades, is only now being taken so seriously is that there has been a massive decline in the development of new antibiotics. Since the discovery of the first two classes of antibiotic over 70 years ago, penicillin in 1928 and the first sulfonamide, prontosil, in 1932, the ensuing decades have given rise to a total of 13 classes of antibiotic, some now in their fifth generation.
At the peak of development, new drugs were coming out at a rate of 15 to 20 every ten years, but in the last ten years we have seen only six new drugs, and, according to an article in the May 2010 issue of the BMJ titled "Stoking the Antibiotic Pipeline", only two new drugs are under development, and both are in the early stages when failure rates are high. In that article, authors Chantal Morel and Elias Mossialos of the London School of Economics and Political Science note that in 2004, only 1.6 per cent of drugs in the pipeline of the world's 15 largest drug companies were antibiotics, and give a number of reasons why the companies have reduced investment in antibiotics research. Among these (ironically) is the fact that doctors are being encouraged to restrict the use of antibiotics to the more serious cases, and that antibiotics are not as profitable as drugs that mitigate symptoms. Plus, of course, the spectre of antibiotic resistance means the lifespan of a new drug is likely to be curtailed, which means smaller returns on investment. This bleak scenario prompted Professor Tim Walsh of the UK's Cardiff University and colleagues, who in the September 2010 Lancet Infectious Diseases told us about NDM-1 and its threat to public health worldwide, to ask the question, "Is this the end of antibiotics?" "We have a bleak window of maybe 10 years, where we are going to have to use the antibiotics we have very wisely, but also grapple with the reality that we have nothing to treat these infections with," said Walsh. "In many ways, this is it," he said, "this is potentially the end." The British Society for Immunology agrees: the idea that all you have to do to keep on fighting the bacteria successfully is come up with "something new" every year no longer works when the pipeline for new drugs runs dry, they say.
Against this prospect of a bleak future for our fight against harmful bacteria, with many experts saying it will take decades to reverse the dearth in research and development of antibacterial treatments, governments appear to be converging on a two-pronged approach: accelerate the development of new drugs, and be very prudent with how we use our current and future arsenal of antibiotics so as to minimize exposure and slow down the evolution of drug-resistant strains of infectious bacteria. With the first of these strategies in mind, the European Council and the US have recently set up task forces and committees to spur the research and development of new antibacterial drugs, with the goal of developing 10 new drugs by 2020. To do this will take a huge concerted effort, plus significant changes in funding and legislation. In their BMJ paper, Morel and Mossialos suggest a range of mechanisms to encourage drug companies to develop new antibiotics. These include "push" mechanisms to subsidize early research, "pull" mechanisms to reward results, some significant changes to laws and regulations, and others that use a combination of methods. For instance, under "push" mechanisms they suggest tax incentives tied to early research activities, plus greater funding of public-private partnerships and schemes that train new and experienced researchers, promote multidisciplinary collaboration and create open access resources such as molecule libraries. And under "pull" mechanisms they suggest introducing schemes to purchase drugs at pre-agreed prices and volumes, plus prizes and lump sum rewards, including the option of allowing developers to choose between keeping ownership of the patent for a new drug, or being bought out of it with a financial lump sum.
To accelerate the timescale of drug development, Morel and Mossialos also suggest ways to speed up assessment, and propose that some, or even a large proportion, of phase III trials should be allowed to take place after the drug is already on the market. They also suggest relaxing anti-trust laws to encourage developers of products with similar resistance-related characteristics to work together, eg so as to reduce the risk of drug resistance arising from different products for the same condition. Another idea is to give antibiotic drugs "orphan-like" status, a scheme currently used in Europe to incentivize drug companies to make drugs for rare diseases, which includes help with protocols, tax incentives, fee reductions before and after authorization, and 10-year market exclusivity. As Morel and Mossialos point out, none of this will work if we don't at the same time dismantle the current "incentive structures that lead to overuse of antibiotics, which is currently fueling the spread of resistant bacteria". However, despite this rather pessimistic backdrop, there appears to be a faint glimmer of optimism among some scientists who believe that the tide is already starting to turn. In a paper published in the July 2009 issue of the International Journal of Antimicrobial Agents, Dr Ursula Theuretzbacher, founder and principal of the Center for Anti-Infective Agents in Vienna, Austria, wrote that innovation in antibiotic drugs "proceeds in waves", and that "interest in antibiotics, particularly in drugs effective against MDR Gram-negative bacteria, is back". She said we appear to be at the start of a new wave that will hopefully yield new antibiotic drugs in about 10 to 15 years' time; but she agrees with many others who say that in the meantime we must continue to address the problem with "a multifaceted set of solutions based on currently available tools".
A November 2010 article in the New York Times also hints at a new wave, suggesting signs that the drug industry is picking up on its own. This is supported by figures from the FDA that show the number of antibiotics in clinical trials has gone up in the last three years, which the New York Times says is mostly due to the efforts of small drug companies, who can be satisfied with lower sales volumes. Whether "push and pull", or any other incentives, can help stoke the research and development pipeline, it still makes sense to make prudent use of antibiotics, because unnecessary exposure just gives bacteria another opportunity to develop resistance. The consensus appears to be that a multifaceted strategy is needed. Recommended measures include:
- ongoing education of prescribers and users of antibiotics;
- evidence-based guidelines and policies for hospitals and healthcare settings, including improving hospital hygiene, and improved prescribing practices;
- monitoring of hospital antibiotic resistance and antibiotic use;
- optimizing the timing and duration of antibiotics for surgery, to lower surgical site infections and reduce the emergence of resistant bacteria (in some cases, shorter rather than longer treatments can be given without affecting patient outcomes, and lower the frequency of antibiotic resistance);
- taking samples before therapy, monitoring culture results, and streamlining the use of antibiotics based on these results, which can reduce unnecessary use of antibiotics.
Because antibiotic exposure is linked to the emergence of antibiotic resistance, doctors in particular are urged to:
- take responsibility for promoting appropriate use of antibiotics in order to keep antibiotics effective;
- only prescribe antibiotics when necessary;
- base antibiotic prescriptions on a symptomatic diagnosis and not on patient pressure;
- use their status as an authoritative source of information to advise patients on the risks of inappropriate antibiotic use.
"Antibiotics cure bacterial infections, not viral infections such as colds or flu, most coughs and bronchitis, sore throats not caused by strep, or runny noses". Get Smart includes a comprehensive set of education materials for doctors and patients, and also urges doctors not to give way to patient pressure and to educate their patients about appropriate use of antibiotics. The message appears to be getting through, because National Ambulatory Medical Care Survey (NAMCS) data shows that the Get Smart Campaign contributed to a 25% reduction in antimicrobial use per outpatient office visit for presumed viral infection, and has reduced antibiotic prescriptions for children under 5 in ambulatory ear infection visits: in 2007, there were 47.5 antibiotic prescriptions per 100 visits, down from 61 in 2006 and 69 in 1997. Cold plasma therapy: A team of Russian and German scientists found that a ten-minute treatment with low temperature plasma (high energy ionized gas) killed drug-resistant bacteria causing wound infections in rats and increased the rate of wound healing by damaging microbial DNA and surface structures. Their study appears in the January 2010 issue of the Journal of Medical Microbiology. Fungus-farming ants: Researchers at the University of East Anglia in the UK found that ants, who tend farms of fungi that they grow to feed their larvae and queen, use antibiotics to inhibit the growth of unwanted microorganisms. The antibiotics are made by actinomycete bacteria that live on the ants in a mutual symbiosis. The researchers said they not only found a new antibiotic, but they also learned important clues that can teach us how to slow drug-resistant bacteria. The study appeared in the journal BMC Biology in August 2010. 
Natural enzymes in body fluids: A US team from Georgia Institute of Technology and the University of Maryland has developed a pioneering method of identifying naturally occurring "lytic enzymes" found in body fluids like tears and saliva that are capable of attacking harmful bacteria, including antibiotic-resistant ones like MRSA, while leaving friendly bacteria alone. The study appeared in the journal Physical Biology in October 2010. Good Samaritan bacteria: Dr James Collins, a biologist at Boston University, and his team were astonished to find an example of Good Samaritan behavior among bacteria, whereby resistant mutants were secreting a molecule called "indole" that thwarts their own growth but helps other bacteria survive by triggering drug-expelling pumps on their cell membranes. The team hope their research on "bacterial charity", which appeared in a September 2010 issue of Nature, will spur the development of more powerful antibiotics. The current crisis in antibiotic therapy may also mean that we turn our attention to other, long forgotten ways of overcoming microorganisms. One of these is phage therapy, which has been practised in the former Soviet Union since the days of Stalin. Phages are natural viruses that specifically infect and kill target bacteria, in a similar way to the lytic enzymes discovered by the US team reported in the Physical Biology study. The discovery of antibiotics is thought to have turned Western countries away from phage therapy, but there are reports that soldiers with dysentery in World War I were successfully treated with phages, as were cholera victims in India in the 1920s. The Eliava Institute of Bacteriophage, Microbiology, and Virology (EIBMV) in Tbilisi, Georgia receives patients from all over the world for treatment with phage therapy.
They have successfully treated patients with chronic conditions like sinusitis, urinary tract infections, prostatitis, methicillin-resistant Staph infections, and non-healing wounds, according to an article that appeared in Genetic Engineering and Biotechnology News in October 2008. EIBMV have a large phage collection and have recently partnered with a California-based company to bring their expertise to a wider international market. Sources: Medical News Today Archives; MedicineNet.com; ExplorePAHistory.com; "The Future of Antibiotics", British Society for Immunology, May 2010; So, Gupta and Cars, "Tackling antibiotic resistance", BMJ 2010, 340:c2071; "Antibiotic resistance", European Research in Action Leaflet, European Commission, Aug 2003; Shiley, Lautenbach, and Lee, "The Use of Antimicrobial Agents after the Diagnosis of Viral Respiratory Tract Infections in Hospitalized Adults: Antibiotics or Anxiolytics?", Infection Control and Hospital Epidemiology, Nov 2010, 31:11; Pop-Vicas and D'Agata, "The Rising Influx of Multidrug-Resistant Gram-Negative Bacilli into a Tertiary Care Hospital", Clinical Infectious Diseases, Jun 2005, 40:12; De Groote et al, "Pseudomonas aeruginosa fosfomycin resistance mechanisms affect non-inherited fluoroquinolone tolerance", Journal of Medical Microbiology, 2011; Morel and Mossialos, "Stoking the antibiotic pipeline", BMJ 2010, 340:c2115; Kumarasamy, Toleman, Walsh et al, "Emergence of a new antibiotic resistance mechanism in India, Pakistan, and the UK: a molecular, biological, and epidemiological study", Lancet Infectious Diseases, 10 (9), Sep 2010; Sarah Boseley, "Are you ready for a world without antibiotics?"
Guardian, 12 Aug 2010; Theuretzbacher, "Future antibiotics scenarios: is the tide starting to turn?", International Journal of Antimicrobial Agents, 34 (1), Jul 2009; Andrew Pollack, "Antibiotics Research Subsidies Weighed by US", New York Times, 5 Nov 2010; "Questions and answers about NDM-1 and carbapenem resistance", Health Protection Agency, 2010; Erik Eckholm, "US Meat Farmers Brace for Limits on Antibiotics", New York Times, 14 Sep 2010; Todar's Online Textbook of Bacteriology; "Bacteriophage-Based Antibiotic Therapy", Genetic Engineering and Biotechnology News, Oct 2008. Paddock, Catharine. "The Antibiotics Crisis: How Did We Get Here And Where Do We Go Next?." Medical News Today. MediLexicon, Intl., 10 Jan. 2011. Web.
https://www.medicalnewstoday.com/articles/213193.php
# Aquarius (constellation)
Aquarius is an equatorial constellation of the zodiac, between Capricornus and Pisces. Its name is Latin for "water-carrier" or "cup-carrier", and its old astronomical symbol is (♒︎), a representation of water. Aquarius is one of the oldest of the recognized constellations along the zodiac (the Sun's apparent path). It was one of the 48 constellations listed by the 2nd century astronomer Ptolemy, and it remains one of the 88 modern constellations. It is found in a region often called the Sea due to its profusion of constellations with watery associations such as Cetus the whale, Pisces the fish, and Eridanus the river. At apparent magnitude 2.9, Beta Aquarii is the brightest star in the constellation.
## History and mythology
Aquarius is identified as GU.LA "The Great One" in the Babylonian star catalogues and represents the god Ea himself, who is commonly depicted holding an overflowing vase. The Babylonian star-figure appears on entitlement stones and cylinder seals from the second millennium BC. It contained the winter solstice in the Early Bronze Age. In Old Babylonian astronomy, Ea was the ruler of the southernmost quarter of the Sun's path, the "Way of Ea", corresponding to the period of 45 days on either side of winter solstice. Aquarius was also associated with the destructive floods that the Babylonians regularly experienced, and thus was negatively connoted. In ancient Egyptian astronomy, Aquarius was associated with the annual flood of the Nile; the banks were said to flood when Aquarius put his jar into the river, beginning spring. In the Greek tradition, the constellation came to be represented simply as a single vase from which a stream poured down to Piscis Austrinus. The name in the Hindu zodiac is likewise kumbha "water-pitcher". In Greek mythology, Aquarius is sometimes associated with Deucalion, the son of Prometheus who built a ship with his wife Pyrrha to survive an imminent flood.
They sailed for nine days before washing ashore on Mount Parnassus. Aquarius is also sometimes identified with beautiful Ganymede, a youth in Greek mythology and the son of Trojan king Tros, who was taken to Mount Olympus by Zeus to act as cup-carrier to the gods. Neighboring Aquila represents the eagle, under Zeus' command, that snatched the young boy; some versions of the myth indicate that the eagle was in fact Zeus transformed. An alternative version of the tale recounts Ganymede's kidnapping by the goddess of the dawn, Eos, motivated by her affection for young men; Zeus then stole him from Eos and employed him as cup-bearer. Yet another figure associated with the water bearer is Cecrops I, a king of Athens who sacrificed water instead of wine to the gods.
### Depictions
In the second century, Ptolemy's Almagest established the common Western depiction of Aquarius. His water jar, an asterism itself, consists of Gamma, Pi, Eta, and Zeta Aquarii; it pours water in a stream of more than 20 stars terminating with Fomalhaut, now assigned solely to Piscis Austrinus. The water bearer's head is represented by 5th magnitude 25 Aquarii while his left shoulder is Beta Aquarii; his right shoulder and forearm are represented by Alpha and Gamma Aquarii respectively.
### In Eastern astronomy
In Chinese astronomy, the stream of water flowing from the Water Jar was depicted as the "Army of Yu-Lin" (Yu-lim-kiun or Yulinjun, Hanzi: 羽林君). The name "Yu-lin" means "feathers and forests", referring to the numerous light-footed soldiers from the northern reaches of the empire represented by these faint stars. The constellation's stars were the most numerous of any Chinese constellation, numbering 45, the majority of which were located in modern Aquarius. The celestial army was protected by the wall Leibizhen (垒壁阵), which counted Iota, Lambda, Phi, and Sigma Aquarii among its 12 stars. 88, 89, and 98 Aquarii represent Fou-youe, the axes used as weapons and for hostage executions.
Also in Aquarius is Loui-pi-tchin, the ramparts that stretch from 29 and 27 Piscium and 33 and 30 Aquarii through Phi, Lambda, Sigma, and Iota Aquarii to Delta, Gamma, Kappa, and Epsilon Capricorni. Near the border with Cetus, the axe Fuyue was represented by three stars; its position is disputed and may have instead been located in Sculptor. Tienliecheng also has a disputed position; the 13-star castle replete with ramparts may have possessed Nu and Xi Aquarii but may instead have been located south in Piscis Austrinus. The Water Jar asterism was seen by the ancient Chinese as the tomb, Fenmu. Nearby, the emperors' mausoleum Xiuliang stood, demarcated by Kappa Aquarii and three other collinear stars. Ku ("crying") and Qi ("weeping"), each composed of two stars, were located in the same region. Three of the Chinese lunar mansions shared their name with constellations. Nu, also the name for the 10th lunar mansion, was a handmaiden represented by Epsilon, Mu, 3, and 4 Aquarii. The 11th lunar mansion shared its name with the constellation Xu ("emptiness"), formed by Beta Aquarii and Alpha Equulei; it represented a bleak place associated with death and funerals. Wei, the rooftop and 12th lunar mansion, was a V-shaped constellation formed by Alpha Aquarii, Theta Pegasi, and Epsilon Pegasi; it shared its name with two other Chinese constellations, in modern-day Scorpius and Aries.
## Features
### Stars
Despite both its prominent position on the zodiac and its large size, Aquarius has no particularly bright stars, its four brightest stars all being dimmer than magnitude 2. However, recent research has shown that there are several stars lying within its borders that possess planetary systems. The two brightest stars, Alpha and Beta Aquarii, are luminous yellow supergiants, of spectral types G0Ib and G2Ib respectively, that were once hot blue-white B-class main sequence stars 5 to 9 times as massive as the Sun.
The two are also moving through space perpendicular to the plane of the Milky Way. Just shading Alpha, Beta Aquarii is the brightest star in Aquarius, with an apparent magnitude of 2.91. It also has the proper name of Sadalsuud. Having cooled and swollen to around 50 times the Sun's diameter, it is around 2200 times as luminous as the Sun. It is around 6.4 times as massive as the Sun and around 56 million years old. Sadalsuud is 540 ± 20 light-years from Earth. Alpha Aquarii, also known as Sadalmelik, has an apparent magnitude of 2.94. It is 520 ± 20 light-years distant from Earth, and is around 6.5 times as massive as the Sun and 3000 times as luminous. It is 53 million years old. γ Aquarii, also called Sadachbia, is a white main sequence star of spectral type A0V that is between 158 and 315 million years old and is around two and a half times the Sun's mass, and double its radius. Of magnitude 3.85, it is 164 ± 9 light years away. It has a luminosity of 50 L☉. The name Sadachbia comes from the Arabic for "lucky stars of the tents", sa'd al-akhbiya. δ Aquarii, also known as Skat or Scheat, is a blue-white A2 spectral type star of apparent magnitude 3.27 and luminosity of 105 L☉. ε Aquarii, also known as Albali, is a blue-white A1 spectral type star with an apparent magnitude of 3.77, an absolute magnitude of 1.2, and a luminosity of 28 L☉. ζ Aquarii is an F2 spectral type double star; both stars are white. Overall, it appears to be of magnitude 3.6 and luminosity of 50 L☉. The primary has a magnitude of 4.53 and the secondary a magnitude of 4.31, but both have an absolute magnitude of 0.6. Its orbital period is 760 years; the two components are currently moving farther apart. θ Aquarii, sometimes called Ancha, is a G8 spectral type star with an apparent magnitude of 4.16 and an absolute magnitude of 1.4. κ Aquarii, also called Situla, has an apparent magnitude of 5.03.
λ Aquarii, also called Hudoor or Ekchusis, is an M2 spectral type star of magnitude 3.74 and luminosity of 120 L☉. ξ Aquarii, also called Bunda, is an A7 spectral type star with an apparent magnitude of 4.69 and an absolute magnitude of 2.4. π Aquarii, also called Seat, is a B0 spectral type star with an apparent magnitude of 4.66 and an absolute magnitude of −4.1.

### Planetary systems

Twelve exoplanet systems have been found in Aquarius as of 2013. Gliese 876, one of the nearest stars to Earth at a distance of 15 light-years, was the first red dwarf star to be found to possess a planetary system. It is orbited by four planets, including one terrestrial planet 6.6 times the mass of Earth. The planets vary in orbital period from 2 days to 124 days. 91 Aquarii is an orange giant star orbited by one planet, 91 Aquarii b. The planet's mass is 2.9 times the mass of Jupiter, and its orbital period is 182 days. Gliese 849 is a red dwarf star orbited by the first known long-period Jupiter-like planet, Gliese 849 b. The planet's mass is 0.99 times that of Jupiter and its orbital period is 1,852 days. There are also less-prominent systems in Aquarius. WASP-6, a type G8 star of magnitude 12.4, is host to one exoplanet, WASP-6 b. The star is 307 parsecs from Earth and has a mass of 0.888 solar masses and a radius of 0.87 solar radii. WASP-6 b was discovered in 2008 by the transit method. It orbits its parent star every 3.36 days at a distance of 0.042 astronomical units (AU). It is 0.503 Jupiter masses but has a proportionally larger radius of 1.224 Jupiter radii. HD 206610, a K0 star located 194 parsecs from Earth, is host to one planet, HD 206610 b. The host star is larger than the Sun; more massive at 1.56 solar masses and larger at 6.1 solar radii. The planet was discovered by the radial velocity method in 2010 and has a mass of 2.2 Jupiter masses. It orbits every 610 days at a distance of 1.68 AU.
Much closer to its sun is WASP-47 b, which orbits every 4.15 days only 0.052 AU from its sun, yellow dwarf (G9V) WASP-47. WASP-47 is close in size to the Sun, having a radius of 1.15 solar radii and a mass even closer at 1.08 solar masses. WASP-47 b was discovered in 2011 by the transit method, like WASP-6 b. It is slightly larger than Jupiter with a mass of 1.14 Jupiter masses and a radius of 1.15 Jupiter radii. There are several more single-planet systems in Aquarius. HD 210277, a magnitude 6.63 yellow star located 21.29 parsecs from Earth, is host to one known planet: HD 210277 b. The 1.23 Jupiter mass planet orbits at nearly the same distance as Earth orbits the Sun—1.1 AU, though its orbital period is significantly longer at around 442 days. HD 210277 b was discovered earlier than most of the other planets in Aquarius, detected by the radial velocity method in 1998. The star it orbits resembles the Sun beyond their similar spectral class; it has a radius of 1.1 solar radii and a mass of 1.09 solar masses. HD 212771 b, a larger planet at 2.3 Jupiter masses, orbits host star HD 212771 at a distance of 1.22 AU. The star itself, barely below the threshold of naked-eye visibility at magnitude 7.6, is a G8IV (yellow subgiant) star located 131 parsecs from Earth. Though it has a similar mass to the Sun—1.15 solar masses—it is significantly less dense with its radius of 5 solar radii. Its lone planet was discovered in 2010 by the radial velocity method, like several other exoplanets in the constellation. As of 2013, there were only two known multiple-planet systems within the bounds of Aquarius: the Gliese 876 and HD 215152 systems. The former is quite prominent; the latter has only two planets and has a host star farther away at 21.5 parsecs. The HD 215152 system consists of the planets HD 215152 b and HD 215152 c orbiting their K0-type, magnitude 8.13 sun.
Both discovered in 2011 by the radial velocity method, the two tiny planets orbit very close to their host star. HD 215152 c is the larger at 0.0097 Jupiter masses (still significantly larger than the Earth, which weighs in at 0.00315 Jupiter masses); its smaller sibling is barely smaller at 0.0087 Jupiter masses. The error in the mass measurements (0.0032 and 0.0049 MJ respectively) is large enough to make this discrepancy statistically insignificant. HD 215152 c also orbits further from the star than HD 215152 b, 0.0852 AU compared to 0.0652. On 23 February 2017, NASA announced that ultracool dwarf star TRAPPIST-1 in Aquarius has seven Earth-like rocky planets. Of these, as many as four may lie within the system's habitable zone, and may have liquid water on their surfaces. The discovery of the TRAPPIST-1 system is seen by astronomers as a significant step toward finding life beyond Earth.

### Deep sky objects

Because of its position away from the galactic plane, the majority of deep-sky objects in Aquarius are galaxies, globular clusters, and planetary nebulae. Aquarius contains three deep sky objects that are in the Messier catalog: the globular clusters Messier 2, Messier 72, and the asterism Messier 73. While M73 was originally catalogued as a sparsely populated open cluster, modern analysis indicates the 6 main stars are not close enough together to fit this definition, reclassifying M73 as an asterism. Two well-known planetary nebulae are also located in Aquarius: the Saturn Nebula (NGC 7009), to the southeast of μ Aquarii; and the famous Helix Nebula (NGC 7293), southwest of δ Aquarii. M2, also catalogued as NGC 7089, is a rich globular cluster located approximately 37,000 light-years from Earth. At magnitude 6.5, it is viewable in small-aperture instruments, but a 100 mm aperture telescope is needed to resolve any stars. M72, also catalogued as NGC 6981, is a small 9th magnitude globular cluster located approximately 56,000 light-years from Earth.
M73, also catalogued as NGC 6994, is an open cluster with highly disputed status. Aquarius is also home to several planetary nebulae. NGC 7009, also known as the Saturn Nebula, is an 8th magnitude planetary nebula located 3,000 light-years from Earth. It was given its moniker by the 19th century astronomer Lord Rosse for its resemblance to the planet Saturn in a telescope; it has faint protrusions on either side that resemble Saturn's rings. It appears blue-green in a telescope and has a central star of magnitude 11.3. Compared to the Helix Nebula, another planetary nebula in Aquarius, it is quite small. NGC 7293, also known as the Helix Nebula, is the closest planetary nebula to Earth at a distance of 650 light-years. It covers 0.25 square degrees, making it also the largest planetary nebula as seen from Earth. However, because it is so large, it is only viewable as a very faint object, though it has a fairly high integrated magnitude of 6.0. One of the visible galaxies in Aquarius is NGC 7727, of particular interest for amateur astronomers who wish to discover or observe supernovae. A spiral galaxy (type S), it has an integrated magnitude of 10.7 and is 3 by 3 arcminutes. NGC 7252 is a tangle of stars resulting from the collision of two large galaxies and is known as the Atoms-for-Peace galaxy because of its resemblance to a cartoon atom.

### Meteor showers

There are three major meteor showers with radiants in Aquarius: the Eta Aquariids, the Delta Aquariids, and the Iota Aquariids. The Eta Aquariids are the strongest meteor shower radiating from Aquarius. It peaks between 5 and 6 May with a rate of approximately 35 meteors per hour. Originally discovered by Chinese astronomers in 401, Eta Aquariids can be seen coming from the Water Jar beginning on 21 April and as late as 12 May. The parent body of the shower is Halley's Comet, a periodic comet. Fireballs are common shortly after the peak, approximately between 9 May and 11 May.
The normal meteors appear to have yellow trails. The Delta Aquariids is a double radiant meteor shower that peaks first on 29 July and second on 6 August. The first radiant is located in the south of the constellation, while the second radiant is located near the Circlet of Pisces asterism, to the north. The southern radiant's peak rate is about 20 meteors per hour, while the northern radiant's peak rate is about 10 meteors per hour. The Iota Aquariids is a fairly weak meteor shower that peaks on 6 August, with a rate of approximately 8 meteors per hour.

## Astrology

As of 2002, the Sun appears in the constellation Aquarius from 16 February to 12 March. In tropical astrology, the Sun is considered to be in the sign Aquarius from 20 January to 19 February, and in sidereal astrology, from 15 February to 14 March. Aquarius is also associated with the Age of Aquarius, a concept popular in 1960s counterculture. Despite this prominence, the Age of Aquarius will not dawn until the year 2597, as an astrological age does not begin until the Sun is in a particular constellation on the vernal equinox.
https://en.wikipedia.org/wiki/Aquarius_(constellation)
Accountability alone can’t ensure people in positions of power at workplaces aren’t making decisions based on personal biases, according to a new study. Published in the Academy of Management Journal, the study investigated the “power of cognitive and emotional processing relative to external control.” The researchers noted that “bias suppression,” or making decisions without the interference of one’s personal views and self-interests, is an “essential institutional objective.” One of the ways to enforce this is through accountability, but despite the best of intentions, practicing accountability may not ensure sustained bias suppression, the findings suggest. “Across multiple studies, we found that bias suppression with high accountability induces counterfactual thinking… In other words, the decision-maker questions what would, could, or should have transpired had they chosen differently,” Brittany Solomon, a management professor at the University of Notre Dame, who led the study, said in a statement. “Then they regret the decision they made and ultimately — with subsequent low accountability — reverse their action,” she explained. However, if the accountability is initially low, and subsequently made higher, does that produce the opposite effect on people suppressing their biases? Not quite, the researchers found. In this scenario, a previously biased decision is simply sustained. Explaining the team’s findings through an example, Solomon said that “a manager with high accountability may avoid showing favoritism to a subordinate who is also a friend. 
If the manager no longer feels such pressure in the future, they are more likely to favor that friend over other subordinates… However, a manager who initially has little or no accountability may show favoritism to their friend and continue favoring that friend over subordinates even when they are highly accountable.” So, can an unvarying degree of accountability help organizations ensure sustained suppression of bias from people in charge of making decisions? Solomon’s statement suggests that may not be a solution either. “People often opt to indulge their biases and continue doing so, despite high accountability, because they view the biased decision as the right decision,” she notes. In addition, even if an individual succeeds in suppressing their bias once, their own cognitive and emotional processing will undermine their efforts to continue the suppression more and more with every consecutive decision they make, the study found. “[C]ounterfactual thinking and feelings of regret for not following their personal instincts or preferences are so strong that people tend to reverse their unbiased decisions,” Solomon explained. Since the biggest challenge to suppressing one’s bias appears to be internal, the solution may also need to be designed to accommodate that. For instance, research suggests that people are more likely to associate the trait of ‘brilliance’ with men over women, and a 2018 study had also found that people were almost 40% less likely to refer women for a job if they were led to believe it required high-level intellectual abilities. Given how entrenched this bias is, it’s unlikely the element of accountability alone can convince individuals that they’ve made the right choice by choosing a female candidate for a job instead of choosing a man.
Moreover, research has shown that the most biased individuals are often the ones who don’t even believe gender inequality exists in the workplace, so it’s even more unlikely that they’re aware of their own biases, let alone suppressing them. But obviously, if allowed to persist and interfere with decision-making, biases can impact workplace diversity. So, the researchers recommend instilling organizational values in employees to override their individual biases, rather than just expecting compliance enforced through external means like accountability. Basically, “organizations should be more realistic about employees’ ability or lack thereof to consistently suppress bias,” Solomon says.
https://theswaddle.com/accountability-alone-isnt-enough-to-suppress-workplace-bias-study/
Forces on Charges

Spinning Around Again. Here we are again: today we will have a discussion about charge motion in magnetic fields and some observations of wires in magnetic fields. There is a new WebAssign (aren't you excited??) posted. I am still working on the gradebook so you can see your grades all together. Friday will be a quiz on this week's work, maybe last week's too! Next week, more of the same.

Last session: F = qvB sin(θ), where θ is the angle between B and v. The right-hand rule gives the direction.

A long, straight wire: B = μ0 I / (2πr) (a vector!), where μ0 = 4π × 10^-7 T·m/A is the permeability of free space.

What is the magnetic field at X, 1 meter from a wire carrying a current of 1 ampere out of the paper?
B = μ0 I / (2πr) = (4π × 10^-7 T·m/A)(1 A) / (2π · 1 m) = 2 × 10^-7 T. Direction? NW (at 45°).

REMEMBER the effect of the sign of a moving charge: positive and negative charges will feel opposite effects from a magnetic field.

The Velocity Selector. Let's look at the effect of crossed E and B fields on a charge q of mass m moving with velocity v. What is the relation between the intensities of the electric and magnetic fields for the particle to move in a straight line? FE = qE and FB = qvB. If FE = FB the particle will move following a straight-line trajectory: qE = qvB, so v = E/B.

Clicker question: A charged particle is injected into a magnetic field. Can the field do WORK on the particle? (A) Yes. (B) No. (C) It depends on the direction the particle is moving in with respect to the field. (D) I want to go back on Spring Break!

The Motion of a Charged Particle in a Magnetic Field: the electrical force can do work on a charged particle (the trajectory is a parabola). The magnetic force cannot do work on a charged particle; it always remains perpendicular to the velocity and is directed toward the center of the circular path.
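The straight-wire field and the velocity-selector condition can both be checked numerically. The sketch below is my own (the function names are not from the lecture), assuming SI units throughout.

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def wire_field(current, r):
    """Field a distance r from a long straight wire: B = mu0 * I / (2 * pi * r)."""
    return MU0 * current / (2 * math.pi * r)

def selector_speed(E, B):
    """Crossed-field velocity selector: qE = qvB, so the undeflected speed is v = E / B."""
    return E / B

# The lecture's example: a 1 A current, field evaluated 1 m away -> 2e-7 T.
print(wire_field(1.0, 1.0))
```

Note that the charge q cancels in the selector condition, so the selected speed is the same for any charged particle.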
Circular motion: the magnetic force supplies the centripetal force, Fc = mv²/r, so qvB = mv²/r, which gives r = mv/(qB).

21.4 The Mass Spectrometer. For an ion of charge e, r = mv/(qB) = mv/(eB). The ion is accelerated through a potential difference V, so KE = PE: ½mv² = qV = eV. Combining these gives m = er²B²/(2V), where e is the magnitude of the electron charge. A smaller mass follows a tighter circle.

Kratos Profile HV-3 Gas Chromatograph & Direct Probe Mass Spectrometer. Description: medium-resolution, double-focusing (E/B) magnetic sector mass spectrometer with gas chromatograph and direct probe inlets; electron impact and chemical ionization sources.

Figure 3.5.1: Chromatogram at target concentration. 1 = benzalazine, 2 = AcPP.

EXAMPLE: Molecular structure and mass spectrum of 1-acetyl-4-(2-pyridyl)piperazine. The mass spectrum was obtained with a Perkin-Elmer ion trap detector.

An Example: A beam of electrons whose kinetic energy is K emerges from a thin-foil window at the end of an accelerator tube. There is a metal plate a distance d from this window, perpendicular to the direction of the emerging beam. Show that we can prevent the beam from hitting the plate if we apply a uniform magnetic field B such that B ≥ sqrt(2mK / (e²d²)).

Problem continued: From before, r = mv/(qB). Since K = ½mv², v = sqrt(2K/m). The beam misses the plate if its circular path turns it around first, i.e. if r ≤ d. Then d ≥ mv/(eB) = sqrt(2mK)/(eB); solving for B gives B ≥ sqrt(2mK)/(ed) = sqrt(2mK/(e²d²)).
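As a numerical check on the worked example's algebra (function names and the 1 keV / 5 cm values are my own illustrations): if B is set exactly at the threshold value, the orbit radius should come out equal to d.

```python
import math

M_E = 9.109e-31   # electron mass, kg
E_CH = 1.602e-19  # elementary charge, C

def orbit_radius(m, v, q, B):
    """Radius of circular motion in a magnetic field: r = m*v / (q*B)."""
    return m * v / (q * B)

def min_field(K, d, m=M_E, e=E_CH):
    """Smallest B that keeps a beam of kinetic energy K (joules) from reaching
    a plate a distance d away: B = sqrt(2*m*K) / (e*d), the field making r = d."""
    return math.sqrt(2 * m * K) / (e * d)

K = 1000.0 * E_CH            # a 1 keV electron beam (illustrative)
d = 0.05                     # plate 5 cm from the window (illustrative)
B = min_field(K, d)
v = math.sqrt(2 * K / M_E)   # speed from K = (1/2) m v^2
print(orbit_radius(M_E, v, E_CH, B))  # equals d = 0.05 m, up to rounding
```

Any field stronger than `min_field(K, d)` gives a smaller radius, turning the beam around even sooner.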
https://makeslider.com/slide/forces-on-charges-ucf-physics-o2hpwh
Introduction

Hello it's a me again Drifter Programming! Today we continue with Electromagnetism to get into Mutual Inductance, which is a very interesting topic! So, without further ado, let's get straight into it!

Mutual Inductance

Until now we saw that a conductor generates an induced emf and current within itself as the result of electromagnetic induction, which is caused by a changing magnetic field around it. When this emf is induced from the same circuit to itself, the effect is called Self-Induction (L), something that we will cover next time... The so-called Mutual Inductance is about the interaction of a coil's magnetic field with another coil, as it induces voltage in the adjacent coil. When the emf is induced into an adjacent coil within the same magnetic field, the emf is said to be induced magnetically, inductively or by Mutual Induction, symbolized as M. When two or more coils are magnetically linked together by a common magnetic flux they are said to have the property of Mutual Inductance. We can define mutual induction as the current flowing in one coil inducing a voltage in an adjacent coil. You can clearly see that I said the same thing with "different" words, so that you understand it better :P Mutual Inductance has a lot of applications that we will cover in a bit, but it can also be a bad thing, because "stray" or "leakage" inductance from a coil can interfere with the operation of another adjacent component. To avoid this electromagnetic induction, some form of electrical screening to a ground potential may be required. So, let's now get into the things that this mutual inductance depends on. The magnitude depends on the relative positioning of the two coils. By that I mean the physical distance that the coils are apart. When the distance is small, more of the magnetic flux generated by the first coil will interact with the other coil, causing a large amount of mutual inductance.
When the two coils are farther apart and at different angles, the amount of induced magnetic flux is much weaker than before, so the angle also plays a very important role. Another way of increasing the mutual inductance is by increasing the number of turns of either coil (transformer). A soft iron core or common soft iron core unity coupling can also minimize the losses due to the leakage of flux, something that shows us that the relative permeability also plays an important role. We can summarize all this in the following equation:

M = (μ0 μr N1 N2 A) / l

Where:
- μ0 is the permeability of free space (4π x 10^-7)
- μr is the relative permeability of the soft iron core (or other material placed in-between)
- N1, N2 are the number of turns of each coil
- A is the cross-sectional area in m^2
- l is the coil's length in meters

Note that perfect flux linkage was assumed between the two coils! Mutual inductance is a purely geometric quantity and depends only on the size, number of turns, relative position and relative orientation of the two circuits. In SI units mutual inductance is calculated in henries (H), where 1 H = 1 V·s/A. The most typical unit used in experiments is the milli-henry (mH). Note that the actual currents flowing through the wires don't affect the mutual inductance! Let's now use M for any two arbitrary conducting circuits! Consider two conducting circuits, labelled 1 and 2. A current I1 is flowing around circuit 1 and generating a magnetic field B1, which gives rise to a magnetic flux Φ2 linking it to circuit 2. Doubling the current would double the magnetic field and therefore the flux on the second circuit. This conclusion shows us the linearity of the laws of magnetostatics. With no current (I1 = 0), of course, no magnetic flux links circuit 2. The magnetic flux Φ2 is directly proportional to the current, with the mutual inductance as the proportionality constant:

Φ2 = M21 I1

where M21 is the mutual inductance of circuit 2 with respect to circuit 1.
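To put numbers on this, here is a small sketch assuming the standard two-coil formula M = μ0 μr N1 N2 A / l implied by the variables listed above (the function name and the turn counts/dimensions are my own, chosen only for illustration):

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def mutual_inductance(n1, n2, area, length, mu_r=1.0):
    """M = mu0 * mu_r * N1 * N2 * A / l, assuming perfect flux linkage."""
    return MU0 * mu_r * n1 * n2 * area / length

# 500-turn and 100-turn coils wound on a common 10 cm core of 2 cm^2 cross-section:
M = mutual_inductance(500, 100, area=2e-4, length=0.10)
print(round(M * 1e3, 3), "mH")  # about 0.126 mH
```

Passing `mu_r` greater than 1 models a soft iron core, which multiplies M directly, just as the formula says.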
Similarly, the flux Φ1 in circuit 1, because of the current I2 flowing around circuit 2, is:

Φ1 = M12 I2

Mathematically we can prove that M = M12 = M21, something that I will avoid covering :P Let's consider that the current flowing through circuit 1 changes by an amount dI1 in a time interval dt. Then the magnetic flux changes by an amount dΦ2 = M dI1, and according to Faraday's law the following emf is generated around the second circuit:

emf2 = -M (dI1/dt)

Likewise the emf generated around the first circuit, when the current flowing through circuit 2 changes by an amount dI2 in a time interval dt, causing a magnetic flux change of dΦ1 = M dI2, is:

emf1 = -M (dI2/dt)

Note that there is no direct physical coupling between the two circuits. The coupling is due entirely to the magnetic field generated by the currents flowing around the circuits.

Examples/Applications

Let's now get into some applications!

Tesla coil

Consider a tesla coil, which is built from a coil of N1 turns and a coil of N2 turns, with a length of l and cross-sectional area A. Consider that a current i1 is flowing through the first coil (coil 1) and that a current i2 is flowing through coil 2. Let's get into the induced emf in coil 1 from coil 2... The magnetic field due to coil 1 is:

B1 = μ0 N1 i1 / l

The mutual inductance is:

M = μ0 N1 N2 A / l

The induced emf in coil 1 from coil 2 is:

emf1 = -M (di2/dt)

Transformer

A transformer is built from two circuits, called primary (the actual energy supplier) and secondary, and an iron core. When more current flows in the secondary of a transformer as it supplies more power, then more current must flow in the primary circuit as well, since it's supplying the energy. This coupling between the primary and secondary is most conveniently described in terms of mutual inductance. For ideal coupling the mutual inductance becomes:

M = sqrt(L1 L2)

Applying the voltage law to both circuits of a transformer we get:

V1 = L1 (di1/dt) + M (di2/dt)
V2 = L2 (di2/dt) + M (di1/dt)

The mutual inductance acts as the voltage source for the secondary circuit, as:

emf2 = -M (di1/dt)

A transformer is the most common application of mutual inductance!
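To make the coupled-emf relation emf = -M dI/dt concrete, a minimal sketch (the numbers are my own illustrative values, not from the post):

```python
def induced_emf(M, dI, dt):
    """Emf induced in the second circuit by a current change in the first:
    emf = -M * dI/dt (Faraday's law for mutually coupled circuits)."""
    return -M * dI / dt

# Ramping the primary current by 2 A over 1 ms, with M = 0.126 mH:
print(induced_emf(1.26e-4, dI=2.0, dt=1e-3))  # about -0.25 V
```

The same function gives the emf in either coil, since M12 = M21 = M; only which current is changing matters.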
And this is actually it for today's post! Next time in Physics we will get into Self Induction and maybe even Magnetic Energy... Bye!
https://www.ecency.com/physics/@drifter1/physics-electromagnetism-mutual-inductance
Labradors and Friends™ Dog Rescue Group, Inc. is a non-profit animal rescue group dedicated to rescuing labrador retrievers and labrador mixed breed dogs in California and placing them in loving homes. National Canine Cancer Foundation (NCCF): "The passion that moves us forward is from experiencing what Cancer really does to the ones we love. We are driven because there is a hole in our soul where once was the love of our dog." (Gary D. Nice)
https://campschultz.com/favorites-sites/
An object that sinks in water has negative buoyancy because the amount of water it displaces weighs less than the object itself. Society places an enormous amount of responsibilities on its champions or favorites. But it is enough to slightly alter the whole volume of the sub. The density of the two cylinders can be the same. Its properties never cease to amaze - imagine a chemical that gets more dense as it cools - until it reaches a point where it rapidly becomes less dense! The subimago is covered with small waxy hairs and beads that are hydrophobic. Whirligig beetles are unusual in several instances. Water plays still another role in the global heat balance. No, the volume doesn't have to be the same. When some objected that sure, it works in Alaska because it's so damn cold up there, Exxon promptly repeated the experiment though on a smaller scale at sites all over the United States. This is what happens when a hotter body is subjected to a colder one. All you need to do is to compare boats with the same displacement but different hull designs. He found out that if an object is lighter than the weight of the water it displaces, it will float, but if it is heavier, it will sink. Well I see that I was too hasty with my previous reply. I chose this experiment because I have always wondered how a drop of water stays together as it falls through the air. I hope you understand what I am trying to do. They all have hydrophobic bodies with one exception - the claws. Some mayflies that do come close to the water during their later mating flights may retain some parts of the body particularly the underside in a hydrofuge hydrophobic state. For polyhedra objects with flat polygonal faces the surface area is the sum of the areas of its faces. Of crucial interest to us will be the phytoplankton, those organisms such as algae that photosynthesize, yet are at the whim of the currents as they lack the ability to swim strongly. 
For a floating object, the weight of the object equals the buoyant force, which equals the weight of the displaced fluid. However, if the change in the shape affects the volume of water displaced, then the buoyancy of the object is affected. You probably think of a fluid as a liquid, but a fluid is simply anything that can flow. Buoyancy depends only on the total volume of water displaced and the mass of the object, not the shape of the displacement. There could still be minor inaccuracies in these results because of this problem; however a trend is still easily recognised in the results, so minor inaccuracies are only a small problem. Buoyancy depends only on the mass and displacement. If samples were taken from different areas on a college campus that are assumed to be dense with bacteria, would the… different areas, and especially in the making of ships. This factor cannot be overemphasized when exploring the biological significance of color in aquatic organisms - they must not be judged in terms of their surface coloration; also, unless you know something about the color perception of the organisms in question, coloration is meaningless. My hypothesis was: if water mixed with salt and water mixed with detergent is tested, then detergent will cause the surface tension to weaken the… This experiment examined the relation between wood and aluminum surfaces and their friction. Terrestrial organisms may experience a 30 °C temperature change daily - especially in deserts where there is little water vapor in the air to block heat transfer - more on that later. Yes it does, because the materials of the boat and how much mass they have can change the buoyancy of the toy boat. Because there is a pressure difference between the two ends of the tube, a column of fluid can be maintained in the tube, with the height of the column proportional to the pressure difference. A well is dug 20m deep and it has a diameter 7m.
According to legend, "Eureka!" is what Archimedes cried when he discovered an important fact about buoyancy, so important that we call it Archimedes' principle and so important that Archimedes allegedly jumped from his bath and ran naked through the streets after figuring it out. The buoyant force is equal to the weight of the liquid that is displaced by an object, and its unit, like that of other forces, is the newton (N). In order to understand how a boat can float in water, we must first go over one of the principles behind such a feat: buoyancy. Whether something floats or sinks essentially comes down to whether the buoyant force or the object's weight is greater; if a material's mass is not large, it can help the boat float. Surface tension also plays a role for small objects: with enough soap added, there is not enough force from the water to keep a piece of aluminum on the surface, and it sinks. The shape of an object determines the opposing friction force it encounters. As depth increases, so does the effort you must put into walking through water. Victims of decompression sickness often assume a bent, fetal position, giving rise to the common name for this syndrome: the bends. Fats and oils are also a storage medium for energy and can be used for insulation in warm-blooded animals. Organisms living in sizable, pond-size bodies of water do not experience diurnal temperature changes except at the surface and edges, and only a 30 °C change seasonally, spread out over a period of weeks. Substances that are required for cellular activities, including water, ions, and molecules, can enter and leave cells by a passive process such as diffusion. Underwater, a snorkeler's white t-shirt takes on a bluish cast as the red and yellow light needed to constitute white light are absorbed.
Even a baseball will spin because of the unevenness of how friction acts on the surface facing down.
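The float-or-sink rule running through this section can be sketched in a few lines. This is a minimal illustration of Archimedes' principle as stated above: an object floats if it weighs less than the water it displaces and sinks otherwise. The example densities (oak, steel) are illustrative assumptions.

```python
# Archimedes' principle: an object floats if the weight of the water it
# displaces exceeds the object's own weight. Shape does not matter, only
# the displaced volume and the object's mass.

WATER_DENSITY = 1000.0   # kg/m^3, fresh water

def buoyant_force_n(submerged_volume_m3, fluid_density=WATER_DENSITY, g=9.81):
    """Weight of the displaced fluid, in newtons: the buoyant force."""
    return fluid_density * submerged_volume_m3 * g

def floats(mass_kg, volume_m3, fluid_density=WATER_DENSITY):
    """True if a fully submerged object displaces more mass than its own."""
    displaced_mass = fluid_density * volume_m3
    return mass_kg <= displaced_mass

# A 1 m^3 block of oak (~700 kg) floats; the same volume of steel (~7850 kg) sinks.
print(floats(700.0, 1.0))    # True
print(floats(7850.0, 1.0))   # False
```

Note that the comparison never looks at shape, which is exactly why buoyancy depends only on mass and displaced volume.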
http://blankless.com/how-does-surface-area-affect-buoyancy.html
What’s the Format: 6:00-6:15pm – Networking & Preparation; 6:15-6:30pm – Introductions & Welcome; 6:30-7:30pm – Panel Discussion and Q&A; 7:30-8:00pm – Networking. 09/01/2021 – About the Topic: Immersive technology is especially helpful when it comes to navigating the landscape around us and, most importantly, finding desired locations: a restaurant to eat at, a venue to be entertained in, a museum to be awed in, or transportation routes to travel to your desired destination. Especially while people want to socially distance, virtual and augmented reality have helped the hospitality industry in a number of ways. From simple components such as links to menus, technology can be incredibly powerful for maintaining safer distancing. Hospitality has been one of the industries most impacted by the pandemic, which really highlights the opportunities for immersive technologies to be used and relied on. Join us September 1st in a virtual environment to hear from industry leaders about how they’re using immersive technologies, how the pandemic has impacted their adoption, and where they think we are going in the future. Panel Members: Attending VRAR Chicago Virtual Meetups We encourage folks to join us in the virtual environment Conference Hall by downloading the Virbela open campus client at virbela.com/download. Otherwise, people can join by watching the scheduled live stream at facebook.com/VRARChicago/live_videos – where we will also field questions! See more about getting started and navigating Virbela below. The Virbela virtual environment resembles a physical space, so you will be able to talk with other attendees prior to the start of the presentations. We suggest that you arrive no later than 6:00pm; we will get started promptly at 6:15. Please try to get familiar with the technology prior to the day of the meetup, as we will be unable to provide support during the event.
We will start with informal networking, then host a panel discussion and answer audience questions, and end with more networking. We keep it simple and informal – this meetup is about you, our members, and not just a soap box for the host, presenters, or our sponsors. Come get your questions answered and take your next step in immersive technology. Signing up for Virbela Virbela is available on Mac, Windows, Rift, and Vive. First, create your account and then download the program onto your computer from their registration page. For Rift and Vive, it should be pretty straightforward in their interface. - You will need to verify your email address - Create an avatar - Log into their “open campus” Once logged in and in the Virbela open campus area, walk around with your arrow keys, then click the Go To menu at the top left of the screen. Our program will be in the Conference Hall, so you will need to click that option to join us. System Recommendations - PC or Mac: Windows 7 SP1+ or newer, or Mac OS X 10.11 or newer - Headphones and a microphone - Power cord plugged in for extended use Entering the Virtual Venue – Conference Hall - Log into Virbela - Click the Go To menu at the top left of your screen - Select Campus Locations - Select Conference Hall VRAR Chicago Virtual Environment Code of Conduct Open microphones and/or not using headphones can create an echo effect that can negatively affect others’ experiences - Please use headphones if you can. - Keep your microphone on push-to-talk (using the number 1 key) - Act as if this is a space that you are in physically!
- For the Questions and Answers portion - Click “Raise Hand” at the bottom of your screen - Wait to be called on - Press and hold the 1 key to open your mic - Ask your question and wait a moment - Release the 1 key Virbela Tricks - Press and hold the 1 button to open your microphone to speak - Pressing the spacebar unlocks your view or “head” to look around - You can click someone’s name and click Go To to automatically walk to them - Your Go To menu lets you raise your hand, wave, or even dance! - Click the screen menu at the top of your screen once in the Conference Hall to select your preferred ‘view’ to enjoy the program.
https://vrarchicago.com/the-next-evolution-of-hospitality/
Going public is the preeminent governing strategy of modern presidents. When presidents go public, they attempt to influence the decisions, actions, and opinions of others through speechmaking and other public engagement. Although some scholars of the rhetorical presidency show how presidents have used speeches to govern since the dawn of American democracy, the bulk of scholarship centers on the modern presidency, as both advances in communications technologies and changes in federal policymaking institutions spurred presidents to go public. Going public as a leadership strategy involves a variety of presidential speeches designed to reach a range of institutions and actors. Strategies include going local, speaking on national television, or saturating news coverage by sustaining attention to a top priority. The president’s target audience can be Congress, the public, news media, or bureaucracy. Presidents have had some success going public, although the ways in which presidents have been successful vary by strategy and target audience. Going public is more than just presidential leadership of others. It is also about what incentivizes the president’s efforts to use speeches to govern in the first place. Thus, a second focus of research on going public is what explains speechmaking and the tendency of presidents to respond to those institutions and actors that they also attempt to lead. The majority of existing research centers on presidential leadership of, and responsiveness to, mass public opinion, but the emergence of a more polarized public may influence why presidents go public and may change what political scientists conclude concerning going public and presidential leadership in a more polarized political age.

Article Matthew Eshbaugh-Soha

Article

The multifaceted nature of decentralization, democracy, and development renders relationships among them ambivalent and conditional.
It is certainly possible to decentralize in ways that foster local democracy and improvements in socioeconomic well-being. The empirical record, however, is mixed, and not only because the phenomena of interest have multiple dimensions and are open to interpretation. Whatever its form, decentralization is inherently political. In the African context, the extent and form of decentralization have been influenced by international support, the challenges of extending state authority in relatively young multi-ethnic states, and, increasingly, electoral considerations. By the 1980s, the broad consensus on the constructive developmental role of a strong central state that had characterized the immediate postwar period gave way to a growing perception of statist approaches as impeding democracy and, especially, development. For some, decentralization implied an expansion of popular participation that promised greater sensitivity to local knowledge and more responsiveness to local concerns. Others saw decentralization as part of a broader agenda of scaling back the central state, reducing its role, its size, and its costs. Yet others saw decentralization as part of a strategy of achieving sustainable natural resource management or political stability in post-conflict societies. By the early 1990s, a wide variety of international organizations were promoting decentralization and providing both financial and technical support for decentralization reforms. In the African context, political decisions about whether and how to decentralize reflect the continued salience of ethno-regional identities and non-state authorities, especially traditional or customary leaders. Incumbents may decentralize because they hope to consolidate their political position by crowding out or co-opting rivals, depoliticizing conflicts, or deflecting blame to subnational actors.
Indeed, reforms made in the name of decentralization often strengthen the political center, at least over the short to medium term. Whether it attempts to co-opt or sideline them, decentralization interacts with and may reinforce the salience of ethno-regional identities and traditional authorities. To the extent that democracy presumes the equality of all citizens, regardless of ascribed status or identity, the reinforcement of ethno-regional identities and unelected authorities threatens democracy. The international spread of decentralization reforms coincided with the increasing prevalence of multiparty elections. In countries that hold elections, electoral considerations inevitably influence political interests in decentralization. Central government incumbents may view decentralization as a way to keep voters happy by improving access to and the quality of public services, as a form of political insurance, or as strengthening rivals. Whether incumbents and challengers view decentralization as a threat or an opportunity depends on not only the form of decentralization under consideration, but also their estimations of their competitiveness in elections at various levels (national, regional, local) and the interaction between the spatial distribution of electoral support and the electoral system. Electoral dynamics and considerations also influence the implementation and consequences of decentralization, perhaps especially when political rivals control different levels of government. Whether decentralization promotes democracy and development hinges on not only the form of decentralization, but also how broader political dynamics condition decentralization in practice.

Article Christina J. Schneider

How does domestic politics affect international cooperation?
Even though classic work on international relations already acknowledges the central role of domestic politics in international relations, the first generation of scholarly work on international cooperation focused almost exclusively on the international sources of cooperation. Theories that explicitly link domestic politics and international cooperation did not take a more prominent place in the scholarly work on international cooperation until the late 1980s. Recent research analyzes how interests and institutions at the domestic level affect the cooperation of governments at the international level. The analysis is structured along a political economy model, which emphasizes the decision-making calculus of office-motivated political leaders who find themselves under pressure from different societal groups interested in promoting or hindering international cooperation. These pressures are conveyed, constrained, and calibrated by domestic institutions, which provide an important context for policy making, and in particular for the choice to cooperate at the international level. This standard political economy model of domestic politics is embedded within models of international cooperation, which entail decisions by governments about (a) whether to cooperate (and to comply with international agreements), (b) how to distribute the gains and costs from cooperation, and (c) how to design cooperation so as to maximize the likelihood that the public good will be provided. Domestic politics is significant for explaining all aspects of international cooperation. The likelihood that governments engage in international cooperation depends not only on international factors but is also, and sometimes predominantly, driven by the demands of societal groups and variations in institutional structures across countries.
Domestic factors can explain how governments behave in distributive negotiations, whether they can achieve advantageous deals, and whether negotiations succeed in producing international collective action. They also contribute to our understanding of whether and how governments comply with international agreements, and consequently, how the design of international institutions affects government compliance. More recently, scholars have become interested in the democratic responsiveness of governments when they cooperate at the international level. While research is still sparse, emerging evidence points to responsive conduct of governments, particularly when international cooperation is politicized at the national level.

Article Benjamin Ferland and Matt Golder

One common way to think about citizen representation is in terms of the ideological distance between citizens and their representatives. Are political elites ideologically congruent with citizen preferences? Electoral systems are an especially important political institution to consider when studying citizen representation because they influence the size and ideological composition of party systems, how votes are translated into legislative seats, the types of governments that form after elections, and the types of policies that get implemented. In effect, electoral institutions affect each stage of the representation process as one moves from citizen preferences to policy outcomes. Research on ideological congruence indicates that electoral rules can cause distortions in citizen-elite congruence to emerge and disappear as one moves through the representation process.
In this regard, studies show that proportional electoral systems enjoy a representational advantage over majoritarian systems when it comes to legislative congruence (the ideological distance between the median legislative party and the median citizen) but that this advantage disappears when it comes to government congruence (the ideological distance between the government and the median citizen). Although research on citizen-elite ideological congruence has made significant progress over the last two decades, several new lines of inquiry are still worth pursuing. One is to move beyond the traditional focus on the left–right ideological dimension to evaluate citizen representation in a truly multidimensional framework. Another is to develop a unified theoretical framework for thinking about ideological congruence and ideological responsiveness. For too long, scholars have conducted studies of citizen-elite congruence and responsiveness in relative isolation, even though they address fundamentally related issues. In terms of measurement issues, progress can be made by developing better instruments to help locate citizens and elites on a common metric and paying more attention to the policymaking dynamics associated with minority and coalition governments. Existing studies of ideological congruence focus on the United States and the parliamentary democracies of Western Europe. Scholars might fruitfully extend the study of citizen representation to presidential democracies, other regions of the world, and even authoritarian regimes. Among other things, this may require that scholars think about how to conceptualize and measure citizen representation in countries where parties are not programmatic or where elites are not necessarily elected.

Article Sarah Poggione

Initial research at the state level argued that there was little relationship between citizen preferences and policy. Later work successfully contested this view.
First using state demographics or party voting as proxies for state opinion and then later developing measures of state ideology and measures of issue-specific state opinion, scholars found evidence that state policy is responsive to public preferences. However, lesbian, gay, bisexual, and transgender (LGBT) policies are often recognized as distinct from other policy areas like economic, welfare, and regulatory issues. Scholars note that LGBT policies, due to their high saliency and relative simplicity, promote greater public input. Research on LGBT policies demonstrates the effects of both ideology and issue-specific opinion, exploring how the linkage between opinion and policy differs across more and less salient policy areas. This work also examines how political institutions and processes shape democratic responsiveness on LGBT issues. Recent research also considers how LGBT policies shape public opinion. While these strands in the literature are critical to understanding LGBT politics in the United States, they also contribute to the understanding of the quality of democratic governance in the U.S. federal system and the mechanics of the linkage between public opinion and policy.

Article Christopher Wlezien and Stuart N. Soroka

The link between public opinion and public policy is of special importance in representative democracies, as we expect elected officials to care about what voters think. Not surprisingly, a large body of literature tests whether policy is a function of public preferences. Some literature also considers the mechanisms by which preferences are converted to policy. Yet other work explores whether and how the magnitude of opinion representation varies systematically across issues and political institutions. In all this research, public opinion is an independent variable—an important driver of public policy change—but it is also a dependent variable, one that is a consequence of policy itself.
Indeed, the ongoing existence of both policy representation and public responsiveness is critical to the functioning of representative democracy.
https://oxfordre.com/politics/search?btog=chap&f_0=keyword&q_0=responsiveness
4 states to have assembly elections, Lok Sabha polls simultaneously

New Delhi, Mar 10: Elections for Andhra Pradesh, Arunachal Pradesh, Odisha and Sikkim will be held simultaneously with the LS polls, the Election Commission announced on Sunday. "The EC has decided to hold elections for these four states simultaneously with the Lok Sabha elections. The elections for these four states will follow the same schedule as the elections for the parliamentary constituencies in these states," said CEC Sunil Arora. Three of these states are ruled by non-BJP parties, where the BJP has been trying to carve out a niche. The elections for the 15th Legislative Assembly of Andhra Pradesh will be held in a single phase on April 11. In Andhra Pradesh, this will technically be the first parliamentary election since the bifurcation of Telangana. The state will vote to elect members to the 176-member Vidhan Sabha; 175 members are elected, and one member is nominated to the legislative assembly of the state.

Odisha Assembly elections from April 11 to April 29

Assembly elections in Odisha will be conducted between April 11 and April 29, the Election Commission announced on Sunday in New Delhi. Odisha has 147 assembly constituencies that will go to the polls in four phases starting April 11, held along with the Lok Sabha elections. Counting of votes for the Odisha assembly elections will also take place on May 23, along with the Lok Sabha elections. The term of the Odisha Assembly ends on June 11. In Odisha, Chief Minister Naveen Patnaik, who has been ruling the state since 2000, will make another bid for power.

Sikkim Assembly election on April 11

Voting in the northeastern state of Sikkim is set to take place on April 11. Sikkim, ruled by the Pawan Chamling-led Sikkim Democratic Front, has 32 assembly seats. Despite being part of the BJP-led Northeastern Democratic Alliance (NEDA), the SDF has decided to go it alone in the elections.
Arunachal Pradesh to vote on April 11

Assembly elections will be held in Arunachal Pradesh on April 11. Arunachal Pradesh is currently ruled by the Bharatiya Janata Party. The state assembly has 60 seats, of which 48 are held by the BJP; the National People's Party has 5 seats, the Congress has 5, and two seats are with independent MLAs.
All relevant data are within the paper.

Introduction {#sec001}
============

The search for a treatment for ischemic stroke has met many challenges. With stroke currently the fifth leading cause of death in the United States, the second globally, and a leading cause of serious long-term disability \[[@pone.0183909.ref001],[@pone.0183909.ref002]\], there is a need for the development of a rapid and long-lasting therapeutic that can benefit a large population of stroke patients. The difficulty of this task lies partly in the fact that the stroke population is heterogeneous, and many patients do not receive medical care before ischemia has damaged the brain. Our lab has previously reported that a sensory stimulation-based collateral therapeutic can completely protect rats from impending ischemic damage if delivered within the first 2 hours after permanent middle cerebral artery occlusion (pMCAo) \[[@pone.0183909.ref003]\]. This type of treatment, if relevant to humans, is promising because it is noninvasive, nonpharmacological, and requires no special equipment. Earlier work from our lab has shown that in a rat model, the pial collateral vessels that anastomose with the distal MCA branches are critical to reperfusion of the occluded MCA and protection of ischemic cortex. When these distal branches were occluded in addition to the standard MCA occlusion at the M1 segment, rats were not protected despite having received immediate whisker stimulation \[[@pone.0183909.ref003]\]. Therefore, retrograde reperfusion of the MCA occurs through these existing patent collaterals, ensuring that the viability of neurons in the threatened region is maintained. Additionally, this protection has been observed under both pentobarbital and isoflurane anesthesia \[[@pone.0183909.ref004]\], and in awake, behaving subjects \[[@pone.0183909.ref005]\].
Since an important component underlying cortical protection by sensory stimulation has been elucidated (collateral blood flow), investigation of the underlying molecular mechanisms of protection was a logical next step. The use of mice as animal models has flourished in recent decades due to the availability of genetic manipulations that enable the dissection of molecular mechanisms in models of disease. The C57BL/6J strain, widely used in stroke research, is known to have numerous pial collaterals \[[@pone.0183909.ref006],[@pone.0183909.ref007],[@pone.0183909.ref008]\], and in fact was shown to have higher numbers of collaterals and larger collateral vessel diameters than 14 other mouse strains \[[@pone.0183909.ref009]\]. Additionally, Chalothorn et al. \[[@pone.0183909.ref010]\] have shown that CD1 mice, a slightly larger mouse strain, also have functioning collaterals. Importantly, there is increasing evidence that poor pial collateral vessel flow in humans is a critical predictor of stroke severity, as it has been linked to poor outcome even in the event of recanalization \[[@pone.0183909.ref011]--[@pone.0183909.ref016]\]. Thus, given the importance of pial collaterals for the effectiveness of our treatment, as well as for clinical outcome, our main goal was to test whether immediate delivery of the collateral-based sensory stimulation treatment used in our lab for rats could also protect the C57BL/6J and CD1 mouse strains from impending ischemic stroke damage. If protected, these mice would open new research avenues for exploring the mechanisms underlying the protection we observe.

Materials and methods {#sec002}
=====================

All procedures were in compliance with NIH guidelines and approved by the UC Irvine Animal Care and Use Committee (protocol \#: 1997--1608, assurance ID\#: A3416.01), and in compliance with the ARRIVE guidelines.
Subjects and surgical preparation {#sec003}
---------------------------------

Twenty-five experimental subjects, 25--30 g, 10--12 week old male C57BL/6J mice (Jackson Laboratories, Bar Harbor, ME, USA), and twenty-four experimental subjects, 30--40 g, 10--12 week old male CD1 mice (Charles River Laboratories, Wilmington, MA, USA), were individually housed in standard cages. At the beginning of each experiment, subjects were injected intraperitoneally with a Nembutal bolus (50 mg/kg b.w.). Supplemental injections of Nembutal (27.5 mg/kg b.w.) were given as necessary. After resection of soft tissue, the parietal bone was thinned to \~150 μm using a dental drill to create an 'imaging' area in the skull over the left primary somatosensory cortex. 5% dextrose (0.3 mL) and atropine (0.05 mg/kg b.w.) were administered at the beginning of the experiment and every six hours thereafter until the animal was returned to its home cage. Body temperature was measured via a rectal probe and maintained at 37° Celsius by a self-regulating thermal blanket. Auto clipping of the skin above the imaging window was performed 5 hours after pMCAo for all experimental groups. Animals were returned to their home cage and allowed to recover overnight prior to all +24 hour experimentation.

Overview {#sec004}
--------

Using a within-subject design identical to our previous studies, 25 C57BL/6J mice and 24 CD1 mice were randomly assigned to a +0h group (+0h stands for immediate, or zero hours following pMCAo), a no-stimulation control group, or a surgical sham group. Baseline functional imaging (ISOI) and blood flow imaging (LSI) were collected for all subjects at the beginning of surgery. All +0h subjects (n = 8, C57BL/6J; n = 8, CD1) then received a pMCAo and immediate post-occlusion single whisker stimulation. Pre- and post-occlusion whisker stimulation consisted of 1 s of 5 Hz deflections of a single whisker (whisker C2).
Post-occlusion, this stimulation treatment was intermittently (with random intervals averaging 21 seconds) delivered 256 times, totaling 4.27 minutes of stimulation over the course of 2 hours \[[@pone.0183909.ref003]\]. No-stimulation controls (n = 8, C57BL/6J; n = 8, CD1) underwent identical pMCAo to that of +0h subjects, but never received whisker stimulation; pMCAo was immediately followed by a 5-hour no-stimulation period. Surgical shams (n = 7, C57BL/6J; n = 8, CD1) underwent identical surgery to that of +0h subjects, with the suture needle and thread passing under the MCA, but sutures were not tied around the MCA, leaving the blood vessel intact. Sham surgery was immediately followed by whisker stimulation. +0h and sham subjects remained anesthetized throughout the 2-hour stimulation period, while no-stimulation controls remained anesthetized throughout the 5-hour no-stimulation period. Experiments were designed so animals were randomly assigned to one of the three groups after pMCAo. Additionally, two +0h subjects received full whisker array stimulation treatment according to the same stimulation paradigm. After whisker stimulation or quiet period, all mice were placed back in their home cage for recovery until their follow-up assessment at 24 hours post-pMCAo, which consisted of ISOI and LSI (animals receiving full whisker array stimulation only underwent ISOI). At the end of pMCAO surgery, mice received subcutaneous ampicillin antibiotic injections (100 mg/kg), and Flunixin meglumine analgesic was injected subcutaneously (2 mg/kg). The closed wound was covered with topical antibiotic, and mice were monitored while recovering from anesthesia. At the end of 24 hour imaging, mice were then euthanized with sodium pentobarbital (50 mg/kg, intraperitoneally), and the brains were harvested for histological assessment. All analysis was conducted under blinded conditions. 
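The stimulation arithmetic described above can be checked directly: 256 deliveries of a 1 s, 5 Hz stimulus give about 4.27 minutes of total stimulation. This is a quick sanity-check sketch, not part of the published analysis pipeline; the variable names are my own.

```python
# Sanity check of the stimulation paradigm: 256 intermittent deliveries,
# each a 1 s train of 5 Hz single-whisker deflections.

N_DELIVERIES = 256
STIM_DURATION_S = 1.0      # each delivery lasts 1 second
STIM_RATE_HZ = 5           # deflection rate within each delivery

total_stim_s = N_DELIVERIES * STIM_DURATION_S
total_stim_min = total_stim_s / 60.0                       # ~4.27 min, as reported
total_deflections = int(total_stim_s * STIM_RATE_HZ)       # 1280 whisker deflections
```

With random inter-delivery intervals averaging 21 s, the 256 deliveries are spread over roughly the 2-hour post-occlusion treatment window, while the whisker is actually moving for only about 4.27 minutes of that time.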
Histology (2,3,5-triphenyltetrazolium chloride staining for infarct) {#sec005}
--------------------------------------------------------------------

At the conclusion of each experiment, mice were euthanized and the brain was removed, sectioned into 2 mm coronal slices, and incubated in 2% 2,3,5-triphenyltetrazolium chloride (TTC) at 7°C for 20 min in the dark \[[@pone.0183909.ref017]\]. The TTC-stained sections were photographed with a digital camera, and images were analyzed using ImageJ software. The total infarct volume was determined by multiplying the infarct area of each slice by the slice thickness and summing across slices. An observer blind to experimental condition performed this volume calculation. A small surgical lesion is occasionally apparent at the immediate site of MCA occlusion. This occurs infrequently and equivalently in all experimental groups. The small amount of damage occasionally produced at the surgical site can be readily distinguished from the large ischemic infarct and was excluded from infarct analysis \[[@pone.0183909.ref018]\].

Permanent middle cerebral artery occlusion (pMCAO) {#sec006}
--------------------------------------------------

Permanent ischemic conditions were modeled after the same procedure in rats \[[@pone.0183909.ref019]\]. The base of the left proximal middle cerebral artery at the M1 segment \[[@pone.0183909.ref018],[@pone.0183909.ref020],[@pone.0183909.ref021]\] is permanently occluded, blocking flow to all MCA cortical branches. This is achieved by careful removal of the skull and dura from a 2x2 mm 'surgical window' placed beyond the bottom left portion of the imaging window, directly over the M1 segment of the MCA, just distal to the MCA's lenticulostriate branches and proximal to any cortical branching. The M1 occlusion therefore entails that only cortical ischemic damage is expected in this occlusion model.
A half-curve reverse cutting suture needle is cut in half and threaded with two 4--0 silk threads and passed through the pial layer of the meninges, below the MCA (the needle is kept above the cortical surface to the extent possible to minimize damage). Then the two threads (moved to \~1 mm apart after being strung beneath the artery) are both tied and tightened around the MCA, and the vessel is transected (completely severed) between the two knots. Care is taken to avoid damaging the artery, and experiments are terminated if there are signs of bleeding from the MCA (1 case).

Intrinsic signal optical imaging (ISOI) and analysis {#sec007}
----------------------------------------------------

A detailed description of ISOI data acquisition and analysis can be found elsewhere \[[@pone.0183909.ref022]--[@pone.0183909.ref024]\]. Briefly, a charge-coupled device (CCD) camera (either a 16-bit Cascade 512F or a 12-bit Quantix 0206; Photometrics, Tucson, AZ, USA) equipped with an inverted 50 mm AF Nikon lens (1:1.8; Melville, NY, USA) combined with an extender (model PK-13, Nikon, Melville, NY, USA) was used for imaging and controlled by V++ Precision Digital Imaging System software (Digital Optics, Auckland, NZ). During each 15-s trial, 1.5 s of prestimulus data followed by 13.5 s of post-stimulus data is collected, with a 6±5 s random inter-trial interval. The stimulus consists of a single whisker (or the full whisker array) being deflected by 9° in the rostral-caudal direction at a rate of 5 Hz for a total stimulus duration of 1 second. The cortex is illuminated with a red light-emitting diode (635 nm maximum wavelength). Data are collected in blocks of 64 stimulation trials, and a sampled time point (for example, pre-pMCAo baseline) is considered complete upon summation of 128 stimulation trials.
Ratio images are created by calculating fractional change (FC) values: each 500 ms frame of post-stimulus signal activity is divided by the 500 ms frame of pre-stimulus intrinsic signal activity collected immediately before stimulus onset. The ratio image containing the maximum areal extent for the intrinsic signal is Gaussian filtered (half width = 5), and the areal extent is quantified at a threshold level of 2.5 x 10^−4^ fractional change away from zero. Peak amplitude is quantified in fractional change units from the pixel with the peak activity within the maximum areal extent. Laser speckle imaging (LSI) and analysis {#sec008} ---------------------------------------- A detailed description of LSI \[[@pone.0183909.ref025],[@pone.0183909.ref026]\] data acquisition and analysis can be found elsewhere \[[@pone.0183909.ref003]\]. Briefly, a 632.8 nm, 15 mW HeNe laser was used as the illumination source over a region of \~25 mm^2^, and collected images were processed as previously described \[[@pone.0183909.ref003]\]. Speckle contrast images were converted to speckle index images by calculating their inverse squares multiplied by the exposure time in seconds, so that larger index values corresponded to faster blood flow. Speckle index images were then averaged to improve the signal-to-noise ratio. To quantify blood flow within the MCA, we calculated the mean value within a region of interest (ROI) in MCA cortical branches, defined according to several criteria described previously \[[@pone.0183909.ref003]\]. All flow index values were scaled over a range where 0 flow was set at noise values. At the completion of 24 hours of imaging, subjects were sacrificed with Nembutal, and dead-animal (noise) values were collected ten minutes after cessation of the heartbeat; these dead values were subtracted from all baseline and 24-hour values.
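The two per-pixel conversions described above, the fractional-change ratio images for ISOI and the contrast-to-index conversion for LSI, can be sketched as follows. This is an illustrative sketch, not the authors' pipeline (which used V++ and custom analysis code); the flat-list frame representation and the exposure time are assumptions.

```python
# Minimal sketch (not the authors' code) of the per-pixel conversions
# described above. Frames are flat lists of pixel values; real data would
# be 2-D camera frames, and the exposure time below is an assumed value.

EXPOSURE_S = 0.006  # assumed camera exposure time in seconds

def fractional_change(post_frame, pre_frame):
    """Ratio-image value per pixel: post/pre - 1, the fractional change of
    post-stimulus signal relative to the pre-stimulus frame."""
    return [post / pre - 1.0 for post, pre in zip(post_frame, pre_frame)]

def areal_extent(fc_frame, thresh=2.5e-4):
    """Count pixels whose fractional change exceeds the 2.5 x 10^-4
    threshold (away from zero) used to quantify areal extent."""
    return sum(1 for v in fc_frame if abs(v) >= thresh)

def speckle_index(contrast_frame, exposure_s=EXPOSURE_S):
    """Convert speckle contrast K to a flow index (1 / K^2) * T, so that
    larger values correspond to faster blood flow."""
    return [exposure_s / (k * k) for k in contrast_frame]
```

The `1/K^2` scaling is why lower speckle contrast (more blurring of the speckle pattern by moving red blood cells) maps to a larger flow index.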
Statistical analysis {#sec009} -------------------- For imaging data, ANOVAs were run on baseline values in all experimental groups to ensure no significant differences before pMCAo. Because there were no responses to quantify at 24 hours, post-pMCAo evoked area and amplitude were converted to difference score values (post-occlusion minus baseline), with values away from 0 signifying a change from baseline. A constant was added to the difference values, which were then transformed with a square root function to better satisfy the assumptions of ANOVA, and inferential statistics were performed on the transformed data. For speckle imaging, average baseline measurements (in arbitrary units) were normalized to 100%, and percent change from baseline was calculated for all measurements after pMCAo. For all experimental groups, following ANOVA, specific contrasts were calculated to identify which groups differed from baseline. The alpha level was set to 0.05, and Bonferroni adjustments were applied to account for multiple contrasts. Infarct volume comparisons were also made using ANOVA. All plotting and statistics were performed using SYSTAT 11 (SYSTAT Software Inc., Chicago, IL, USA). Results {#sec010} ======= Treatment does not protect cortical activity in C57BL/6J or CD1 mice {#sec011} -------------------------------------------------------------------- Before pMCAO, there were no significant differences in the area or amplitude of the whisker functional representation (WFR) between the three C57BL/6J groups, or between the three CD1 groups (n = 8 for all groups). However, at 24 hours post-pMCAo there was a significant difference between C57BL/6J groups for both area (F~2,20~ = 6.26, P \< 0.01, ANOVA) and amplitude (F~2,20~ = 7.46, P \< 0.001, ANOVA). Post-hoc Tukey's HSD tests showed that the area and amplitude for surgical controls (n = 7) were significantly different from the other two groups at 24 hours.
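The pre-ANOVA transformations described in the Statistical analysis section above can be sketched as follows; the value of the additive constant is an assumption, since the text does not state it.

```python
# Sketch of the pre-ANOVA transformations: imaging measures become
# difference scores (post-occlusion minus baseline), shifted by a constant
# and square-root transformed; speckle values are expressed as percent of
# baseline. The SHIFT value here is an assumption; in practice it would be
# chosen large enough to keep the square-root argument non-negative.

import math

SHIFT = 10.0  # assumed constant; the text says only "a constant was added"

def transformed_difference(post, baseline, shift=SHIFT):
    """Square-root-transformed difference score for imaging data."""
    return math.sqrt((post - baseline) + shift)

def percent_of_baseline(value, baseline_mean):
    """Speckle measurement normalized so that baseline averages 100%."""
    return 100.0 * value / baseline_mean
```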
Despite receiving immediate treatment, the treated C57BL/6J mice (n = 8) were equivalent to untreated C57BL/6J subjects (n = 8), with both groups exhibiting a reduction in both area and amplitude at 24 hours compared to baseline. For treated subjects, this reduction was significant for both area and amplitude (area: F~1,20~ = 10.85, P \< 0.005; amplitude: F~1,20~ = 41.54, P \< 0.001), and the same was true for untreated subjects (area: F~1,20~ = 9.96, P \< 0.005; amplitude: F~1,20~ = 59.49, P \< 0.001). Surgical controls, although trending toward an increase in area, did not show a significant change in area or amplitude at 24 hours post-pMCAo (area: F~1,20~ = 0.51, P \> 0.05; amplitude: F~1,20~ = 4.49, P \< 0.05, not significant with Bonferroni correction) (Figs [1A](#pone.0183909.g001){ref-type="fig"} and [2A](#pone.0183909.g002){ref-type="fig"}). ![Treated C57BL/6J and CD1 mice are not protected from ischemic damage.\ A, Experimental schema with ISOI, LSI and TTC representative cases for C57BL/6J (green, blue, and yellow boxes) and CD1 mice (red, purple, and grey boxes): treated (top), untreated (middle), and surgical control (bottom) subjects, before (left) and 24 hours after (right) pMCAo. All subjects had whisker functional representations (WFRs) and blood flow within the MCA at baseline. When assessed at 24 hours post-pMCAo, treated and untreated subjects of both strains lacked WFRs, retrograde blood flow within the MCA was minimal, and TTC staining revealed infarct (right; see arrows). Surgical controls of both strains, however, maintained both WFRs and MCA blood flow, and did not sustain infarct since these subjects never received pMCAo. B, The imaging window (black-rimmed square) is centered over the barrel cortex (black regions), which is fed by major MCA branches (in red); the C2 whisker barrel is highlighted in red.
The smaller representative images of LSI blood flow within the MCA are taken from a portion of a major MCA branch within this imaging window; the location in relation to C2 can vary between animals.](pone.0183909.g001){#pone.0183909.g001} ![WFR quantification for C57BL/6J and CD1 mice.\ A, WFRs were quantified in terms of their area and amplitude at baseline and 24 hours post-pMCAo. WFR quantification for C57BL/6J and CD1 mice shows that treated and untreated subjects had significant reductions in WFRs at 24 hours, while surgical controls maintained cortical activity. No differences were found between treated, untreated, and surgical control subjects across C57BL/6J and CD1 strains, indicating these groups were equivalent at baseline and 24 hours. (NS = not significant, \* = p ≤ 0.05, \*\* = p ≤ 0.01, \*\*\* = p ≤ 0.001). B, Two mice received full whisker array stimulation, and both subjects lacked WFRs at 24 hours; TTC representative case shows infarct at 24 hours (black arrow).](pone.0183909.g002){#pone.0183909.g002} Similar results were found for CD1 mice. At 24 hours post-pMCAo, there was a significant difference between CD1 groups for both area (F~2,21~ = 6.84, P \< 0.01, ANOVA) and amplitude (F~2,21~ = 10.02, P \< 0.001, ANOVA). Post-hoc Tukey's HSD tests showed that the area and amplitude for surgical controls were significantly different from the other two groups at 24 hours; CD1 treated and untreated subjects showed a marked reduction in area and amplitude at 24 hours post-pMCAo. This reduction was significant for both area and amplitude for treated (area: F~2,21~ = 6.21, P \< 0.01; amplitude: F~2,21~ = 13.88, P \< 0.01) and untreated subjects (area: F~2,21~ = 14.02, P \< 0.01; amplitude: F~2,21~ = 15.41, P \< 0.01) (Figs [1A](#pone.0183909.g001){ref-type="fig"} and [2A](#pone.0183909.g002){ref-type="fig"}). Finally, we determined whether there were any differences between the three groups across both C57BL/6J and CD1 strains.
At baseline, we observed no difference among experimental groups or strains for area. For amplitude, there was no difference among experimental groups, but there was for strains (untreated: F~1,41~ = 6.42, P \< 0.02, ANOVA; surgical controls: F~1,41~ = 5.62, P \< 0.02, ANOVA). At 24 hours post-pMCAo, there was a difference among experimental groups for area (F~2,41~ = 25.86, P \< 0.01, ANOVA) and amplitude (F~2,41~ = 49.31, P \< 0.001, ANOVA), since surgical controls maintained baseline levels of evoked activity but treated and untreated controls had significant reductions. There was no difference among strains, however, for either area or amplitude ([Fig 2A](#pone.0183909.g002){ref-type="fig"}). Full whisker array stimulation was tested in two C57BL/6J mice and, similar to subjects receiving single whisker stimulation, neither subject was protected according to ISOI and TTC analysis ([Fig 2B](#pone.0183909.g002){ref-type="fig"}). Mice exhibit minimal collateral-based blood flow to the MCA post-occlusion {#sec012} -------------------------------------------------------------------------- C57BL/6J and CD1 mice underwent laser speckle imaging to assess whether there was retrograde blood flow in the MCA 24 hours post-occlusion. While subjects from all C57BL/6J groups had similar blood flow at baseline, there was a significant difference between groups at 24 hours post-pMCAo (F~2,20~ = 14.53, P \< 0.001, ANOVA). Post-hoc Tukey's HSD test showed that blood flow in surgical controls was significantly different from the other two groups at 24 hours. After pMCAo, both treated and untreated subjects showed a severe reduction in blood flow within the MCA, indicating minimal collateral-based blood flow. Treated (n = 8; F~1,20~ = 22.31, P \< 0.001) and untreated (n = 8; F~1,20~ = 22.24, P \< 0.001) C57BL/6J mice had significantly lower values compared to surgical controls (n = 7). (Figs [1A, 1B](#pone.0183909.g001){ref-type="fig"} and [3A](#pone.0183909.g003){ref-type="fig"}). 
![Blood flow quantification for C57BL/6J and CD1 mice at baseline and 24 hours post-pMCAo as a percent of baseline flow.\ A, At baseline, C57BL/6J treated, untreated and surgical control groups were equivalent, but at 24 hours post-pMCAO, treated subjects were equivalent to untreated subjects and had significantly reduced blood flow within the MCA. Surgical controls maintained baseline levels of flow at 24 hours. B, Similar to C57BL/6J mice, CD1 treated and untreated subjects were equivalent and had significant reductions in blood flow at 24 hours, while surgical controls maintained baseline levels of flow at this time point. (\*\* = p ≤ 0.01, \*\*\* = p ≤ 0.001).](pone.0183909.g003){#pone.0183909.g003} Similar to the C57BL/6J mice, there was a significant difference in blood flow between CD1 groups at 24 hours post-occlusion (F~2,21~ = 8.62, P \< 0.005, ANOVA), and post-hoc Tukey's HSD test showed that the surgical controls (n = 8) were significantly different from treated and untreated subjects. Treated (n = 8; F~1,21~ = 12.65, P \< 0.005) and untreated (n = 8; F~1,21~ = 13.21, P \< 0.005) groups both had major reductions in MCA blood flow at 24 hours post-pMCAo (Figs [1](#pone.0183909.g001){ref-type="fig"} and [3B](#pone.0183909.g003){ref-type="fig"}). Histology revealed that mice sustain ischemic damage despite receiving immediate treatment {#sec013} ------------------------------------------------------------------------------------------ Ischemic damage was assessed with TTC staining and revealed that immediate treatment did not prevent infarct in C57BL/6J or CD1 mice (Figs [1A](#pone.0183909.g001){ref-type="fig"} and [4](#pone.0183909.g004){ref-type="fig"}). 
There was a significant difference in infarct size between C57BL/6J groups (treated: n = 8; untreated: n = 8; surgical controls: n = 7, no infarcts, though occasionally with evidence of surgical damage of \<1 mm^3^; F~2,20~ = 18.45, P \< 0.001, ANOVA), and post-hoc Tukey's test showed the significant difference lay only between surgical controls and the other two C57BL/6J groups. Infarct volumes for treated and untreated subjects were 6.93 ± 1.23% and 6.94 ± 0.52% of the ipsi-ischemic hemisphere, respectively. Thus, treated and untreated C57BL/6J groups sustained infarcts of equivalent size, while the surgical controls had no ischemic damage since they did not receive pMCAo. ![TTC staining revealed no protective effect of treatment for either C57BL/6J or CD1 subjects.\ Infarct quantification for C57BL/6J and CD1 treated, untreated and surgical control groups. Despite receiving immediate treatment, treated subjects sustained infarct equivalent in size to untreated subjects. Surgical controls were significantly different from treated and untreated subjects as they did not sustain ischemic damage, but occasionally subjects would have evidence of surgical damage that was less than 1 mm^3^. Additionally, C57BL/6J and CD1 groups were equivalent for infarct volume. (NS = not significant, \*\*\* = p ≤ 0.001).](pone.0183909.g004){#pone.0183909.g004} A significant difference was observed between CD1 groups (F~2,19~ = 37, P \< 0.001, ANOVA); post-hoc Tukey's test showed that CD1 surgical controls (n = 8) were significantly different from treated (n = 8) and untreated control groups (n = 6). When represented as a percentage of the ipsi-ischemic hemisphere, treated, untreated and control subjects had infarcts of 9.36 ± 1.02%, 8.37 ± 1.12%, and 0.48 ± 0.20%, respectively.
Finally, when comparing treated, untreated and surgical control groups for both C57BL/6J and CD1 strains, there was an effect of group, since surgical controls of neither strain sustained infarct (F~2,39~ = 45.70, P \< 0.001, ANOVA); importantly, however, there was no difference in infarct size between corresponding groups of the two strains. Discussion {#sec014} ========== This study assessed whether our previous findings of complete protection from impending ischemic stroke damage in rats could be replicated in an additional species. Mice were chosen because, had the findings been replicated, this would have allowed us to begin dissecting the molecular mechanisms underlying this treatment. Although both C57BL/6J and CD1 mice are known to have functioning collaterals, we did not observe protection from ischemic damage in either strain with this collateral-based single-whisker stimulation treatment. Additionally, subjects that received full-whisker array stimulation treatment were not protected from ischemia either. Despite receiving whisker stimulation treatment immediately post-occlusion, treated and untreated subjects from both strains exhibited significant reductions in cortical activity and blood flow 24 hours post-pMCAo, while C57BL/6J surgical controls confirmed that the surgical manipulations themselves (aside from pMCAo) were not responsible for the impairments observed. Ischemic damage was assessed histologically with TTC staining, confirming the presence of infarct in untreated and treated subjects in both mouse strains. Notably, the presence of the stimulation treatment did not change the outcome. It is helpful to think about infarct size in these mice in terms of percent infarct of the ipsi-ischemic hemisphere so comparisons can be made to our previous studies in rats.
The percentages presented here in mice fall within the range of what we have observed in our rat studies (3--12% of hemisphere) \[[@pone.0183909.ref019]\], and they are also comparable to what is observed in humans (4.5--14% of hemisphere) \[[@pone.0183909.ref027]\]. The minimal retrograde MCA blood flow observed here suggests that the main potential problem may be that pial collaterals in these mice are not recruited in the same manner as in rats, which do show protection when receiving immediate treatment \[[@pone.0183909.ref028]\]. In these mouse strains, the significantly reduced blood flow at 24 hours post-pMCAo may not be sufficient to protect cortex from impending ischemic damage. It is important to keep in mind that laser speckle imaging captures blood flow only in the surface vasculature, limiting our ability to determine the role of any additional factors. Thus, in addition to the possibility of impaired pial collateral function, we can postulate that the lack of sufficient blood flow and protection from ischemic stroke could be due to several factors, such as an incomplete circle of Willis, impaired neurovascular coupling and functional hyperemia, and/or impaired function of penetrating arterioles. These will be discussed further below. Although pial collaterals have been shown to be critical to reducing infarct size, the circle of Willis in C57BL/6J mice is known to be highly variable in terms of the extent of its primary collateralization \[[@pone.0183909.ref029]\], with many mice lacking either one or both posterior communicating arteries, leaving about 10% with a complete circle of Willis \[[@pone.0183909.ref030]\]. It is important to note here that this percentage is much smaller than the percentage of humans with a complete circle of Willis, and that rats are similar to humans in this respect \[[@pone.0183909.ref021],[@pone.0183909.ref031]\].
Thus, the reduced redundancy in blood flow at this early level of cerebral vascularization in these mice may set the stage for a decreased ability to sustain sufficient pial collateral flow after an ischemic event. Additionally, C57BL/6J mice are known to have fewer pial collaterals between the posterior cerebral artery (PCA) and MCA compared to collaterals interconnecting the anterior cerebral artery (ACA) and MCA trees \[[@pone.0183909.ref009]\]. It is not clear from our previous work in rats whether collaterals from the ACA or PCA trees are involved to an equal extent or whether one is more involved than the other in protection from ischemic damage. If collaterals from the PCA are recruited to a larger extent than those from the ACA, this could help explain the lack of protection in mice; however, further work is necessary to determine the involvement of collaterals from different arterial trees. Despite this deficit, there is evidence that these mice do have some pial collateral flow after MCA occlusion. As Zhang et al. \[[@pone.0183909.ref009]\] showed, C57BL/6J mice had large numbers of pial collaterals and larger collateral vessel diameters compared to fourteen other mouse strains, and this was associated with smaller infarcts than in other strains. Similarly, Li and Murphy \[[@pone.0183909.ref032]\] observed spontaneous retrograde flow in pial collaterals of C57BL/6J mice from the ACA tree 19 minutes after a temporary filament occlusion of the MCA; however, this flow decreased closer to the core of the MCA tree, and this damaged region did not recover after reperfusion when the occlusion was removed. Thus, despite the presence of spontaneous reperfusion via pial collaterals in both of these studies, it was not sufficient to completely protect from ischemic damage under their conditions. Additionally, Cristofaro et al.
\[[@pone.0183909.ref033]\] found that transgenic CD1 mice that had increased density of pial collaterals also sustained ischemic damage, as not all collaterals were functional. It becomes clear from these results, along with our data, that the impact of collateral flow on infarct size, or complete protection from ischemic damage in the case of our rat studies, relies not just on collateral vessel numbers but importantly on vessel functionality. Another potential explanation for the lack of protection is that the mechanisms of neurovascular coupling that mediate the functional hyperemia response may differ between rats and mice. Spontaneous reperfusion occurs as a result of the pressure drop after MCA occlusion, which leads to the pial collateral anastomoses dilating to allow retrograde flow from the ACA and PCA \[[@pone.0183909.ref009],[@pone.0183909.ref016],[@pone.0183909.ref034]\], thereby potentially reducing infarct size but not resulting in complete protection from damage. In contrast, functional hyperemia during evoked cortical activity results in dilation of the local vasculature in order to meet the increased energy demands of the tissue. Sensory stimulation treatment takes advantage of functional hyperemia, resulting in enhanced collateral flow. Although we have shown that enhanced collateral flow resulted in complete protection from ischemic stroke in rats \[[@pone.0183909.ref003]--[@pone.0183909.ref005],[@pone.0183909.ref028],[@pone.0183909.ref035]--[@pone.0183909.ref036]\], this enhanced flow was not observed in mice. Neurovascular coupling is known to be impaired to varying degrees under ischemic conditions, and targeting these coupling mechanisms has been suggested as a strategy for reducing damage that may succeed in translation to humans \[[@pone.0183909.ref037]\]. Thus, it would not be surprising if an impaired coupling response were responsible for the lack of protection in mice.
Further studies are necessary to determine to what extent functional hyperemia and pial collateral dilation may be impaired during ischemia in order to understand the inability of the vasculature in these mice to respond to our collateral-based sensory stimulation treatment in the same manner as in rats. Related to the idea of impaired neurovascular coupling and functional hyperemia, it is also important to consider that the penetrating arterioles may not have dilated sufficiently to feed the capillaries, and thus the parenchyma \[[@pone.0183909.ref038],[@pone.0183909.ref039]\]. Penetrating arterioles can be thought of as bottlenecks, and their responsiveness has been shown to ultimately regulate the rescue of penumbral tissue, since neighboring penetrating arterioles are not interconnected and there is no blood flow between them. Additionally, their dilation is associated with dilation of upstream pial vessels \[[@pone.0183909.ref040]\] and, in fact, the dilation initiated by active neurons can be propagated retrogradely to pial arterioles \[[@pone.0183909.ref041],[@pone.0183909.ref042]\]. In C57BL/6J mice, Baran et al. \[[@pone.0183909.ref038]\] showed that penetrating arterioles connected to the MCA that are close to numerous pial collaterals between the MCA and ACA dilate, whereas penetrating arterioles farther away from collaterals constrict. They concluded that for penetrating arterioles to dilate and support protection of the tissue from ischemia, there must be blood flow in the pial collaterals through the anastomoses between the MCA and ACA. The coupling of the dilation of pial collaterals and penetrating arterioles after ischemic stroke is clearly a complex process. In our model, it is possible that the evoked cortical activity from treatment did not result in adequate dilation and retrograde blood flow through the pial collaterals, leading to impaired dilation of penetrating arterioles, further compounding the issue and resulting in infarct.
Our LSI results would also support this interpretation, since minimal blood flow would be observed in the MCA if penetrating arterioles were not adequately dilated. In future studies, it would be important to assess perfusion in the penetrating arterioles, perhaps with functional ultrasound \[[@pone.0183909.ref043]\], in order to dissect their role in protection from, versus progression of, damage. The difference in outcome between rats and mice raises the question of which is the better model for humans. We believe that both can be relevant models and can represent different stroke patient populations. Significant differences in the amount and functionality of cerebral collateralization are documented in humans \[[@pone.0183909.ref009],[@pone.0183909.ref012],[@pone.0183909.ref014],[@pone.0183909.ref044],[@pone.0183909.ref045]\]. Thus, our rat studies may represent humans who have well-developed functional pial collaterals and less impaired neurovascular coupling, while results from the mice may indicate the potential outcome for stroke patients who lack functioning collaterals, even in the absence of comorbidities. If relevant to humans, the collateral-based sensory stimulation treatment described by our lab may be a promising treatment for some populations of ischemic stroke patients. In terms of translation of this treatment to humans, the whisker representation comprises a large portion of somatosensory cortex in rodents; the equivalent regions in humans represent the fingers and mouth. These cortical regions in both rodents and humans are fed by the MCA; thus, ischemic stroke within the MCA in humans could potentially be treated with stimulation of the hands or mouth. However, determination of the extent of cerebral collateralization in patients, along with the location of the stroke, should be routinely performed prior to administration of this treatment.
This would not only confirm that the stroke has occurred in regions of the brain that can be treated with this mode of stimulation, but could also indicate which patients may show a beneficial response to our collateral-based therapeutic treatment. We would like to thank Dr. Melissa Davis for her technical input regarding mouse pMCAo, and Peggy Galvez for her assistance with imaging and histological analysis. [^1]: **Competing Interests:** The authors have declared that no competing interests exist.
Technical Field
Background Art
Best Mode for Carrying Out the Invention
Embodiments
Productive Example 1: Methyl 3-amino-5-hydroxymethylbenzoate:
Productive Example 2: Methyl 5-hydroxymethyl-3-iodobenzoate:
Productive Example 3: Dihydroxy-(3-cyanophenyl)borane:
Productive Example 4: Methyl 3-(3-cyanophenyl)-5-(hydroxymethyl)benzoate:
Productive Example 11: Methyl 3-(3-cyanophenyl)-5-(bromomethyl)benzoate:
Productive Example 12: Methyl 3-(3-cyanophenyl)-5-(aminomethyl)benzoate:
Productive Example 13: Methyl 3-(3-cyanophenyl)-5-[((N-t-butoxycarbonyl)piperidin-4-ylmethyl)aminomethyl]benzoate:
Productive Example 15: Methyl 3-(3-cyanophenyl)-5-[((N-t-butoxycarbonyl)piperidin-4-carbonyl)aminomethyl]benzoate:
Productive Example 23:
Productive Example 28: Methyl 3-(3-cyanophenyl)-5-[N-[(N-t-butoxycarbonyl)piperidin-4-ylmethyl]-N-acetylaminomethyl]benzoate:
Productive Example 33: Methyl 3-(3-cyanophenyl)-5-[N-[(N-t-butoxycarbonylpiperidin-4-yl)methyl]-N-trifluoroacetylaminomethyl]benzoate:
Productive Example 34: Methyl 3-(3-cyanophenyl)-5-[2-[(4-t-butoxycarbonyl)piperazin-1-yl]ethoxymethyl]benzoate:
Productive Example 35: Methyl 3-(3-cyanophenyl)-5-[(1-acetylpiperidin-4-yl)methoxymethyl]benzoate:
Productive Example 36: Methyl 3-(3-cyanophenyl)-5-((1-(t-butoxycarbonylmethyl)-4-piperidyl)methoxymethyl)benzoate:
Productive Example 37: Methyl 3-(3-cyanophenyl)-5-((2,2,2-trifluoro-N-((1-(2-hydroxyethyl)(4-piperidyl))methyl)acetylamino)methyl)benzoate:
Productive Example 38: 3-(3-cyanophenyl)-5-[2-(N-t-butoxycarbonylpiperidin-4-yl)methoxymethyl]benzoic acid:
Productive Example 39: 3-(3-cyanophenyl)-5-[2-(1-t-butoxycarbonylpiperidin-4-yl)methoxymethyl]benzoic acid dimethylamide:
Productive Example 40: 1-acetyl-3-(3-cyanophenyl)-5-[2-(N-t-butoxycarbonylpiperidin-4-yl)methoxymethyl]benzene:
Example 1 Methyl 3-(3-amidinophenyl)-5-[(4-piperidinyl)methoxymethyl]benzoate·salt:
Example 41 Methyl 3-(3-amidinophenyl)-5-[(1-acetoimidoyl-4-piperidinyl)methoxymethyl]benzoate·salt:
Example 61 3-(3-amidinophenyl)-5-[(1-acetoimidoyl-4-piperidinyl)methoxymethyl]benzoic acid·hydrochloride:
Example 69 3-(3-amidinophenyl)-5-[[(4-piperidyl)methyl]aminomethyl]phenylcarbonylaminoacetic acid·salt:
Experiment 1
(1) Determination of inhibiting activity against activated blood coagulation factor X (FXa):
(2) Determination of thrombin inhibiting activity:
(3) Determination of anticoagulation activity (APTT):
(4) Determination of acetylcholine esterase (AChE) inhibiting activity:
(5) Determination of bioavailability (BA):
The present invention relates to novel and selective inhibitors of activated blood coagulation factor Xa (hereafter, "FXa") of the general formula (I). Anticoagulation therapy plays an important part in the medical treatment and prophylaxis of thromboembolisms such as myocardial infarction, cerebral thrombosis, thrombosis of the peripheral arteries, and thrombosis of the deep veins. In particular, for the prophylaxis of chronic thrombosis, safe and appropriate oral anticoagulants that can be administered over a long period of time are desired. To date, however, the only such anticoagulant is the warfarin potassium agent, whose degree of anticoagulation is difficult to control, so a need remains for anticoagulants that are easy to use. Although antithrombin agents have been developed as anticoagulants in the past, these agents, for example hirudin, are known to carry a risk of a bleeding tendency as a side effect. It has begun to be understood that inhibition of FXa, located upstream of thrombin in the blood coagulation cascade, is systematically more effective than inhibition of thrombin, and that FXa inhibitors do not cause the significant side effect described above and are therefore clinically preferable. Biphenylamidine compounds exhibiting FXa inhibitory activity were disclosed in The 17th Symposium on Medicinal Chemistry, The 6th Annual Meeting of Division of Medicinal Chemistry, Abstracts, 184--185, 1997.
However, compounds of the present invention are novel compounds which differ distinctly in the use of a heteroatom in a linkage between the biphenylamidine structure which may interact with an S1 pocket and the cyclic structure which may interact with an aryl binding site, and in the presence of a substituent such as a carboxyl group on a linker benzene ring. Further, Japanese Unexamined Patent Publication (Kokai) No. 4-264068 discloses biphenylamidine derivatives as cyclic imino-derivatives. However, compounds of the present invention differ in the presence of a bond, through a heteroatom, at a benzyl-position. Therefore, an object of the present invention is to provide a novel compound which may be a FXa inhibitor having a clinical applicability. Disclosure of the invention 1 1 - 8 1 - 8 R is a hydrogen atom, a fluorine atom, a chlorine atom, a bromine atom, a hydroxyl group, an amino group, a nitro group, a C alkyl group, or a C alkoxy group; 1 - 4 L is a direct bond or a C alkylene group; 2 1 - 8 1 - 8 1 - 8 1 - 8 1 - 8 1 - 8 1 - 8 1 - 8 1 - 8 R is a fluorine atom; a chlorine atom; a bromine atom; a hydroxyl group; an amino group; a C alkoxy group; a carboxyl group; a C alkoxycarbonyl group; an aryloxycarbonyl group; an aralkoxycarbonyl group; a carbamoyl group wherein a nitrogen atom constituting the carbamoyl may be substituted with a mono- or di-C alkyl group or may be a nitrogen atom in an amino acid; a C alkylcarbonyl group; a C alkylsulfenyl group; a C alkylsulfinyl group; a C alkylsulfonyl group; a mono- or di-C alkylamino group; a mono- or di-C alkylaminosulfonyl group; a sulfo group; a phosphono group; a bis(hydroxycarbonyl)methyl group; a bis(alkoxycarbonyl)methyl group; or a 5-tetrazolyl group; 3 1 - 8 1 - 8 1 - 8 R is a hydrogen atom, a fluorine atom, a chlorine atom, a bromine atom, a hydroxyl group, an amino group, a nitro group, a C alkyl group, a C alkoxy group, a carboxyl group, or a C alkoxycarbonyl group; 2 2 2 4 5 5 5 5 X is any of the 
formulae: -O-, -S-, -SO-, -SO-, -NH-CO-NH-, -N(R)-, -CO-N(R)-, -N(R)-CO-, -N(R)-SO-, -SO-N(R)-, wherein 4 1 - 10 1 - 10 1 - 10 3 - 8 R is a hydrogen atom, a C alkyl group, a C alkylcarbonyl group, a C alkylsulfonyl group, a C cycloalkyl group, or an aryl group, 5 4 5 1 - 10 3 - 8 1 - 8 1 - 8 R is a hydrogen atom, a C alkyl group, a C cycloalkyl group, or an aryl group, wherein an alkyl group in the R and R may be substituted with an aryl group, a hydroxyl group, an amino group, a fluorine atom, a chlorine atom, a bromine atom, a C alkoxy group, a carboxyl group, a C alkoxycarbonyl group, an aryloxycarbonyl group, an aralkoxycarbonyl group, a carbamoyl group, or a 5-tetrazolyl group; 4 - 8 4 - 8 1 - 8 1 - 8 1 - 8 Y is a C cycloalkyl group wherein a methylene group in the C cycloalkyl may be replaced with a carbonyl group, or may be substituted with a fluorine atom, a chlorine atom, a bromine atom, a hydroxyl group, an amino group, a C alkyl group, a C alkoxy group, a carbamoyl group, a C alkoxycarbonyl group, a carboxyl group, an aminoalkyl group, a mono- or di-alkylamino group, or a mono- or di-alkylaminoalkyl group; or the following 5 - 8-membered ring of the formulae I-1 or I-2: wherein, in the formulae I-1 and I-2, in each cyclic system, the methylene group may be replaced with a carbonyl group, and the cycle may have unsaturated bonds, 6 1 - 8 1 - 8 R is a hydrogen atom, a fluorine atom, a chlorine atom, a bromine atom, a hydroxyl group, an amino group, a nitro group, a C alkyl group, or a C alkoxy group, W is C-H, or a nitrogen atom, with the proviso that W is not a nitrogen atom when the cycle is 5-membered ring, 1 - 10 1 1 - 8 1 1 - 8 1 - 8 Z is a hydrogen atom; a C alkyl group wherein the alkyl group may be substituted with a hydroxyl group except when Z is a C alkyl, an amino group, a C alkoxy group except when Z is a C alkyl, a carboxyl group, a C alkoxycarbonyl group, an aryloxycarbonyl group or an aralkoxycarbonyl group; a C alkylcarbonyl group; an 
arylcarbonyl group; an aralkylcarbonyl group; an amidino group; or the following group of the formula I-3: wherein, in the formula I-3,

R7 is a C1-8 alkyl group wherein the alkyl group may be substituted with a hydroxyl group or a C1-8 alkoxy group; an aralkyl group; or an aryl group;

m is an integer of 1 - 3;

n is an integer of 0 - 3, with the proviso that W is not a nitrogen atom when n is 0 - 1;

1. A biphenylamidine derivative of general formula (1), wherein the substituents are as defined above, or a pharmaceutically acceptable salt thereof.

R1 is a hydrogen atom, a fluorine atom, a chlorine atom, a bromine atom, a hydroxyl group, an amino group, a C1-4 alkyl group, or a C1-4 alkoxy group;

L is a direct bond or a C1-4 alkylene group;

R2 is a fluorine atom; a chlorine atom; a bromine atom; a hydroxyl group; an amino group; a C1-8 alkoxy group; a carboxyl group; a C1-8 alkoxycarbonyl group; an aryloxycarbonyl group; an aralkoxycarbonyl group; a carbamoyl group wherein a nitrogen atom in the carbamoyl group may be substituted with a mono- or di-C1-8 alkyl group or may be a nitrogen atom in an amino acid; a C1-8 alkylcarbonyl group; a C1-8 alkylsulfenyl group; a C1-8 alkylsulfinyl group; a C1-8 alkylsulfonyl group; a mono- or di-C1-8 alkylamino group; a mono- or di-C1-8 alkylaminosulfonyl group; a sulfo group; a phosphono group; a bis(hydroxycarbonyl)methyl group; a bis(alkoxycarbonyl)methyl group; or a 5-tetrazolyl group;

R3 is a hydrogen atom;

X is any of the formulae: -O-, -S-, -N(R4)-, -CO-N(R5)-, -N(R5)-CO-, -N(R5)-SO2-, or -SO2-N(R5)-; wherein

R4 is a hydrogen atom, a C1-10 alkyl group, a C1-10 alkylcarbonyl group, or a C1-10 alkylsulfonyl group,

R5 is a hydrogen atom, or a C1-10 alkyl group, wherein an alkyl group in the R4 and R5 may be substituted with an aryl group, a hydroxy group, an amino group, a fluorine atom, a chlorine atom, a bromine atom, a C1-8 alkoxy group, a carboxyl group, a C1-8 alkoxycarbonyl group, an
aryloxycarbonyl group, an aralkoxycarbonyl group, a carbamoyl group, or a 5-tetrazolyl group;

Y is a C4-8 cycloalkyl group wherein a methylene group constituting the C4-8 cycloalkyl may be replaced with a carbonyl group, or may be substituted with a fluorine atom, a chlorine atom, a bromine atom, a hydroxyl group, an amino group, a C1-8 alkyl group, a C1-8 alkoxy group, a carbamoyl group, a C1-8 alkoxycarbonyl group, a carboxyl group, an aminoalkyl group, a mono- or di-alkylamino group, or a mono- or di-alkylaminoalkyl group; or the following 5- to 8-membered ring of the formula II-1: wherein, in formula II-1, in the cyclic system, the methylene may be replaced with a carbonyl group,

R6 is a hydrogen atom, a fluorine atom, a chlorine atom, a bromine atom, a hydroxyl group, an amino group, a C1-4 alkyl group, or a C1-4 alkoxy group;

W is C-H or a nitrogen atom, with the proviso that W is not a nitrogen atom when the cycle is a 5-membered ring,

Z is a hydrogen atom; a C1-10 alkyl group wherein the alkyl group may be substituted with a hydroxyl group except when Z is a C1 alkyl, an amino group, a C1-8 alkoxy group except when Z is a C1 alkyl, a carboxyl group, a C1-8 alkoxycarbonyl group, an aryloxycarbonyl group, or an aralkoxycarbonyl group; a C1-8 alkylcarbonyl group; an arylcarbonyl group; an aralkylcarbonyl group; an amidino group; or the following group of the formula II-2: wherein, in formula II-2,

R7 is a C1-8 alkyl group wherein the alkyl group may be substituted with a hydroxyl group or a C1-4 alkoxy group; an aralkyl group; or an aryl group;

m is an integer of 1 - 3;

n is an integer of 0 - 3, with the proviso that W is not a nitrogen atom when n is 0 - 1;

2. A biphenylamidine derivative wherein, in said formula (1), the substituents are as defined above, or a pharmaceutically acceptable salt thereof.
L is a bond or a C1-4 alkylene group;

R2 is a carboxyl group; a C1-4 alkoxycarbonyl group; an aralkoxycarbonyl group; a carbamoyl group wherein a nitrogen atom constituting the carbamoyl group may be substituted with a mono- or di-C1-4 alkyl group or may be a nitrogen atom in an amino acid; or a C1-4 alkylcarbonyl group;

X is -O-, -N(R4)-, or -NH-CO-, wherein

R4 is a hydrogen atom, a C1-10 alkyl group, a C1-10 alkylcarbonyl group or a C1-10 alkylsulfonyl group, the alkyl group being optionally substituted with a hydroxyl group, an amino group, a fluorine atom, a carboxyl group or a C1-8 alkoxycarbonyl group;

Y is a C5-6 cycloalkyl group wherein a methylene group constituting the C5-6 cycloalkyl group may be substituted with a carbamoyl group, a C1-4 alkoxy group or a carboxyl group; or the following 5- to 6-membered ring of the formula III-1: wherein, in formula III-1,

W is C-H or a nitrogen atom, with the proviso that W is not a nitrogen atom when the cycle is a 5-membered ring,

Z is a hydrogen atom; a C1-4 alkyl group wherein the alkyl group may be substituted with a hydroxyl group except when Z is a C1 alkyl, an amino group, a carboxyl group or a C1-4 alkoxycarbonyl group; a C1-4 alkylcarbonyl group; an amidino group; or the following group of the formula III-2: wherein, in formula III-2,

R7 is a C1-4 alkyl group wherein the alkyl group may be substituted with a hydroxyl group;

n is an integer of 0 - 2, with the proviso that W is not a nitrogen atom when n is 0 - 1;

3. A biphenylamidine derivative of general formula (2), wherein the substituents are as defined above, or a pharmaceutically acceptable salt thereof.

X is -O-, or -N(R4)-, wherein

R4 is a hydrogen atom, a C1-10 alkyl group, a C1-10 alkylcarbonyl group or a C1-10 alkylsulfonyl group, the alkyl group being optionally substituted with a hydroxyl group, an amino group, a fluorine atom, a carboxyl group or a C1-8 alkoxycarbonyl group;

4.
A biphenylamidine derivative wherein, in said formula (2), the substituents are as defined above, or a pharmaceutically acceptable salt thereof.

X is -NH-CO-;

5. A biphenylamidine derivative wherein, in said formula (2), the substituents are as defined above, or a pharmaceutically acceptable salt thereof.

L is a bond;

R2 is a carboxyl group or a methoxycarbonyl group;

X is -O-, or -N(R4)-, wherein

R4 is a hydrogen atom, a methyl group or a 2-hydroxyethyl group;

Y is any of the formulae:

n is 1;

6. A biphenylamidine derivative wherein, in general formula (2), the substituents are as defined above, or a pharmaceutically acceptable salt thereof.

7. A prodrug which generates, in vivo, a biphenylamidine derivative or a pharmaceutically acceptable salt thereof according to any one of said 1 - 6.

8. A blood coagulation inhibitor comprising at least a biphenylamidine derivative or a pharmaceutically acceptable salt thereof according to any one of said 1 - 7, and a pharmaceutically acceptable carrier.

9. A prophylactic agent for thrombosis or embolism, comprising at least a biphenylamidine derivative or a pharmaceutically acceptable salt thereof according to any one of said 1 - 7, and a pharmaceutically acceptable carrier.

10. A therapeutic agent for thrombosis or embolism, comprising at least a biphenylamidine derivative or a pharmaceutically acceptable salt thereof according to any one of said 1 - 7, and a pharmaceutically acceptable carrier.

The inventors, as a result of intensive efforts to achieve the above object, have devised the above inventions 1 - 10. The present invention is detailed in the following description.

In the definitions regarding the substituents in a compound of formula (1) of the present invention:

The term "C1-8 alkyl" means a branched or straight carbon chain having 1 to 8 carbons, and includes, for example, methyl, ethyl, propyl, isopropyl, butyl, isobutyl, tert-butyl, pentyl, neo-pentyl, isopentyl, 1,2-dimethylpropyl, hexyl, isohexyl, 1,1-dimethylbutyl, 2,2-dimethylbutyl, 1-ethylbutyl, 2-ethylbutyl, isoheptyl, octyl, or isooctyl, etc.
Among them, one having 1 to 4 carbons is preferable, and methyl or ethyl is particularly preferable.

The term "C1-8 alkoxy" means an alkoxy group having 1 to 8 carbons, and includes, for example, methoxy, ethoxy, propoxy, isopropoxy, butoxy, isobutoxy, sec-butoxy, tert-butoxy, pentyloxy, neo-pentyloxy, tert-pentyloxy, 2-methylbutoxy, hexyloxy, isohexyloxy, heptyloxy, isoheptyloxy, octyloxy, or isooctyloxy, etc. Among them, one having 1 to 4 carbons is preferable, and methoxy or ethoxy is particularly preferable.

The term "C1-4 alkylene" means a straight alkylene having 1 to 4 carbons, and includes methylene, ethylene, propylene, or butylene.

The term "C1-8 alkoxycarbonyl" means methoxycarbonyl, ethoxycarbonyl, propoxycarbonyl, isopropoxycarbonyl, butoxycarbonyl, isobutoxycarbonyl, sec-butoxycarbonyl, tert-butoxycarbonyl, pentyloxycarbonyl, isopentyloxycarbonyl, neopentyloxycarbonyl, hexyloxycarbonyl, heptyloxycarbonyl, or octyloxycarbonyl, etc.; preferably, it is methoxycarbonyl, ethoxycarbonyl or tert-butoxycarbonyl; and more preferably, it is methoxycarbonyl.

The term "aryloxycarbonyl" means phenoxycarbonyl, naphthyloxycarbonyl, 4-methylphenoxycarbonyl, 3-chlorophenoxycarbonyl, or 4-methoxyphenoxycarbonyl, etc.; and preferably, it is phenoxycarbonyl.

The term "aralkoxycarbonyl" means benzyloxycarbonyl, 4-methoxybenzyloxycarbonyl, or 3-trifluoromethylbenzyloxycarbonyl, etc.; and preferably, it is benzyloxycarbonyl.

The term "amino acid" means a natural or non-natural commercially available amino acid; preferably, it is glycine, alanine or β-alanine; and more preferably, it is glycine.

The term "C1-8 alkylcarbonyl" means a carbonyl group having a straight or branched carbon chain having 1 to 8 carbons, and includes, for example, formyl, acetyl, propionyl, butyryl, isobutyryl, valeryl, isovaleryl, pivaloyl, hexanoyl, heptanoyl, or octanoyl, etc.; preferably, it is one having 1 to 4 carbons; and more preferably, it is acetyl or propionyl.
The term "C1-8 alkylsulfenyl" means an alkylsulfenyl group having 1 to 8 carbons, and includes, for example, methylthio, ethylthio, butylthio, isobutylthio, pentylthio, hexylthio, heptylthio, or octylthio, etc.; and preferably, it is methylthio.

The term "C1-8 alkylsulfinyl" means an alkylsulfinyl group having 1 to 8 carbons, and includes, for example, methylsulfinyl, ethylsulfinyl, butylsulfinyl, hexylsulfinyl, or octylsulfinyl, etc.; and preferably, it is methylsulfinyl.

The term "C1-8 alkylsulfonyl" means an alkylsulfonyl group having 1 to 8 carbons, and includes, for example, methylsulfonyl, ethylsulfonyl, butylsulfonyl, hexylsulfonyl, or octylsulfonyl, etc.; and preferably, it is methylsulfonyl.

The term "mono- or di-C1-8 alkylamino" means methylamino, dimethylamino, ethylamino, propylamino, diethylamino, isopropylamino, diisopropylamino, dibutylamino, butylamino, isobutylamino, sec-butylamino, tert-butylamino, pentylamino, hexylamino, heptylamino, or octylamino, etc.; preferably, it is methylamino, dimethylamino, ethylamino, diethylamino or propylamino; and more preferably, it is methylamino or dimethylamino.

The term "mono- or di-C1-8 alkylaminosulfonyl" means, for example, methylaminosulfonyl, dimethylaminosulfonyl, ethylaminosulfonyl, propylaminosulfonyl, diethylaminosulfonyl, isopropylaminosulfonyl, diisopropylaminosulfonyl, dibutylaminosulfonyl, butylaminosulfonyl, isobutylaminosulfonyl, sec-butylaminosulfonyl, tert-butylaminosulfonyl, pentylaminosulfonyl, hexylaminosulfonyl, heptylaminosulfonyl, or octylaminosulfonyl, etc.; preferably, it is methylaminosulfonyl, dimethylaminosulfonyl, ethylaminosulfonyl, diethylaminosulfonyl or propylaminosulfonyl; and more preferably, it is methylaminosulfonyl or dimethylaminosulfonyl.

The term "bis(alkoxycarbonyl)methyl" means, particularly, bis(methoxycarbonyl)methyl or bis(ethoxycarbonyl)methyl, etc.; and preferably, it is bis(methoxycarbonyl)methyl.
The term "C1-10 alkyl" means a straight or branched carbon chain having 1 to 10 carbons, and includes, for example, methyl, ethyl, propyl, isopropyl, butyl, isobutyl, tert-butyl, pentyl, neo-pentyl, isopentyl, 1,2-dimethylpropyl, hexyl, isohexyl, 1,1-dimethylbutyl, 2,2-dimethylbutyl, 1-ethylbutyl, 2-ethylbutyl, heptyl, isoheptyl, 1-methylhexyl, 2-methylhexyl, octyl, 2-ethylhexyl, nonyl, decyl, or 1-methylnonyl, etc. Among them, one having 1 to 4 carbons is preferable, and methyl or ethyl is particularly preferable.

The term "C1-10 alkylcarbonyl" means a carbonyl group having a straight or branched carbon chain having 1 to 10 carbons, and includes, for example, formyl, acetyl, propionyl, butyryl, isobutyryl, valeryl, isovaleryl, pivaloyl, hexanoyl, heptanoyl, octanoyl, nonanoyl, or decanoyl, etc.; preferably, it is one having 1 to 4 carbons; and more preferably, it is acetyl or propionyl.

The term "C1-10 alkylsulfonyl" means an alkylsulfonyl group having 1 to 10 carbons, and includes, for example, methylsulfonyl, ethylsulfonyl, propylsulfonyl, isopropylsulfonyl, butylsulfonyl, isobutylsulfonyl, pentylsulfonyl, isopentylsulfonyl, neopentylsulfonyl, hexylsulfonyl, heptylsulfonyl, octylsulfonyl, nonylsulfonyl, or decylsulfonyl, etc.; preferably, it is one having 1 to 4 carbons; and more preferably, it is methylsulfonyl or ethylsulfonyl.

The term "C3-8 cycloalkyl" means a cycloalkyl group having 3 to 8 carbons, and includes, particularly, cyclopropyl, cyclobutyl, cyclopentyl, cyclohexyl, cycloheptyl, or cyclooctyl; and it is preferably cyclopropyl.

The term "aryl" means, particularly, a carbocyclic aryl group such as phenyl or naphthyl, or a heteroaryl group such as pyridyl or furyl; and preferably, it is phenyl.

The term "C4-8 cycloalkyl" means a cycloalkyl group having 4 to 8 carbons, and includes, particularly, cyclobutyl, cyclopentyl, cyclohexyl, cycloheptyl, or cyclooctyl, etc.; and it is preferably cyclopentyl or cyclohexyl.
The term "aminoalkyl" means a straight alkyl having an amino group and 1 to 8 carbons, and includes, particularly, 8-aminooctyl, 6-aminohexyl, 4-aminobutyl, 2-aminoethyl, or aminomethyl; and preferably, it is 2-aminoethyl or aminomethyl.

The term "mono- or di-alkylamino" means methylamino, dimethylamino, ethylamino, propylamino, diethylamino, isopropylamino, diisopropylamino, dibutylamino, butylamino, isobutylamino, sec-butylamino, tert-butylamino, etc.; preferably, it is methylamino, dimethylamino, ethylamino, diethylamino, isopropylamino, or diisopropylamino; and more preferably, it is ethylamino, diethylamino, or isopropylamino.

The term "mono- or di-alkylaminoalkyl" means, particularly, methylaminoethyl, dimethylaminoethyl, ethylaminoethyl, methylaminopropyl, dimethylaminopropyl, ethylaminopropyl, diethylaminopropyl, methylaminobutyl, or dimethylaminobutyl, etc.; and preferably, it is methylaminoethyl, dimethylaminoethyl, or ethylaminoethyl.

The term "C1-10 alkyl" which binds to a nitrogen atom as Z means a straight or branched carbon chain having 1 to 10 carbons, and is, for example, methyl, ethyl, propyl, isopropyl, butyl, isobutyl, tert-butyl, pentyl, neo-pentyl, isopentyl, 1,2-dimethylpropyl, hexyl, isohexyl, 1,1-dimethylbutyl, 2,2-dimethylbutyl, 1-ethylbutyl, 2-ethylbutyl, heptyl, isoheptyl, 1-methylhexyl, 2-methylhexyl, octyl, 2-ethylhexyl, nonyl, decyl, or 1-methylnonyl, etc. Among them, one having 1 to 4 carbons is preferable, and isopropyl or propyl is particularly preferable.

The term "arylcarbonyl" means benzoyl, 4-methoxybenzoyl, or 3-trifluoromethylbenzoyl, etc.; and preferably, it is benzoyl.

The term "aralkylcarbonyl" includes, particularly, benzylcarbonyl, phenethylcarbonyl, phenylpropylcarbonyl, 1-naphthylmethylcarbonyl, or 2-naphthylmethylcarbonyl, etc.; and preferably, it is benzylcarbonyl.

The term "aralkyl" includes, particularly, benzyl, phenethyl, phenylpropyl, 1-naphthylmethyl, or 2-naphthylmethyl, etc.; and preferably, it is benzyl.
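As a purely illustrative aid (outside the patent text itself), the carbon-count convention behind terms such as "C1-8 alkyl" and "C1-4 alkyl" can be sketched in Python. The name-to-count table below is an assumption covering only the straight-chain radicals named above; branched isomers are omitted for brevity:

```python
# Illustrative sketch: carbon counts for the straight-chain alkyl radicals
# named in the definitions above (branched isomers omitted for brevity).
STRAIGHT_CHAIN_ALKYL = {
    "methyl": 1, "ethyl": 2, "propyl": 3, "butyl": 4,
    "pentyl": 5, "hexyl": 6, "heptyl": 7, "octyl": 8,
    "nonyl": 9, "decyl": 10,
}

def in_carbon_range(name: str, low: int, high: int) -> bool:
    """Return True if the named radical falls inside a C(low)-(high) term."""
    count = STRAIGHT_CHAIN_ALKYL.get(name)
    return count is not None and low <= count <= high

# "C1-8 alkyl" admits octyl but not nonyl; "C1-4 alkyl" stops at butyl.
print(in_carbon_range("octyl", 1, 8))   # True
print(in_carbon_range("nonyl", 1, 8))   # False
print(in_carbon_range("butyl", 1, 4))   # True
```

The same range check applies verbatim to the alkoxy, alkylcarbonyl and alkylsulfonyl families, since each is defined by the carbon count of its underlying chain.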
Further, in the definitions regarding the substituents in a compound of formula (2) of the present invention:

The term "C1-4 alkoxycarbonyl" means methoxycarbonyl, ethoxycarbonyl, propoxycarbonyl, isopropoxycarbonyl, butoxycarbonyl, isobutoxycarbonyl, sec-butoxycarbonyl, or tert-butoxycarbonyl; preferably, it is methoxycarbonyl, ethoxycarbonyl, or tert-butoxycarbonyl; and more preferably, it is methoxycarbonyl.

The term "C1-4 alkyl" means a straight or branched carbon chain having 1 to 4 carbons, and includes, for example, methyl, ethyl, propyl, isopropyl, butyl, isobutyl, or tert-butyl; and preferably, it is methyl or ethyl.

The term "C1-4 alkylcarbonyl" means a carbonyl group having a straight or branched carbon chain having 1 to 4 carbons, and includes, for example, formyl, acetyl, propionyl, butyryl, or isobutyryl, etc.; and preferably, it is acetyl or propionyl.

The term "C5-6 cycloalkyl" means a cycloalkyl group having 5 to 6 carbons, and includes cyclopentyl or cyclohexyl; and it is preferably cyclohexyl.

The term "C1-4 alkoxy" means an alkoxy group having 1 to 4 carbons, and includes, particularly, methoxy, ethoxy, propoxy, isopropoxy, butoxy, isobutoxy, sec-butoxy, or tert-butoxy, etc. Among them, methoxy or ethoxy is preferable.

The compound (1) of the present invention may form acid addition salts. Further, it may form salts with bases, depending on the species of the substituent. These salts are not restricted insofar as they are pharmaceutically acceptable, and include, particularly, mineral acid salts such as the hydrochloride, hydrobromide, hydroiodide, phosphate, nitrate or sulfate, etc.; organic sulfonates such as the methanesulfonate, 2-hydroxyethanesulfonate or p-toluenesulfonate, etc.; and organic carboxylates such as the acetate, trifluoroacetate, propionate, oxalate, citrate, malonate, succinate, glutarate, adipate, tartrate, maleate, malate, or mandelate, etc.
As salts with bases, salts with inorganic bases such as sodium salts, potassium salts, magnesium salts, calcium salts or aluminium salts, and salts with organic bases such as methylamine salts, ethylamine salts, lysine salts or ornithine salts, etc. are included.

The preferred compounds of the invention are found in Table 1. More preferred compounds of the invention are the compounds specified by the following compound numbers among the compounds listed in Table 1. Compound No.: 23, 29, 30, 31, 53, 54, 57, 58, 59, 60, 91, 92, 93, 115, 119, 120, 121, 156, 166, 168, 201, 205, 206, 207, 244, 245, and 246.

The representative strategies for synthesizing compounds of formula (1) of the present invention are detailed in the following description.

According to the present invention, in the case that starting compounds or intermediates have substituents which influence the reaction, such as hydroxyl, amino or carboxyl, it is preferred to protect such functional groups adequately, carry out the reaction such as the etherification, and then remove the protecting group. The protecting group is not limited insofar as it is one which is usually employed for the respective substituent and does not have an adverse effect on the other moieties during the protection and deprotection steps, and includes, for example, trialkylsilyl, C1-4 alkoxymethyl, tetrahydropyranyl, acyl or C1-4 alkoxycarbonyl as a protecting group on hydroxyl; C1-4 alkoxycarbonyl, benzyloxycarbonyl or acyl as a protecting group on amino; and C1-4 alkyl as a protecting group on carboxyl. The deprotection reaction can be carried out according to the processes which are usually practiced for the respective protecting groups.
Among the nitriles which are precursors of the present compounds of formula (1), compounds having an oxygen as X can be synthesized, for example, according to the following reaction (a-1): wherein R1, R3, L, m, and n are as defined in formula (1); Y1 means a substituent Y defined in formula (1) except for one having the structure defined in formula I-3 as a substituent Z on Y; and R8 means hydrogen, fluorine, chlorine, bromine, hydroxyl or protected hydroxyl, amino or protected amino, or C1-8 alkoxy.

That is, as seen in the above reaction (a-1), nitriles which are precursors of the compounds of the invention can be produced by reacting an alcohol of the formula Y1-(CH2)n-OH with the raw material, a biphenylalkyl bromide, in the presence of a base.

Moreover, among the nitriles which are precursors of the present compounds of formula (1), compounds having an oxygen as X can also be synthesized, for example, according to the following reaction (a-2): wherein R1, R3, L, m, and n are as defined in formula (1); and Y1 means a substituent Y defined in formula (1) except for one having the structure defined in formula I-3 as a substituent Z on Y.

That is, nitriles which are precursors of the present compounds can be produced by reacting an alcohol of the formula Y1-(CH2)n-OH with the raw material, 3-bromo-3-iodophenylalkyl bromide, in the presence of a base to produce the 3-bromo-3-iodophenylalkyl ether, then introducing the substituent -L-COOMe into the resulting ether by monocarbonylation or monoalkylation to produce the 3-bromophenylalkyl ether, and then carrying out a coupling reaction with a cyanophenylboronic acid derivative.
The etherification of the first step in reactions (a-1) and (a-2) is carried out using an aliphatic ether solvent such as tetrahydrofuran or diethyl ether, an aprotic hydrocarbon such as benzene or toluene, an aprotic polar solvent such as DMF or HMPA, or a mixture thereof, etc.; as the base, a metal oxide such as barium oxide or zinc oxide, a metal hydroxide such as sodium hydroxide or potassium hydroxide, or a metal hydride such as sodium hydride, etc. is used. The reaction proceeds at 0 - 100°C for 3 - 72 hours with stirring. Preferably, it is carried out at 20 - 80°C for 8 - 36 hours, using sodium hydride, in an absolute aliphatic ether such as THF or diethyl ether.

(i) Monocarbonylation by introduction of carbon monoxide (in the case that L is a bond): The iodine can be substituted with a methoxycarbonyl group by dissolving the ether obtained from the first step of reaction (a-2) in methanol, adding a divalent palladium catalyst, a base such as a tertiary amine (for example, triethylamine), and optionally a phosphine ligand such as triphenylphosphine, and stirring for 3 - 48 hours at room temperature or under heating in an atmosphere of carbon monoxide. Preferably, it is carried out using bis(triphenylphosphine)palladium or palladium acetate as the catalyst and diisopropylethylamine or tributylamine as the base, at 60 - 80°C for 12 - 36 hours.
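The broad and preferred windows quoted for the etherification step (0 - 100°C for 3 - 72 hours broadly; 20 - 80°C for 8 - 36 hours preferred) can be captured in a small bookkeeping sketch. This is purely illustrative and not part of the patent; the function name and the three-way classification are assumptions:

```python
# Illustrative check that a chosen etherification condition sits inside the
# broad window (0-100 deg C, 3-72 h) and, optionally, the preferred window
# (20-80 deg C, 8-36 h) stated for the first step of reactions (a-1)/(a-2).
def classify_conditions(temp_c: float, hours: float) -> str:
    if not (0 <= temp_c <= 100 and 3 <= hours <= 72):
        return "outside stated range"
    if 20 <= temp_c <= 80 and 8 <= hours <= 36:
        return "preferred"
    return "acceptable"

print(classify_conditions(60, 24))   # preferred
print(classify_conditions(5, 4))     # acceptable
print(classify_conditions(120, 24))  # outside stated range
```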
(ii) Monoalkylation using an organozinc reagent (in the case that L is a C1-4 alkylene group): The iodine can be substituted with the alkyl group by dissolving the ether obtained from the first step of reaction (a-2) and a zero-valent palladium catalyst such as tetrakis(triphenylphosphine)palladium in a solvent such as THF, DMF, benzene or toluene, or a mixture thereof, adding to this solution a THF solution containing the alkylzinc reagent of formula I-Zn-L-COOMe, and stirring for 3 - 48 hours at room temperature or under heating under an inert atmosphere. Preferably, it is carried out using tetrakis(triphenylphosphine)palladium as the catalyst and THF as the solvent, at 20 - 80°C for 6 - 36 hours.

The biphenylation which is the third step of reaction (a-2) can be carried out by reacting the monohalide with a cyanophenylboronic acid in the presence of a palladium catalyst. This reaction usually proceeds by heating, with stirring in DMF, the monohalide obtained from the second step of reaction (a-2) together with the boronic acid, a divalent palladium catalyst such as palladium acetate, and additionally a base such as triethylamine and a triarylphosphine, to produce the cyanobiphenyl compound of interest. Preferably, it is carried out at 60 - 100°C for 2 - 24 hours.
Moreover, among the nitriles which are precursors of the present compounds of formula (1), compounds having a nitrogen as X can be synthesized, for example, according to the following reactions (b-1) and (b-2):

wherein R1, R3, L, m, and n are as defined in formula (1); R9 means fluorine, chlorine, bromine, hydroxyl or protected hydroxyl, amino or protected amino, C1-8 alkoxy, or methoxycarbonyl among substituent R2 defined in formula (1); Y1 means a substituent Y defined in formula (1) except for one having the structure defined in formula I-3 as a substituent Z on Y; R10 means a substituent R4 except for hydrogen and aryl; and E is a leaving group such as chlorine, bromine, iodine, acyloxy or sulfonyloxy;

and wherein R1, R3, L, m, and n are as defined in formula (1); R9 means fluorine, chlorine, bromine, hydroxyl or protected hydroxyl, amino or protected amino, C1-8 alkoxy, or methoxycarbonyl among substituent R2 defined in formula (1); Y1 means a substituent Y defined in formula (1) except for one having the structure defined in formula I-3 as a substituent Z on Y; Ar means aryl; and E is a leaving group such as chlorine, bromine, iodine, acyloxy or sulfonyloxy.

The N-alkylation of reactions (b-1) and (b-2) can be carried out under known alkylation conditions. That is, the starting material, a biphenylalkyl bromide, can be reacted with an amine of the formula Y1-(CH2)n-NH2 in the presence of an inorganic salt such as potassium carbonate or an amine such as a tertiary amine acting as a base, to produce a secondary amine which is a compound of the present invention. This compound can be reacted with an alkylating agent of the formula R10-E to produce a tertiary amine which is a compound of the present invention. The above reactions are usually carried out by mixing the amine with the alkylating agent in an arbitrary ratio in a suitable solvent, and stirring for 1 - 96 hours under cooling, at room temperature or under heating.
Usually, the reactions are carried out using, as a base, an inorganic salt such as potassium carbonate or sodium carbonate or an organic tertiary amine such as triethylamine or pyridine, and using, as a solvent, an alcohol such as methanol or ethanol, a hydrocarbon such as benzene or toluene, or a solvent which does not influence the reaction, such as THF, dioxane, acetonitrile, DMF or DMSO, or a mixture thereof, at a ratio of alkylating agent to amine of 1:10 - 10:1. Preferably, it is done at an alkylating agent to amine ratio of 1:5 - 1:1, at room temperature or under heating, for 2 - 24 hours.

Among the nitriles which are precursors of the present compounds of formula (1), compounds having a sulfur as X can be synthesized, for example, according to the following reactions (c-1) and (c-2):

wherein R1, R3, L, m, and n are as defined in formula (1); R9 means fluorine, chlorine, bromine, hydroxyl or protected hydroxyl, amino or protected amino, C1-8 alkoxy, or methoxycarbonyl among substituent R2 defined in formula (1); Y1 means a substituent Y defined in formula (1) except for one having the structure defined in formula I-3 as a substituent Z on Y; and E is a leaving group such as chlorine, bromine, iodine, or sulfonate;

and wherein R1, R3, L, m, and n are as defined in formula (1); R9 means fluorine, chlorine, bromine, hydroxyl or protected hydroxyl, amino or protected amino, C1-8 alkoxy, or methoxycarbonyl among substituent R2 defined in formula (1); Y1 means a substituent Y defined in formula (1) except for one having the structure defined in formula I-3 as a substituent Z on Y; and E is a leaving group such as chlorine, bromine, iodine, or sulfonate.

The thioetherification of reactions (c-1) and (c-2) can be carried out under known thioetherification conditions.
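The stoichiometry quoted for the N-alkylation (alkylating agent to amine from 1:10 to 10:1, preferably 1:5 to 1:1) can be expressed as a small sketch. This is illustrative only and not part of the patent; the function name and return labels are assumptions:

```python
# Illustrative sketch of the alkylating-agent : amine ratio windows quoted
# for the N-alkylation (broad 1:10 to 10:1; preferred 1:5 to 1:1).
def ratio_window(agent_equiv: float, amine_equiv: float) -> str:
    r = agent_equiv / amine_equiv  # ratio expressed as agent/amine
    if not (1 / 10 <= r <= 10):
        return "outside stated range"
    if 1 / 5 <= r <= 1:
        return "preferred"
    return "acceptable"

print(ratio_window(1, 5))   # preferred  (1:5)
print(ratio_window(2, 1))   # acceptable (2:1)
print(ratio_window(20, 1))  # outside stated range
```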
Usually, it is done by mixing the alkyl halide with the thiol in an arbitrary ratio in a suitable solvent in the presence of a base such as sodium hydroxide or ammonia, and stirring under cooling, at room temperature or under heating for 30 minutes to 96 hours. As the solvent, those which do not affect the reaction, such as water, ethanol, DMF or toluene, are employed, and as the base, sodium hydroxide, ammonia or cesium carbonate, etc. is employed. The reactions are preferably carried out by mixing at an alkyl halide to thiol ratio of 1:5 - 5:1, and stirring at room temperature or under heating for 30 minutes to 24 hours.

Moreover, the resulting sulfide can be subjected to oxidation, such as in the following reaction (d), to produce a compound having a sulfoxide or sulfone as X among the compounds of formula (1): wherein R1, R3, L, m, and n are as defined in formula (1); R9 means fluorine, chlorine, bromine, hydroxyl or protected hydroxyl, amino or protected amino, C1-8 alkoxy, or methoxycarbonyl among substituent R2 defined in formula (1); and Y1 means a substituent Y defined in formula (1) except for one having the structure defined in formula I-3 as a substituent Z on Y.

The oxidation of reaction (d) can be carried out according to a process described in Jikken Kagaku Kohza (The 4th Edition), 24, Organic Synthesis VI - heteroelement and metallic element compounds -, p. 350 - 373, edited by the Chemical Society of Japan. Usually, the reaction is carried out on the sulfide or sulfoxide using water or an alcohol such as ethanol as the solvent and hydrogen peroxide, peracetic acid, metaperiodic acid or m-chloroperbenzoic acid, etc. as the oxidizing agent, under cooling, at room temperature or under heating, with stirring for 30 minutes to 24 hours. Preferably, the sulfoxide is produced over 30 minutes to 12 hours at 0 - 20°C, while the sulfone is produced over 1 - 12 hours at 0 - 80°C.
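The two preferred windows stated for reaction (d) (sulfoxide at 0 - 20°C over 30 minutes to 12 hours; sulfone at 0 - 80°C over 1 - 12 hours) can be encoded as a table for quick comparison. This is an illustrative sketch outside the patent text; the function name is an assumption:

```python
# Illustrative encoding of the preferred oxidation windows in reaction (d):
# sulfoxide at 0-20 deg C for 0.5-12 h; sulfone at 0-80 deg C for 1-12 h.
def window_admits(product: str, temp_c: float, hours: float) -> bool:
    windows = {
        "sulfoxide": ((0, 20), (0.5, 12)),
        "sulfone":   ((0, 80), (1, 12)),
    }
    (t_lo, t_hi), (h_lo, h_hi) = windows[product]
    return t_lo <= temp_c <= t_hi and h_lo <= hours <= h_hi

print(window_admits("sulfoxide", 10, 2))  # True
print(window_admits("sulfoxide", 50, 2))  # False (above the sulfoxide window)
print(window_admits("sulfone", 50, 2))    # True
```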
Further, among the nitriles which are precursors of the present compounds of formula (1), compounds having an amide linkage as X can be synthesized, for example, according to the following reactions (e-1) and (e-2):

wherein R1, R3, R5, L, m, and n are as defined in formula (1); R9 means fluorine, chlorine, bromine, hydroxyl or protected hydroxyl, amino or protected amino, C1-8 alkoxy, or methoxycarbonyl among substituent R2 defined in formula (1); Y1 means a substituent Y defined in formula (1) except for one having the structure defined in formula I-3 as a substituent Z on Y; and G is halogen, acyloxy, p-nitrophenoxy or hydroxyl, etc.;

and wherein R1, R3, R5, L, m, and n are as defined in formula (1); R9 means fluorine, chlorine, bromine, hydroxyl or protected hydroxyl, amino or protected amino, C1-8 alkoxy, or methoxycarbonyl among substituent R2 defined in formula (1); Y1 means a substituent Y defined in formula (1) except for one having the structure defined in formula I-3 as a substituent Z on Y; and G is halogen, acyloxy, p-nitrophenoxy or hydroxyl, etc.

The reactions (e-1) and (e-2) can be carried out under known amidation conditions. Usually, the amides can be obtained by mixing an active derivative of the carboxylic acid with the amine compound in a suitable solvent in the presence of a base, for acylation. As the active derivatives of carboxylic acids, acid halides, mixed acid anhydrides, or active esters such as p-nitrophenyl esters, etc. are employed, under cooling or at room temperature, for 30 minutes to 24 hours. Preferably, it is done in a halogenated hydrocarbon such as dichloromethane, an aliphatic ether such as THF or diethyl ether, a solvent such as acetonitrile or DMF, or a solvent mixture thereof, using a tertiary amine such as triethylamine as the base, at 0 - 20°C for 1 - 18 hours. Also, these amides can be obtained by condensation between the amine and the carboxylic acid in the presence of a condensing agent such as a carbodiimide.
In this case, a solvent such as DMF or a halogenated hydrocarbon such as chloroform is suitable, while N,N-dicyclohexylcarbodiimide, 1-ethyl-3-(3-(N,N-dimethylamino)propyl)carbodiimide, carbonyldiimidazole, diphenylphosphoryl azide, or diethylphosphoryl cyanide is suitable as the condensing agent. The reaction is usually carried out under cooling or at room temperature for 2 - 48 hours.

Moreover, among the nitriles which are precursors of the present compounds of formula (1), compounds having a sulfonamide structure as X can be synthesized, for example, according to the following reactions (f-1) or (f-2):

wherein R1, R3, R5, L, m, and n are as defined in formula (1); R9 means fluorine, chlorine, bromine, hydroxyl or protected hydroxyl, amino or protected amino, C1-8 alkoxy, or methoxycarbonyl among substituent R2 defined in formula (1); and Y1 means a substituent Y defined in formula (1) except for one having the structure defined in formula I-3 as a substituent Z on Y;

and wherein R1, R3, R5, L, m, and n are as defined in formula (1); R9 means fluorine, chlorine, bromine, hydroxyl or protected hydroxyl, amino or protected amino, C1-8 alkoxy, or methoxycarbonyl among substituent R2 defined in formula (1); and Y1 means a substituent Y defined in formula (1) except for one having the structure defined in formula I-3 as a substituent Z on Y.

The reactions (f-1) and (f-2) can be carried out by reacting the amine with an active derivative of the sulfonic acid in a suitable solvent in the presence of a base to produce the sulfonamide of interest. As the active derivative of the sulfonic acid, a sulfonyl halide is preferable, and the reaction is carried out in a halogenated hydrocarbon such as dichloromethane, an aliphatic ether such as THF or diethyl ether, a solvent such as acetonitrile or DMF, or a mixture of these solvents, at 0 - 20°C for 1 - 24 hours, using a tertiary amine such as triethylamine as the base.
Also, among nitriles which are precursors of the present compounds of formula (1), compounds having a urea structure as X can be synthesized, for example, according to the following reaction (g):

wherein R, R, L, m and n are as defined in formula (1); R means fluorine, chlorine, bromine, hydroxyl or protected hydroxyl, amino or protected amino, C alkoxy, or methoxycarbonyl among substituent R defined in formula (1); and Y means a substituent Y defined in formula (1) except for the one having the structures defined in the formula I-3 as a substituent Z on Y.

That is, compounds having a urea structure as X can be produced by reacting, as a raw material, an amine with isocyanate derivatives in a suitable solvent under cooling to heating. A solvent used in this reaction can be DMF, THF, dioxane, dichloroethane, chloroform, acetonitrile, DMSO, benzene, or toluene, etc.

The nitriles which are precursors of the compound of the present invention produced by the above reactions (a-1), (a-2), (b-1), (b-2), (c-1), (c-2), (d), (e-1), (e-2), (f-1), (f-2), and (g) can be converted to the benzamidine derivatives which are compounds of the present invention by the amidination reaction as follows:

wherein R, R, L, X, m and n are as defined in formula (1); Y means a substituent Y defined in formula (1) except for the one having the structures defined in the formula I-3 as a substituent Z on Y; R means fluorine, chlorine, bromine, hydroxyl or protected hydroxyl, amino or protected amino, C alkoxy, or methoxycarbonyl among substituent R defined in formula (1); and R means C alkyl.
This amidination is carried out according to the reaction conditions detailed in the following (iii) or (iv):

(iii) Amidination through imidation using hydrogen halide in alcohol solution: The reaction by which the imidates are obtained from nitriles and alcohols proceeds, for example, by dissolving alkoxymethylphenylbenzonitriles in alcohols having 1 to 4 carbons (ROH) containing hydrogen halides such as hydrogen chloride or hydrogen bromide, etc. with stirring. The reaction is usually carried out at -20 - 30°C for 12 - 96 hours. Preferably, it is done in a methanol or ethanol solution of hydrogen chloride, at -10 - +30°C, for 24 - 72 hours. The reaction between the imidate and ammonia proceeds by stirring the imidate in an alcohol having 1 to 4 carbons such as methanol or ethanol containing ammonia or amines such as hydroxylamine, hydrazine or carbamate ester, or in aliphatic ethers such as diethyl ether, or in halogenated hydrocarbons such as dichloromethane or chloroform, or a mixture thereof, to produce the benzamidine derivative which is a compound of the present invention. The reaction is usually carried out at a temperature of -10 - +50°C for 1 to 48 hours. Preferably, it is carried out at 0 - 30°C for 2 - 12 hours.

(iv) Amidination through an imidate prepared by direct bubbling of hydrogen halide: The reaction between nitriles and alcohols proceeds, for example, by dissolving nitriles in aliphatic ethers such as diethyl ether, or halogenated hydrocarbons such as chloroform, or aprotic solvents such as benzene, adding the equivalent or an excess of an alcohol having 1 to 4 carbons (ROH), bubbling hydrogen halides such as hydrogen chloride or hydrogen bromide at -30 - 0°C for 30 minutes to 6 hours with stirring, then stopping the bubbling, and stirring at 0 - 50°C for 3 - 96 hours.
Preferably, it is done by bubbling hydrogen chloride for 1 - 3 hours at -10 - 0°C with stirring in halogenated hydrocarbons containing the equivalent or an excess of methanol or ethanol, then stopping the bubbling, and stirring at 10 - 40°C for 8 - 24 hours. The resulting imidates can be converted to the benzamidine derivatives (1) which are compounds of the present invention by stirring them in alcohol solvents having 1 to 4 carbons such as methanol or ethanol containing ammonia or amines such as hydroxylamine, hydrazine or carbamate ester, or aliphatic ether solvents such as diethyl ether, or halogenated hydrocarbon solvents such as chloroform, or a mixture thereof. The reaction is usually carried out at a temperature of -20 - +50°C for 1 - 4 hours. Preferably, it is carried out in saturated ammonia-ethanol solution at 0 - 30°C for 2 - 12 hours.

Among the compounds of the present invention of formula (1), compounds having a substituent Y wherein a substituent Z has the structures defined in formula I-3 can be produced by carrying out the imidoylation of the following (j-1) and (j-2), after yielding the benzamidine compounds having a secondary amino group in a substituent Y by the above reaction (h):

wherein R, R, R, L, W, X, Z, m and n are as defined in formula (1); R means fluorine, chlorine, bromine, hydroxyl or protected hydroxyl, amino or protected amino, C alkoxy, or methoxycarbonyl among substituent R defined in formula (1).

wherein R, R, R, L, W, X, Z, m and n are as defined in formula (1); R means fluorine, chlorine, bromine, hydroxyl or protected hydroxyl, amino or protected amino, C alkoxy, or methoxycarbonyl among substituent R defined in formula (1).
This imidoylation proceeds by mixing the benzamidine compounds having a secondary amino group in a substituent Y with the equivalent or an excess of imidates in water, or alcohols having 1 to 4 carbons such as methanol or ethanol, or aliphatic ethers such as diethyl ether, or halogenated hydrocarbons such as chloroform, or polar solvents such as DMF or DMSO, or a mixture thereof, in the presence of bases, with stirring. The reaction is usually carried out at room temperature for 1 - 24 hours. As a base, N-methylmorpholine, triethylamine, diisopropylethylamine, sodium hydroxide, or potassium hydroxide, etc. can be used.

Among the compounds of the present invention of formula (1), compounds having a carboxyl as R are produced by ester hydrolysis of compounds having methoxycarbonyl as R among the benzamidine compounds produced by the above reactions (h), (j-1) and (j-2). This hydrolysis can be carried out under a basic condition, an acidic condition, or a neutral condition, as necessary. In the reaction under the basic condition, as a base, sodium hydroxide, potassium hydroxide, lithium hydroxide, or barium hydroxide, etc. can be used; under the acidic condition, hydrochloric acid, sulfuric acid, or Lewis acids such as boron trichloride, trifluoroacetic acid, or p-toluenesulfonic acid, etc. are included; while under the neutral condition, halogen ions such as lithium iodide or lithium bromide, alkali metal salts with thiol or selenol, iodotrimethylsilane, and enzymes such as esterase are included. The solvent for use includes polar solvents such as water, alcohols, acetone, dioxane, THF, DMF, DMSO, etc., or a mixture thereof. The reaction is usually carried out at room temperature or under heating for 2 - 96 hours. Suitable reaction temperatures, reaction times, etc. differ depending on the reaction conditions used and can be selected appropriately by a conventional process.
In the compounds having a carboxyl as a substituent R obtained from the above process, the carboxyl can be converted to other esters by the following process (v), (vi), or (vii):

(v) Conversion from carboxyl to alkoxycarbonyl: The carboxyl can be converted to the alkoxycarbonyl by reacting compounds having carboxyl as a substituent R among compounds of formula (1) with the equivalent or an excess of alkylating agents (for example, acyloxymethyl chlorides such as acetoxymethyl chloride or pivaloyloxymethyl chloride, or allyl chlorides, or benzyl chlorides) in halogenated hydrocarbons such as dichloromethane, or aliphatic ethers such as THF, or aprotic polar solvents such as DMF, or a mixture thereof, in the presence of tertiary amines such as triethylamine or diisopropylethylamine, at -10 - +80°C for 1 - 48 hours. Preferably, it is done using the equivalent to a slight excess of the alkylating agent, in the presence of diisopropylethylamine, at 20 - 60°C, for 2 - 24 hours.

(vi) Conversion from carboxyl to aralkoxycarbonyl: The carboxyl can be converted to the aralkoxycarbonyl by reacting compounds having carboxyl as a substituent R among compounds of formula (1) with the equivalent or an excess of alcohols such as benzyl alcohol in a solvent of halogenated hydrocarbons such as dichloromethane, in the presence of acid catalysts such as hydrogen chloride, sulfuric acid or sulfonic acid. The reaction is usually carried out at room temperature or under heating for 1 - 72 hours. Preferably, it is done using the equivalent to a slight excess of the alcohol in the presence of diisopropylethylamine, at 20 - 60°C, for 2 - 24 hours.
(vii) Conversion of carboxyl to aryloxycarbonyl: The carboxyl can be converted to the aryloxycarbonyl by reacting compounds having carboxyl as a substituent R among compounds of formula (1) with the equivalent or an excess of an aromatic compound having hydroxyl, such as phenol, in a solvent of aliphatic ethers such as diethyl ether, in the presence of condensing agents such as dicyclohexylcarbodiimide. The reaction is usually carried out at 0 - 50°C for 1 - 48 hours. Preferably, it is done at room temperature for 3 - 24 hours.

Also, compounds having a carboxyl as R can be converted to ones having carbamoyl by known techniques, for example, by treating the carboxyl with oxalyl chloride, etc. to produce acid halides, and reacting these with ammonia solution. Similarly, the carboxyl can be converted to N-methyl-N-methoxycarbamoyl by reacting the acid halides with N-methyl-N-methoxyamine, and further this can be converted to alkylcarbonyl by reacting with various alkylmagnesium reagents.

Among the present compounds synthesized by the above processes, in compounds having an amidino group as a substituent A, various carbonyl groups can be introduced onto a nitrogen of the amidino group by the following process (ix), (x), or (xi).

(ix) Aryloxycarbonylation of amidino: Aryloxycarbonyl can be introduced onto a nitrogen of the amidino group by stirring compounds having an amidino as a substituent A among the compounds of formula (1) with the equivalent to an excess of aryl chloroformates such as phenyl chloroformate in a mixed solvent of water and halogenated hydrocarbons such as dichloromethane in the presence of bases such as sodium hydroxide or potassium hydroxide. The reaction is usually carried out at -10 - +40°C for 3 - 48 hours. Preferably, it is done using the equivalent or a little excess of the aryl chloroformate at 0 - 30°C for 6 - 24 hours.
(x) Alkoxycarbonylation of amidino: Alkoxycarbonyl can be introduced onto a nitrogen of the amidino group by reacting compounds having an amidino as a substituent A among the compounds of formula (1) with the equivalent to an excess of a p-nitrophenyl alkyl carbonate in an anhydrous solvent such as THF or DMF in the presence of bases such as metal hydrides such as sodium hydride, or tertiary amines, at -10 - +30°C for 3 - 48 hours. Preferably, it is done with the equivalent to a slight excess of the p-nitrophenyl alkyl carbonate in the presence of tertiary amines such as triethylamine or diisopropylethylamine, at -10 - +40°C for 6 - 24 hours.

(xi) Arylcarbonylation of amidino: Arylcarbonyl can be introduced onto a nitrogen of the amidino group by reacting compounds having an amidino as a substituent A among the compounds of formula (1) with the equivalent to an excess of an aromatic carboxylic acid chloride such as benzoyl chloride in halogenated hydrocarbons such as methylene chloride or solvents such as THF, DMF or pyridine, or a mixture thereof, in the presence of bases such as amines, at -10 - +30°C for 1 - 48 hours. Preferably, it is done with the equivalent to a slight excess of the aromatic carboxylic acid chloride in the presence of amines such as triethylamine, diisopropylethylamine or pyridine, at -10 - +40°C for 2 - 24 hours.

The reaction for introducing a substituent -L-COOMe into the ethers, which is the second step of reaction (a-2), can be carried out according to the following reactions (i) or (ii). Furthermore, the compounds of formula (1) can be produced by an optional combination of other well-known etherification, amidination, hydrolysis, alkylimidoylation, amidation or esterification processes, or processes which are usually employed by those skilled in the art.
The alkoxymethylphenylbenzamidine derivatives (1) produced as above can be isolated and purified by known techniques, for example by extraction, precipitation, fractional chromatography, fractional crystallization, or recrystallization, etc. Further, a pharmaceutically acceptable salt of the compound of the present invention can be produced by subjecting it to a usual salt-forming reaction.

The biphenylamidine derivatives and pharmaceutically acceptable salts thereof of the invention have an effect of inhibiting FXa activity, and can be used as an FXa inhibitor, that is, as a prophylactic agent and/or a therapeutic agent which is clinically applicable against thromboembolism such as myocardial infarction, cerebral thrombosis, thrombosis of peripheral artery or thrombosis of deep vein. Moreover, the biphenylamidine derivatives of the invention can constitute pharmaceutical compositions with pharmaceutically acceptable carriers, and be administered orally or parenterally in various dosage forms. Parenteral administration includes, for example, administration by intravenous, subcutaneous, intramuscular, transdermal, intrarectal, transnasal and instillation methods.

The dosage forms of the pharmaceutical composition include the following. For example, in the case of oral administration, tablets, pills, granules, powder, solution, suspension, syrup, or capsules, etc. can be used. A tablet can be formed by conventional techniques using a pharmaceutically acceptable carrier such as an excipient, binder or disintegrant, etc. Also, the form of a pill, granules, or powder can be produced by conventional techniques using an excipient, etc. in the same manner as the tablet. The form of a solution, suspension or syrup can be produced by conventional techniques using glycerol esters, alcohols, water or vegetable oils, etc. The form of a capsule can be produced by filling a capsule made of gelatine, etc. with the granules, powder or a solution, etc.
Among the agents for parenteral administration, in the case of intravenous, subcutaneous or intramuscular administration, the compound can be administered as an injection. An injection can be produced by dissolving the biphenylamidine derivatives in water-soluble solutions such as, for example, physiological salt solution, or in water-insoluble solutions consisting of organic esters such as, for example, propylene glycol, polyethylene glycol, or vegetable oils, etc.

In the case of transdermal administration, for example, a dosage form such as an ointment or a cream can be employed. The ointment can be produced by mixing the biphenylamidine derivative with fats and oils or vaselines, etc., and the cream can be produced by mixing the biphenylamidine derivative with emulsifiers. In the case of rectal administration, it may be in the form of a suppository using a gelatine soft capsule, etc.

In the case of transnasal administration, it can be used as a formulation consisting of a liquid or powdery composition. As a base of a liquid formulation, water, salt solution, phosphate buffer, or acetate buffer, etc. are used, and it may also contain surfactants, antioxidants, stabilizers, preservatives, or tackifiers. A base of a powdery formulation may include water-absorbing materials such as, for example, highly water-soluble polyacrylates, cellulose low-alkyl ethers, polyethylene glycol, polyvinylpyrrolidone, amylose or pullulan, etc., or water-unabsorbing materials such as, for example, celluloses, starches, proteins, gums or cross-linked vinyl polymers. The water-absorbing materials are preferable. These materials may be mixed for use. Further, antioxidants, colorants, preservatives, or antiseptics, etc. may be added to the powdery formulation. The liquid or powdery formulation can be administered, for example, using a spray apparatus. In the case of eye drop administration, an aqueous or non-aqueous eye drop can be employed.
In the aqueous eye drop, as a solvent, sterilized and purified water or physiological salt solution, etc. can be used. When only sterilized and purified water is employed as a solvent, an aqueous suspended eye drop can be formed by adding a suspending agent such as a surfactant or a high-molecular-weight tackifier, or a soluble eye drop can be formed by adding solubilizers such as nonionic surfactants. In the non-aqueous eye drop, a non-aqueous suspended eye drop can be formed by using injectable non-aqueous solvents as a solvent. In the case of administering through the eyes by means other than eye drops, dosage forms such as eye ointments, application solutions, diffusing agents or insert agents can be used.

Further, in the case of inhalation through the nose or mouth, a solution or suspension containing a biphenylamidine derivative and a pharmaceutical excipient which is generally utilized is inhaled through, for example, an inhalant aerosol spray, etc. Also, a biphenylamidine derivative in the form of dry powder can be administered through an inhalator, etc., which brings it into direct contact with the lung.

To these formulations, if necessary, pharmaceutically acceptable carriers such as isotonic agents, preservatives, wetting agents, buffers, emulsifiers, dispersants or stabilizers, etc. may be added. Also, if necessary, these formulations can be sterilized by the addition of a sterilant, filtration using a bacteria-retaining filter, or treatment with heat or irradiation, etc. Alternatively, it is possible to produce an aseptic solid formulation, which can be dissolved or suspended in a suitable aseptic solution immediately before use.
The dose of the biphenylamidine of the invention differs depending on the kind of disease, route of administration, or condition, age, sex or weight of the patient, etc., but generally is about 1 - 500 mg/day/human body, preferably 10 - 300 mg/day/human body, in the case of oral administration, while it is about 0.1 - 100 mg/day/human body, preferably 0.3 - 30 mg/day/human body, in the case of intravenous, subcutaneous, intramuscular, transdermal, intrarectal, transnasal, instillation or inhalation administration. When the biphenylamidine of the invention is used as a prophylactic agent, it can be administered according to well-known processes, depending on the respective condition.

The present invention will be illustrated using the following Productive Examples, Embodiments, and Experiments. However, the scope of the invention is not restricted in any way by these examples.

1H-NMR (270MHz, CDCl3): δ2.30 (s, 1H), 3.89 (s, 3H), 4.64 (s, 1H), 6.89 (s, 1H), 7.26 (s, 1H), 7.39 (s, 1H).

85g of 3-nitro-5-methoxycarbonylbenzoic acid was dissolved in 200 ml of THF under a flow of nitrogen, and 43.4 ml of borane dimethylsulfide complex was added with stirring under ice-cooling. After stirring for 18 hours, 200 ml of water was added, and then 96g of potassium carbonate was added. It was extracted with ethyl acetate, and the organic layer was washed with salt solution. After drying with magnesium sulfate, the resulting solid was dissolved in 800 ml of ethyl acetate, 750 mg of 10% Pd/C was added, and stirring was continued under a flow of hydrogen. After the reaction was completed, it was subjected to filtration, and then the filtrate was concentrated to produce 64g of the title compound.

1H-NMR (270MHz, CDCl3): δ1.81 (t, 1H, J=5.6Hz), 3.92 (s, 3H), 4.72 (d, 1H, J=5.6Hz), 7.93 (s, 1H), 7.98 (s, 1H), 8.29 (s, 1H).

34.3g of the compound obtained from Productive Example 1 was dissolved in 200 ml of THF, and 75g of hydroiodic acid was added with stirring under ice-cooling.
A 100 ml solution containing 13.73g of sodium nitrite was added. After stirring at 0°C for 40 min., a 150 ml solution containing 34.6g of potassium iodide was added. After stirring at 40°C for 2 hours, 300 ml of water was added and the mixture was concentrated. It was extracted with ethyl acetate, and the organic layer was washed with salt solution. After drying with sodium sulfate, it was purified through silica gel column chromatography to produce 23.1g (42%) of the title compound.

1H-NMR (270MHz, DMSO-d6): δ7.6∼8.3 (m, 4H), 8.5 (brs, 2H).

20g of 3-bromobenzonitrile was dissolved in 100 ml of dry THF and, under a nitrogen atmosphere, 37.6 ml of triisopropoxyborane was added. This solution was cooled to -78°C, and 98.3 ml of 1.6M n-butyllithium hexane solution was added dropwise over 30 min. with stirring. After stirring at room temperature for 30 min., it was cooled to 0°C, and 220 ml of 4M sulfuric acid was added. This solution was refluxed with heating overnight, and then cooled to 0°C again. 340 ml of 5M sodium hydroxide was added, and the mixture was extracted with 200 ml of diethyl ether. The aqueous layer was separated, and 6M hydrochloric acid was added until the pH was 2. It was extracted twice with 300 ml of ethyl acetate and dried with magnesium sulfate, and then the solvent was removed. The resulting crude product was recrystallized from DMF-water to produce 11.6g (72%) of the title compound as acicular light-yellow crystals.

1H-NMR (270MHz, CDCl3): δ2.1 (brs, 1H), 3.96 (s, 3H), 4.84 (d, 2H, J=3.7Hz), 7.5∼8.2 (m, 7H).

3.08g of the compound obtained from the above Productive Example 2 was dissolved in 500 ml of dry THF under a flow of nitrogen, and to this solution, 2.32g of the compound obtained from Productive Example 3, 2.18g of potassium carbonate, and 456 mg of tetrakis(triphenylphosphine)palladium were added and stirred with heating at 90°C overnight.
The reaction was quenched by adding water, and the mixture was extracted with ethyl acetate and dried with magnesium sulfate, and then the solvent was removed. It was purified with silica gel column chromatography to produce 2.05g (73%) of the title compound as colorless crystals. According to the same process as Productive Example 4, the compounds of Productive Examples 5 - 10 which are listed in table 2 were synthesized.

1H-NMR (270MHz, CDCl3): δ3.97 (s, 3H), 4.58 (s, 2H), 7.5∼7.9 (m, 5H), 8.1∼8.2 (m, 2H).

To 1.0g of the compound obtained from the above Productive Example 4, 20 ml of diethyl ether was added to produce a suspension, and then 0.5 ml of phosphorus tribromide was added dropwise slowly. The reactant solution was stirred at room temperature for 19 hours and subjected to extraction. The organic layer was washed with saturated salt solution and dried with sodium sulfate, and then the solvent was removed under vacuum to produce the title compound in the form of a light-yellow solid (1.2g, 98%).

GC-MS (M - N2) = 264.

1.1g of the compound obtained from the above Productive Example 11 was dissolved in 33 ml of DMF, and 325 mg of sodium azide was added. After the reactant solution was stirred at room temperature for 2 hours, 80 mL of water and 120 mL of ethyl acetate were added to extract organic substances, and the aqueous layer was extracted twice with 100 mL of ethyl acetate. The extract was washed with a saturated salt solution and dried with anhydrous sodium sulfate, and the solvent was removed under vacuum to produce light-yellow-colored, oily methyl 3-(3-cyanophenyl)-5-(azidomethyl)benzoate as a crude product.

GC-MS (M - H) = 265.

The methyl 3-(3-cyanophenyl)-5-(azidomethyl)benzoate obtained as above was put in a flask and dissolved in 66 mL of ethanol, and after 1.1g of palladium-barium carbonate was added, the air in the flask was displaced with hydrogen.
Stirring was continued at room temperature for 6 hours, the catalyst was removed by Celite filtration, and the filtrate was concentrated and purified with silica gel chromatography to produce 794 mg of the title compound (Yield of the two steps: 90%).

1H-NMR (270MHz, CDCl3): δ1.0∼1.3 (m, 2H), 1.43 (s, 9H), 1.7∼2.0 (m, 3H), 2.6∼2.8 (m, 4H), 3.95 (s, 3H), 4.0∼4.2 (brs, 4H), 7.5∼7.7 (m, 2H), 7.9∼8.0 (m, 2H), 8.20 (s, 2H).

5.5g of the compound obtained from the above Productive Example 12 was dissolved in 150 ml of dry THF. To this solution, 7.92g of 4-aminomethyl-(N-t-butoxycarbonyl)piperidine was added and the mixture was stirred at room temperature overnight. The reaction was quenched by pouring this solution into a 0.5M potassium bisulfate solution, and the mixture was extracted with ethyl acetate. After drying with sodium sulfate, the solvent was removed to produce 10g of the title compound (potassium bisulfate salt, quantitative). According to the same process as Productive Example 13, the compound of Productive Example 14 which is listed in table 2 was synthesized.

MS(M + 1) = 478

53 mg of the compound obtained from the above Productive Example 12 was dissolved in 2.0 ml of chloroform. To this solution, 57 mg of (N-t-butoxycarbonyl)isonipecotic acid, 27 mg of HOBt, and 48 mg of EDCI hydrochloride were added and the mixture was stirred at room temperature overnight. The reaction mixture was subjected to the cation exchange resin column SCX and the anion exchange resin column SAX for solid-phase extraction, manufactured by Varian, and eluted with methanol with removal of impurities. The eluate was concentrated to produce 100 mg of the title compound quantitatively. According to the same process as Productive Example 15, the compounds of Productive Examples 16 - 22 which are listed in table 2 were synthesized.
Methyl 3-(3-cyanophenyl)-5-[N-[(N-t-butoxycarbonylpiperidin)-4-ylmethyl]-N-methylaminomethyl]benzoate:

1H-NMR (270MHz, CDCl3): δ1.0∼1.9 (m, 5H), 1.49 (s, 9H), 2.22 (s, 3H), 2.2∼2.3 (m, 2H), 2.5∼2.8 (m, 2H), 2.70 (t, 2H, J=12.0Hz), 3.57 (s, 2H), 3.96 (s, 3H), 4.0∼4.2 (m, 2H), 4.64 (s, 1H), 4.72 (s, 1H), 7.5∼7.7 (m, 2H), 7.72 (s, 1H), 7.85 (d, 1H, J=7.6Hz), 8.01 (s, 1H), 8.12 (s, 1H).

464 mg of the compound of the above Productive Example 13 was dissolved in 13 mL of dimethylformamide, 276 mg of potassium carbonate and 94 µL of methyl iodide were added, and after stirring for 6 hours, the extraction was carried out. The organic layer was washed with salt solution and dried with sodium sulfate, the solvent was removed under vacuum, and the residue was purified through silica gel chromatography to produce 289 mg of the title compound (Yield: 61%). According to the same process as Productive Example 23, the compounds of Productive Examples 24 - 27 which are listed in table 2 were synthesized.

1H-NMR (270MHz, CDCl3): δ1.0∼2.0 (m, 5H), 1.45&1.46 (s, 9H), 2.15&2.21 (s, 3H), 2.5∼2.8 (m, 2H), 3.2∼3.3 (m, 2H), 3.96&3.97 (s, 3H), 4.0∼4.3 (m, 2H), 4.64&4.72 (s, 2H), 7.4∼8.0 (m, 6H), 8.1∼8.2 (m, 1H).

464 mg of the compound of the above Productive Example 13 was dissolved in 10 mL of dimethylformamide, and 277 µL of triethylamine was added. 92 µL of acetyl chloride was added and stirring was continued for 2 hours. The mixture was poured onto sodium hydrogencarbonate solution and extracted with ethyl acetate. The organic layer was washed with salt solution and dried with sodium sulfate, the solvent was removed under vacuum, and the residue was purified through silica gel chromatography to produce 349 mg of the title compound (Yield: 69%). According to the same process as Productive Example 28, the compounds of Productive Examples 29 - 32 which are listed in table 2 were synthesized.
MS(M + 1) = 560

Under an atmosphere of nitrogen, 2.0g of the compound of the above Productive Example 13 was dissolved in 20 mL of dry DMF, and this solution was cooled to 0°C. With stirring, 1.38 mL of triethylamine was added, and further, 0.70 mL of trifluoroacetic anhydride was added. After stirring at room temperature for 5 hours, water and ethyl acetate were added. The extraction with ethyl acetate was carried out, the organic layer was washed with diluted hydrochloric acid and sodium hydrogencarbonate solution and dried with magnesium sulfate, and the solvent was removed. The purification through silica gel column chromatography resulted in 1.46g (78%) of the title compound.

After 24 mg of sodium hydride (60% in oil) was suspended in 2.0 mL of dimethylformamide, 2.0 mL of a dimethylformamide solution containing 154 mg of 1-t-butoxycarbonyl-4-(2-hydroxyethyl)piperazine was added, and the mixture was stirred for 10 min. After cooling to -30°C, 143 mg of the compound of Productive Example 11 dissolved in 2.0 mL of dimethylformamide was added, and the mixture was stirred at -30°C to room temperature for 4 hours. It was poured onto aqueous saturated ammonium chloride solution and extracted with ethyl acetate. The combined organic layer was washed with saturated salt solution and dried on magnesium sulfate. After the solvent was removed under vacuum, the purification with silica gel chromatography resulted in 21 mg (Yield: 10%) of the title compound.
1H-NMR (270MHz, CDCl3): δ1.45 (s, 9H), 2.4∼2.5 (m, 4H), 2.66 (t, 2H, J=5.9Hz), 3.4∼3.5 (m, 4H), 3.66 (t, 2H, J=5.8Hz), 3.97 (s, 3H), 4.65 (s, 2H), 7.5∼8.2 (m, 7H).

1H-NMR (270MHz, CDCl3): δ1.0∼1.3 (m, 2H), 1.7∼2.0 (m, 3H), 2.09 (s, 3H), 2.56 (td, 1H, J=12.8, 2.9Hz), 3.06 (td, 1H, J=13.2, 2.0Hz), 3.2∼3.5 (m, 2H), 3.83 (brd, 1H, J=13.5Hz), 3.97 (s, 3H), 4.65 (s, 2H), 4.5∼4.8 (m, 1H), 7.58 (t, 1H, J=7.8Hz), 7.6∼7.8 (m, 1H), 7.72 (s, 1H), 7.85 (d, 1H, J=7.9Hz), 7.90 (s, 1H), 8.03 (s, 1H), 8.17 (s, 1H).

400 mg of the compound of Productive Example 9 was dissolved in 20 mL of methanol, and 20 mL of 2N hydrochloric acid was added with stirring under ice-cooling. After stirring at 0°C to room temperature for 7 hours, concentration yielded the crude product of methyl 3-(3-cyanophenyl)-5-(piperidin-4-ylmethoxymethyl)benzoate. This product was dissolved in 20 mL of dichloromethane, and 3.0 mL of triethylamine was added. 460 µL of acetyl chloride was added with stirring under ice-cooling, the stirring was continued at 0°C to room temperature for 18 hours, and the mixture was poured onto saturated potassium hydrogensulfate solution and extracted with ethyl acetate. The organic layer was washed with saturated sodium hydrogencarbonate solution and then saturated salt water, and dried with magnesium sulfate. After the solvent was removed under vacuum, the purification with silica gel column chromatography resulted in 260 mg (Yield: 74%) of the title compound.

1H-NMR (270MHz, CDCl3): δ1.2∼1.4 (m, 2H), 1.46 (s, 9H), 1.6∼1.8 (m, 3H), 2.17 (d, J=11Hz, 2H), 2.96 (d, J=9Hz, 2H), 3.11 (s, 2H), 3.38 (d, J=6.3Hz), 3.96 (s, 3H), 4.60 (s, 2H), 7.5∼7.9 (m, 5H), 8.03 (s, 1H), 8.15 (s, 1H).

Under an atmosphere of nitrogen, 100 mg of methyl 3-(3-cyanophenyl)-5-(piperidin-4-ylmethoxymethyl)benzoate, which was obtained as in Productive Example 37, was dissolved in 5 ml of dry ethanol, and 56 mg of potassium carbonate and 69 µL of t-butyl bromoacetate were added and the mixture was stirred at 60°C overnight.
The solvent was removed, and the purification through silica gel chromatography produced 10 mg (7.6%) of the title compound.

MS(M + H) = 504

To the compound obtained from Productive Example 35, 5 mL of trifluoroacetic acid was added at 0°C, and the mixture was stirred for 30 min. The solvent was removed. Under an atmosphere of nitrogen, 20 mL of dry methanol was added to the residue, and 300 mg of potassium carbonate and 250 µL of 2-bromoethanol were added and the mixture was stirred at 60°C overnight. The solvent was removed, and the purification with silica gel chromatography resulted in 220 mg (71%) of the title compound.

1H-NMR (270MHz, CDCl3): δ1.0∼1.3 (m, 2H), 1.46 (s, 9H), 1.7∼2.0 (m, 3H), 2.56 (td, 1H, J=12.8, 2.9Hz), 3.05 (td, 1H, J=13.2, 2.0Hz), 3.2∼3.5 (m, 2H), 3.83 (brd, 1H, J=13.5Hz), 4.65 (s, 2H), 4.6∼4.8 (m, 1H), 7.60 (t, 1H, J=7.8Hz), 7.6∼7.8 (m, 1H), 7.74 (s, 1H), 7.85 (d, 1H, J=7.9Hz), 7.90 (s, 1H), 8.03 (s, 1H), 8.16 (s, 1H).

1.43g of the compound of Productive Example 9 was dissolved in 20 mL of methanol, and 2 mL of water was added. 1.54 mL of 4N lithium hydroxide solution was added and the mixture was stirred at room temperature for 3 hours. After acidification by adding a saturated ammonium chloride aqueous solution, the extraction with ethyl acetate was carried out. The organic layer was washed with saturated salt water and dried with magnesium sulfate, the solvent was removed, and the purification through silica gel column chromatography resulted in 1.03g (Yield: 74%) of the title compound.

1H-NMR (270MHz, CDCl3): δ1.0∼1.3 (m, 2H), 1.46 (s, 9H), 1.7∼2.0 (m, 3H), 2.56 (td, 1H, J=12.8, 2.9Hz), 3.0 (brs, 4H), 3.14 (s, 3H), 3.2∼3.5 (m, 2H), 3.83 (brd, 1H, J=13.5Hz), 4.65 (s, 2H), 4.6∼4.9 (m, 1H), 7.60 (t, 1H, J=7.8Hz), 7.6∼7.8 (m, 1H), 7.74 (s, 1H), 7.86 (d, 1H, J=7.8Hz), 7.92 (s, 1H), 8.04 (s, 1H), 8.17 (s, 1H).
300 mg of the compound of Productive Example 38 was dissolved in 10 mL of dichloromethane, 116 µL of oxalyl chloride and then 135 µL of pyridine were added at 0°C, and the mixture was stirred at 0°C for 1 hour. To this reaction solution, 40% dimethylamine solution was added dropwise, and the mixture was stirred at room temperature for 1 hour. Saturated sodium hydrogencarbonate solution was added and the mixture was extracted with ethyl acetate. The organic layer was washed with saturated salt water and dried over magnesium sulfate, and the solvent was removed. The resulting crude product was purified through silica gel column chromatography to produce 268 mg (Yield: 84%) of the title compound.
1H-NMR (270MHz, CDCl3): δ1.0∼1.3 (m, 2H), 1.46 (s, 9H), 1.7∼2.0 (m, 3H), 2.09 (s, 3H), 2.56 (td, 1H, J=12.8, 2.9Hz), 3.06 (td, 1H, J=13.2, 2.0 Hz), 3.2∼3.5 (m, 2H), 3.83 (brd, 1H, J=13.5Hz), 4.65 (s, 2H), 4.5∼4.7 (m, 1H), 7.60 (t, 1H, J=7.9Hz), 7.6∼7.8 (m, 1H), 7.70 (s, 1H), 7.85 (d, 1H, J=7.9Hz), 7.90 (s, 1H), 8.02 (s, 1H), 8.16 (s, 1H).
291 mg of the compound of Productive Example 38 was dissolved in 10 mL of dichloromethane, 116 µL of oxalyl chloride and then 135 µL of pyridine were added at 0°C, and the mixture was stirred at 0°C for 1 hour. Then, 76 mg of N,O-dimethylhydroxylamine hydrochloride was added and the mixture was stirred at room temperature for 1 hour. Saturated sodium hydrogencarbonate aqueous solution was added and the mixture was extracted with ethyl acetate. The organic layer was washed with saturated salt solution and dried over magnesium sulfate, and the solvent was removed. The resulting crude product was dissolved in 10 mL of tetrahydrofuran and, under an atmosphere of nitrogen, at 0°C, 2.29 mL of methylmagnesium bromide was added. After stirring at 0°C for 40 minutes, dilute hydrochloric acid solution was added and the mixture was extracted with ethyl acetate. The organic layer was washed with saturated salt water and dried over magnesium sulfate.
The solvent was removed under vacuum and purification through silica gel column chromatography gave 190 mg (Yield: 65%) of the title compound.
1H-NMR (270MHz, DMSO-d6): δ1.3∼1.5 (m, 2H), 1.7∼2.0 (m, 3H), 2.7∼2.9 (m, 2H), 3.2∼3.3 (m, 2H), 3.38 (d, 2H, J=6.3Hz), 3.91 (s, 3H), 4.64 (s, 2H), 7.69 (t, 1H, J=7.9Hz), 7.86 (d, 1H, J=7.9Hz), 7.99 (s, 1H), 8.02 (s, 1H), 8.07 (d, 1H, J=7.6Hz), 8.15 (s, 1H), 8.28 (s, 1H), 8.55 & 8.85 (brs, 1H), 9.19 & 9.52 (s, 2H).
6.0 g of the compound of Productive Example 9 was dissolved in 60 mL of dichloromethane, and 3.0 mL of methanol was added. Hydrogen chloride gas was bubbled into the solution with stirring under ice-cooling. After stirring at 0°C for 30 minutes and then at room temperature for 20 hours, the mixture was concentrated to dryness. 30 mL of saturated ammonia-ethanol solution was added, and the mixture was stirred at room temperature for 5 hours and concentrated. The resulting crude product was purified using HP-20 column chromatography (30 g, Eluent: water-methanol) to produce the title compound (4.89 g, Yield: 99%). According to the same reaction as the above Example 1, except that HPLC (ODS, Eluent: water-methanol) was used instead of HP-20 column chromatography for isolation and purification, the compounds of Examples 2 - 40 listed in Table 3 were synthesized.
1H-NMR (270MHz, DMSO-d6): δ1.1∼1.4 (m, 2H), 1.7∼2.1 (m, 3H), 2.50 (s, 3H), 3.0∼3.5 (m, 4H), 3.8∼4.0 (m, 1H), 3.91 (s, 3H), 4.0∼4.2 (m, 1H), 4.64 (s, 2H), 7.74 (t, 1H, J=7.8Hz), 7.87 (d, 1H, J=7.6Hz), 8.00 (s, 1H), 8.03 (s, 1H), 8.07 (d, 1H, J=7.6Hz), 8.16 (s, 1H), 8.28 (s, 1H), 8.64 & 9.20 (brs, 1H), 9.25 & 9.54 (brs, 2H).
To 4.79 g of the compound of Example 1 and 3.10 g of ethylacetoimidate·monohydrochloride, 50 mL of ethanol was added. 5.25 mL of triethylamine was added dropwise with stirring under ice-cooling. After increasing the temperature from 0°C to room temperature, stirring was continued for 36 hours and the product was concentrated to dryness.
Purification with HPLC (ODS, Eluent: water-methanol) gave the title compound (4.37 g, Yield: 82%). According to the same reaction as the above Example 41, the compounds of Examples 42 - 57 listed in Table 3 were synthesized. Further, using the same reaction as above except for using ethyl propioneimidate·monohydrochloride instead of ethylacetoimidate·monohydrochloride, the compounds of Examples 58 - 59 listed in Table 3 were synthesized. Further, using the same reaction as above except for using ethyl hydroxyacetoimidate·monohydrochloride instead of ethylacetoimidate·monohydrochloride, the compound of Example 60 listed in Table 3 was synthesized.
1H-NMR (270MHz, DMSO-d6): δ1.2∼1.6 (m, 2H), 1.9∼2.2 (m, 3H), 2.31 (s, 3H), 3.0∼3.4 (m, 2H), 3.47 (d, 2H, J=5.9Hz), 3.9∼4.1 (m, 2H), 4.65 (s, 2H), 7.6∼7.8 (m, 3H), 8.0∼8.1 (m, 2H), 8.10 (s, 1H), 8.24 (s, 1H).
2.71 g of the compound of Example 41 was dissolved in 27 mL of 2N hydrochloric acid, stirred at 70°C for 24 hours, concentrated to dryness, and isolated and purified using HPLC (ODS, Eluent: water-methanol) to produce the title compound (2.00 g, Yield: 76%). According to the same reaction as the above Example 61, the compounds of Examples 62 - 68, 70 and 72 - 83 listed in Table 3 were synthesized. 73 mg of the compound of Example 68 was dissolved in 5 mL of DMF, and to this solution, 38 mg of 1-(3-dimethylaminopropyl)-3-ethylcarbodiimide hydrochloride, 20 mg of glycine, and 50 mg of triethylamine were added and the mixture was stirred at room temperature overnight. The solvent was removed, and isolation and purification using HPLC (ODS, Eluent: water-methanol) gave the title compound (25 mg, Yield: 30%). According to the same reaction as the above Example 69, the compound of Example 71 listed in Table 3 was synthesized.
The substance for analysis was dissolved in water, or in water to which a suitable concentration of organic solvent (DMSO, ethanol or methanol) had been added, to prepare a specimen. To 70 µL of the specimen serially diluted with water, 90 µL of 100 mM Tris buffer (pH 8.4), 20 µL of 50 mM Tris buffer (pH 8.4) containing 50 mU/mL human FXa, and 2 mM substrate (Daiichi Chemical S-2765) were added and incubated for 30 min.; 50 µL of 50% acetic acid was then added, and the absorbance (A405) was determined. As a blank, Tris buffer was added instead of FXa, while as a control, water was added instead of the specimen. The 50% inhibition activity (IC50) was determined as the indication of FXa inhibiting activity. The inhibiting activity of human FXa by the present compounds is listed in Table 4. To 70 µL of the specimen serially diluted with water, 90 µL of 100 mM Tris buffer (pH 8.4), 20 µL of 50 mM Tris buffer (pH 8.4) containing 1 U/mL human thrombin, and 2 mM substrate (Daiichi Chemical S-2238) were added and incubated for 30 min.; 50 µL of 50% acetic acid was then added, and the absorbance (A405) was determined. As a blank, Tris buffer was added instead of thrombin, while as a control, water was added instead of the specimen. The 50% inhibition activity (IC50) was determined as the indication of thrombin inhibiting activity. The inhibiting activity of human thrombin by the present compounds is listed in Table 4. To 100 µL of normal human plasma (Ci-Trol®, manufactured by DADE), 100 µL of the specimen was added and incubated at 37°C for 1 minute. To this solution, 100 µL of APTT reagent (manufactured by DADE) kept at 37°C was added, and after incubating at 37°C for 2 minutes, 100 µL of 25 mM calcium chloride solution was added, and the coagulation time was determined using a coagulation measurement apparatus manufactured by AMELUNG.
The coagulation time when physiological salt solution was added instead of the analyte was used as a control; the concentration of the specimen corresponding to a 2-fold elongation of this coagulation time (CT2) was calculated, and this value was used as the indication of anticoagulation activity. The human APTT elongation activities of the present compounds are listed in Table 4. The substance for analysis was dissolved in distilled water as a specimen. To 50 µL of the serially diluted specimen, 50 µL of enzyme solution, in which human acetylcholine esterase (manufactured by Sigma, C-5400) had been dissolved in distilled water at 0.1 U/mL, was added. To this solution, 50 µL of a solution prepared by dissolving 5,5'-dithiobis (manufactured by Nacarai Tesque, 141-01) in phosphate buffer (0.1M NaH2PO4-Na2HPO4, pH 7.0) at 0.5 mM was added and mixed, and reacted with 50 µL of a solution in which acetylthiocholine iodide (Wako Company, 017-09313) had been dissolved in phosphate buffer at 3 mM, at room temperature. As a control, distilled water was added instead of the substance for analysis, and the absorbance (A450) was determined over time. As a blank, phosphate buffer was added instead of the enzyme solution, and the 50% inhibition activity (IC50) was determined. The human AChE inhibiting activity of the present compounds is listed in Table 4.
BA(%) = (AUC po)/(AUC iv) × (Dose iv)/(Dose po) × 100
The substance for analysis was dissolved in distilled water (for oral administration; 10 mg/kg) or physiological salt solution (for intravenous administration; 3 mg/kg) to prepare a solution for administration. This solution was administered to fasted ICR mice (male, 6 weeks old), and whole blood was extracted from the heart under ether anesthesia at 5 min. (intravenous administration group only), 15 min., 30 min., 1 hr., 2 hr., and 4 hr.
after the administration, and the plasma was separated by centrifugation (3,500 rpm, 30 min., 4°C) to produce the specimen (n = 4). Using the above method for determining FXa inhibiting activity, a calibration curve for the substance for analysis was prepared in advance and the concentration of the substance for analysis in the specimen was determined. The area under the plasma concentration-time curve (AUC) was calculated, and then the bioavailability in the mouse (BA) was calculated according to the BA formula given above. The bioavailability of the compounds of the invention in the mouse is listed in Table 4.
Industrial Applicability
Table 4
Compound | FXa inhibition activity IC50 (µM) | Thrombin inhibition activity IC50 (µM) | APTT CT2 (µM) | AChE inhibition activity IC50 (µM) | Mouse BA (%)
Compound of Example 61 | 0.063 | > 1000 | 0.76 | 49 | 10
Compound of Example 70 | 0.19 | > 1000 | 2.0 | 140 | 11
Compound of Example 72 | 1.2 | > 1000 | 11 | > 250 | 13
Compound of Example 73 | 1.7 | > 1000 | 13 | > 250 | ND
Compound of Example 74 | 0.59 | > 1000 | 5.4 | 150 | 13
Compound of Example 81 | 0.21 | 820 | 2.8 | 130 | 7
Compound of Example 82 | 0.13 | > 1000 | 1.9 | 170 | 11
Compound of Example 83 | 0.16 | > 1000 | 1.9 | 760 | 12
The biphenylamidine derivatives of the invention, and pharmaceutically acceptable salts thereof, have an effect of inhibiting FXa activity, and can be used as clinically applicable prophylactic and/or therapeutic agents against thromboembolisms such as myocardial infarction, cerebral thrombosis, thrombosis of peripheral arteries or thrombosis of deep veins, as FXa inhibitors.
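The arithmetic used in the assays above can be sketched in code. This is an illustrative outline only, with assumed function names: percent inhibition is computed from blank-corrected absorbances against the uninhibited control, and mouse bioavailability follows the stated formula BA(%) = (AUC po)/(AUC iv) × (Dose iv)/(Dose po) × 100.

```python
# Illustrative sketch (not from the specification) of the assay arithmetic:
# - percent inhibition from A405/A450 readings, blank- and control-corrected
# - mouse bioavailability from the BA formula given above

def percent_inhibition(a_sample, a_control, a_blank):
    """Inhibition (%) of the enzyme reaction relative to the uninhibited
    control, after subtracting the no-enzyme blank from both readings."""
    return 100.0 * (1.0 - (a_sample - a_blank) / (a_control - a_blank))

def bioavailability_pct(auc_po, auc_iv, dose_po_mg_kg, dose_iv_mg_kg):
    """BA(%) = (AUC po / AUC iv) * (Dose iv / Dose po) * 100.
    The doses used above were 10 mg/kg po and 3 mg/kg iv."""
    return (auc_po / auc_iv) * (dose_iv_mg_kg / dose_po_mg_kg) * 100.0
```

For example, a sample absorbance halfway between blank and control corresponds to 50% inhibition, i.e. the IC50 concentration.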
In the field of communication it is known to provide entities of different functionality in a communication system that may comprise one or more networks. An entity, within the meaning of the present application and claims, is a device or a plurality of devices for providing a particular functionality, e.g. a single unit or node, or a collection of units or nodes that act together. One known type of entity is a gateway entity that acts as a gate between one or more entities on one side and one or more entities on the other side. For example, gateway entities can be provided at the transition between two different networks, for allowing communication between the entities of the two networks. Another example of a gateway entity arises in a context where a plurality of application components are provided in a redundant structure (also referred to as a high availability system or a fault tolerant system), connected to a gateway entity that offers entities of an outside network access to the application components, where one application component can take over when another fails. A basic problem with such gateway entities is their complexity. Namely, the gateway entities are designed to process messages being sent between the entities on either side, such that the gateway entities must be able to understand the protocol being used for the messages. For example, in the case of a gateway entity between two different networks, the gateway entity should be arranged to implement every protocol used for messages passed by the gateway entity. In redundant systems, further problems occur in the gateways. Due to the complexity of telecommunication networks, there are various reasons for possible failures, which may occur in the network components themselves, e.g. in the hardware or the software running on those components, but may also be triggered by environmental effects on the network components, preventing users from receiving the offered services.
Service retainability is the key to success of these networks, which means that even during the failure process, when a backup process takes over to provide for continuous service, there is no or only a minimal impact on the user receiving the service. Offering telecom grade high availability generally entails hardware and software components that are designed to have or support high availability functions. These kinds of platforms have telecom grade operating systems and specifically written applications to make use of high availability functions. The design and implementation process of such operating systems and applications is long and expensive. There are many high availability solutions that can be categorized as stateless or stateful, depending on whether the states of the application are preserved during failover, so that the application can continue smoothly, or have to be re-built after failover. Stateful solutions are more appropriate for smooth services, because re-building states may take a considerable amount of time, possibly leading to service disruption or degradation. For example, a stateful high availability system may comprise one or more primary and backup components and additional mechanisms to ensure that states of the primary components are replicated to the backup components. The most widely known system uses the hot-standby or 1+1 redundancy scheme, in which a primary and a backup component work in a mated pair relationship and the primary component serves all requests, while the backup component waits to take over when the primary component goes down due to a failure. During normal operation, application states from the primary component are periodically copied to the backup component so as to have an up-to-date version in case the primary component stops. The failover process changes the role of the backup component to be the primary component until the failed primary component has recovered.
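The hot-standby (1+1) scheme described above can be sketched minimally as follows; the class and method names are illustrative assumptions, not part of any real platform:

```python
# Minimal sketch of 1+1 hot-standby redundancy: the primary serves all
# requests and periodically replicates its state to the backup; on failure
# the backup takes over with the last replicated state.
class Component:
    def __init__(self, name):
        self.name = name
        self.state = {}          # application state (here: a request counter)
        self.alive = True

    def serve(self, request):
        # toy "application": count occurrences of each request key
        self.state[request] = self.state.get(request, 0) + 1
        return self.state[request]

class MatedPair:
    def __init__(self):
        self.primary = Component("primary")
        self.backup = Component("backup")

    def replicate(self):
        # periodic state copy so the backup stays (near) up to date
        self.backup.state = dict(self.primary.state)

    def handle(self, request):
        if not self.primary.alive:
            self.failover()
        return self.primary.serve(request)

    def failover(self):
        # the backup assumes the primary role with the replicated state
        self.primary, self.backup = self.backup, self.primary
```

Any request served after the last replication is not reflected in the backup's state, which illustrates why the replication mechanism must keep states consistent and up to date.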
Another example is a fault tolerant system, in which two or more identical components work in parallel with the same input data. The outputs of these components are compared to identify whether there is a faulty one among them. These dual or multiple modular redundant systems have redundant hardware setups and are inherently stateful, as each component processes the same data in exact parallelism with the other components. Therefore, if one component fails, it can be disabled, while the others can provide the service immediately, as there is no switchover time or other delay in this simple failover mechanism. However, the approaches discussed above that are built up following the primary-backup principle have to apply additional mechanisms to be able to replicate states in the backup component. State replication imposes several requirements: e.g. states need to be consistent and up-to-date in the corresponding components, and the mechanism that is responsible for moving states has to be resilient in itself. Therefore, for state replication, both the operating system and the application have to be designed to support this feature. The operating system has to implement and handle resilient databases and processes, manage their failover, and also support various high availability functions, like fault detection and failover control logic. Further, applications have to be specifically coded to assist the state replication. This implies a long and expensive design and implementation process and a complex system with a long time-to-market cycle. Moreover, neither the operating systems nor the applications are portable among different platforms. Furthermore, dual or multiple modular redundant systems require exact parallel processing in each component, which generally is extremely hard to achieve.
Not only do instructions have to be processed simultaneously, in terms of clock cycles, in different processors, but randomness is also introduced by both the operating system (e.g., port selection, interrupts, task scheduling) and the applications (e.g., adding random fields), which has to be controlled to achieve exact parallel operation. Moreover, the systems require additional mechanisms to decide on the correct value when there is a comparison mismatch at the outputs. This support comes from the board (hardware) itself or from software components; these are proprietary extensions that are unknown in detail. To fulfill these requirements, special hardware and operating systems are usually needed, rendering fault tolerant systems very costly. EP 1 599 099 A1 describes improvements in message-based communications. Here, a method of communicating information between an intermediate element and a source element in a message-based communication system, in which request messages are sent from the source element and in response a corresponding response message is sent from a destination element, is provided. An exchange of messages is known as a transaction in SIP, wherein a transaction comprises all messages from the first request message up to a final response message. The proxy is a stateful proxy determining whether the received message is part of the current transaction, e.g. by matching a transaction identifier for the current message with the information stored in a transaction context. Further, another proxy is provided, which is stateless, i.e., it does not maintain a transaction context and thus does not require a context storage means. U.S. Pat. No. 6,360,270 B1 describes hybrid and predictive admission control strategies for a server. Here, an admission control system for a server, including an admission controller that receives a stream of messages from one or more clients targeted for the server, is described.
The admission controller relays the messages to the server in a stream that corresponds to a number of sessions underway between the clients and the server. The admission controller processes individual ones of the arriving messages based upon the indications provided by the resource monitor and a determination of whether the arriving messages correspond to sessions already underway with the server. For example, a transaction list identifies any session.
FIELD
This disclosure is generally directed to recognizing voice commands for voice responsive devices.
BACKGROUND
A voice responsive device (VRD) is an electronic device that responds to audible or voice commands spoken by users. Examples of VRDs include digital assistants, televisions, media devices, smart phones, computers, tablets, appliances, smart televisions, and internet of things (IOT) devices, to name just some examples. One of the challenges with processing voice commands is that these commands are often spoken and thus received at different volumes. Sometimes a user will speak loudly, other times softly, and different users speak differently. As such, it can become challenging to recognize and process voice commands with such wide variances in audio quality or volume.
SUMMARY
Provided herein are system, apparatus, article of manufacture, medium, method and/or computer program product embodiments, and/or combinations and sub-combinations thereof, for a voice command recognition system (VCR). An embodiment includes a method. The method may include receiving a voice command directed to controlling a device, the voice command including a wake command and an action command. An amplitude of the wake command is determined. A gain adjustment for the voice command is calculated based on a comparison of the amplitude of the wake command to a target amplitude. An amplitude of the action command is adjusted based on the calculated gain adjustment for the voice command based on the comparison of the amplitude of the wake command to the target amplitude. A device command for controlling the device is identified based on the action command comprising the adjusted amplitude. The device command is provided to the device. Another embodiment includes a system that may include a memory and at least one processor communicatively coupled to the memory.
The processor may be configured to receive a voice command directed to controlling a device, the voice command including a wake command and an action command. An amplitude of the wake command is determined. A gain adjustment for the voice command is calculated based on a comparison of the amplitude of the wake command to a target amplitude. An amplitude of the action command is adjusted based on the calculated gain adjustment for the voice command based on the comparison of the amplitude of the wake command to the target amplitude. A device command for controlling the device is identified based on the action command comprising the adjusted amplitude. The device command is provided to the device. A further embodiment includes a non-transitory computer-readable medium having instructions stored thereon that, when executed by at least one computing device, cause the computing device to perform operations. These operations may include receiving a voice command directed to controlling a device, the voice command including a wake command and an action command. An amplitude of the wake command is determined. A gain adjustment for the voice command is calculated based on a comparison of the amplitude of the wake command to a target amplitude. An amplitude of the action command is adjusted based on the calculated gain adjustment for the voice command based on the comparison of the amplitude of the wake command to the target amplitude. A device command for controlling the device is identified based on the action command comprising the adjusted amplitude. The device command is provided to the device.
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1 illustrates a block diagram of a voice command recognition system (VCR), according to some embodiments. FIG. 2 illustrates a block diagram of a multimedia environment, according to some embodiments. FIG. 3 illustrates a block diagram of an example media device, according to some embodiments. FIG.
4 is a flowchart illustrating example operations for providing a voice command recognition system (VCR), according to some embodiments. FIG. 5 is a flowchart illustrating example operations for providing a voice command recognition system (VCR) with beep suppression, according to some embodiments. FIG. 6 illustrates an example computer system useful for implementing various embodiments. The accompanying drawings are incorporated herein and form a part of the specification. In the drawings, like reference numbers generally indicate identical or similar elements. Additionally, generally, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears.
DETAILED DESCRIPTION
Provided herein are system, apparatus, device, method, medium, and/or computer program product embodiments, and/or combinations and sub-combinations thereof, for recognizing voice commands. A voice responsive device (VRD) is an electronic device that responds to audible or voice commands (also called audio commands) spoken by users. Examples of VRDs include digital assistants, televisions, media devices, smart phones, computers, tablets, appliances, smart televisions, and internet of things (IOT) devices, to name just some examples. One of the challenges with processing voice commands is that these commands are often spoken and thus received at different volumes and from varying distances relative to a receiving microphone. Sometimes a user will speak loudly, other times softly, and different users speak differently. Users may also be standing near the microphone or across the room far away from it, which can produce variances in the quality and loudness of the sounds detected or received by the microphone. As such, it can become challenging to recognize and process audio commands with such wide variances in audio quality or volume. FIG. 1 illustrates a block diagram of a voice command recognition system (VCR) 102, according to some embodiments.
Voice command recognition system (VCR) 102 may recognize and process an audible or voice command 104 for operating a voice responsive device (VRD) 106. In some embodiments, VCR 102 may normalize or smooth out the audio characteristics of a voice command 104 for improved audio processing. For example, in some embodiments, VCR 102 may adjust the amplitude of a voice command 104 based on variances between the received characteristics of the voice command 104 relative to target characteristics for processing the voice command 104. These audio adjustments may enable VCR 102 and/or other audio input/output systems to more quickly and/or accurately process the adjusted voice command (relative to the received non-adjusted command). The audio processing may include, for example, identifying a device command 108, converting text-to-speech, performing language translations, or performing other audio processing functionality. In an embodiment, a user 110 may speak into a microphone (mic) 112 to operate VRD 106. Mic 112 may be configured to record or receive audio from a user 110 and/or the environment of the user 110, which may include background noises. Mic 112 may provide the received audio (e.g., speech, noise, and/or other audio) to VCR 102. VCR 102 may receive the voice command 104, identify different sub-commands or speech portions within the received audio, adjust audio characteristics, and identify a device command 108 for operating a VRD 106. In an embodiment, a voice command 104 may include different portions or sub-commands, such as a wake command 114 (also called a wake word) and an action command 116. Wake command 114 may be one or more initial words that signal that a VRD action or operational command 116 is to follow. For example, for the AMAZON ECHO (e.g., VRD 106), a voice command 104 may be "ALEXA, turn on the family room lights," of which "ALEXA" may be the wake command 114 and "turn on the family room lights" may be the action command 116.
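As a toy illustration of the wake/action split described above, a recognized transcript can be divided into its wake command and action command portions. The wake-word list and the function name here are assumptions for illustration, not from the specification:

```python
# Illustrative split of a voice command transcript into a wake command and
# an action command, as in "ALEXA, turn on the family room lights".
WAKE_WORDS = ("alexa", "hey roku", "ok google")  # hypothetical wake-word list

def split_voice_command(transcript):
    """Return (wake_command, action_command); wake is None if not detected."""
    lowered = transcript.lower().lstrip()
    for wake in WAKE_WORDS:
        if lowered.startswith(wake):
            # drop the wake word plus any separating comma/space
            action = transcript.lstrip()[len(wake):].lstrip(" ,")
            return wake, action
    return None, transcript
```

A real system would perform this segmentation on audio rather than text, but the two-part structure (wake command, then action command) is the same.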
VRD 106 may be an electronic device that responds to audible or voice commands 104 spoken by a user 110. Examples of a VRD 106 include, but are not limited to, digital assistants, televisions, media devices, smart phones, computers, tablets, appliances, and internet of things (IOT) devices, to name just some examples. In an embodiment, VRD 106 may be connected to one or more other devices (e.g., sound system, speaker, light, television, thermostat, home security, computer, IoT device), which are operational through voice commands 104 issued to VRD 106. For example, the AMAZON ECHO (e.g., VRD 106) may be connected to a sound bar which may be controlled through the AMAZON ECHO (e.g., turn on/off, volume up/down, increase bass, adjust treble, change audio environment setting, etc.). One of the challenges with processing voice commands 104 is that the audio received from a user 110 can vary in its volume, quality, or other audio characteristics. A user 110 may speak in a loud or soft voice depending on their mood or other environmental circumstances. For example, the user 110 may raise their voice while speaking if they are excited or there is background noise (such as a police siren), or lower their voice if their baby is sleeping or they are tired. Such volume changes or other alterations in the user 110's voice while speaking the voice command 104 may make it difficult to accurately process the user's speech or voice and recognize the action command 116 in the voice command 104. In an embodiment, a gain processor 118 may normalize or apply audio characteristic adjustments to voice command 104 to aid in speech or audio processing. In an embodiment, gain processor 118 may apply a gain adjustment 120 to increase (or decrease) the loudness of at least a portion of the voice command 104.
While gain is identified as an exemplary audio characteristic that is adjusted by VCR 102, it is understood that in other embodiments, other audio characteristics of voice command 104 (such as bass, treble, pitch, speed, etc.) may be adjusted in a similar manner as described herein, and the audio adjustments are not limited to gain, but may include any audio characteristic(s). In an embodiment, VCR 102 may apply audio adjustments to the action command 116 portion of voice command 104 based on an audio analysis of the wake command 114 portion of voice command 104. In an embodiment, gain processor 118 may calculate, measure, or determine a wake amplitude 122 of wake command 114. In an embodiment, amplitude may be a measure of the strength or level of energy or sound pressure. Gain processor 118 may compare wake amp 122 to a target amplitude 124. Target amp 124 may be an ideal, goal, or minimum measure of amplitude for audio that is to undergo voice or speech processing. In an embodiment, the closer the amplitude of audio is to target amp 124, the more accurate the audio processing that may be performed on the audio. Gain processor 118 may calculate a gain adjustment 120. In an embodiment, the gain adjustment 120 may be a difference between wake amp 122 and target amp 124. For example, if wake amp 122 is -30 decibels (dB) and target amplitude is -25 dB, the gain adjustment 120 may be 5 dB. In an embodiment, gain processor 118 may apply the gain adjustment 120 to an action amplitude 126 (e.g., amplitude of action command 116) to generate a gained action command 128. In an embodiment, gain processor 118 may use just-in-time computing processes to apply gain adjustment 120 to action command 116, such that the audio adjustment is performed in real-time without first computing or identifying action amp 126 of action command 116.
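The gain computation described above (measure the wake command's amplitude, compare it to the target, apply the dB difference to the action command) can be sketched as follows. The RMS-in-dB measure and the linear scaling by 10^(g/20) are standard audio conventions, while the function names are assumptions for illustration:

```python
# Sketch of the wake-amplitude-driven gain adjustment described above.
import math

def rms_db(samples):
    """RMS amplitude of linear samples, expressed in dB (full scale = 0 dB)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms)

def gain_adjustment_db(wake_amp_db, target_amp_db):
    """Difference between target and wake amplitude, e.g. a -30 dB wake
    command against a -25 dB target gives a +5 dB adjustment."""
    return target_amp_db - wake_amp_db

def apply_gain(samples, gain_db):
    """Scale linear samples by the dB gain: a gain g multiplies by 10**(g/20)."""
    scale = 10.0 ** (gain_db / 20.0)
    return [s * scale for s in samples]
```

Applying `apply_gain` directly to the incoming action-command samples, without first measuring them, mirrors the just-in-time application described above.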
This real-time application of gain adjustment 120 to action command 116 may save computing resources and time, reducing latency while also increasing the accuracy of speech processing on gained action command 128. In an embodiment, VCR 102 may then identify a device command 108 for operating VRD 106 from the gained action command 128. In an embodiment, identifying device command 108 may include converting gained action command 128 to text, and comparing the text to a set of valid operational commands for VRD 106. FIG. 1 illustrates one example embodiment and configuration of audio processing by VCR 102. However, in other embodiments, the functionality and processing of VCR 102 as described herein may be organized differently. For example, mic 112 and/or VCR 102 functionality may be integrated within VRD 106 as a single device. Or, for example, mic 112 and/or VCR 102 functionality may be integrated into a mobile device or remote control associated with or communicatively coupled to VRD 106. Or, for example, VCR 102 functionality may exist on a cloud network that communicates with the remote control and/or VRD 106 communicatively coupled to mic 112. In an embodiment, VCR 102 may track historical data 130 which may be used to apply a correction 132 to the audio adjustments of gain processor 118. Historical data 130 may include a history of the various gain adjustments 120 applied to voice commands 104 by VCR 102. In an embodiment, the historical data 130 may be organized based on a user or remote control or other device from which the voice command 104 is received. In some embodiments, VCR 102 may also be configured to differentiate between different users 110 using one or more speaker detection processes. For example, based on a cadence or timbre of the voice or speech of user 110, the VCR 102 may use one or more machine learning algorithms to classify and identify the user 110.
In some embodiments, VCR 102 may be configured to differentiate between different users 110 using data from one or more sensors in a remote control that a user 110 may have handled or may be handling. For example, based on a velocity or speed at which the user 110 picks up the remote control or a movement pattern of the remote control, VCR 102 may use one or more machine learning algorithms to classify and identify the user 110. For example, the one or more machine learning algorithms may include classification algorithms, such as, but not limited to, k-Nearest Neighbors, decision trees, naive Bayes, random forest, or gradient boosting. Additionally, the one or more machine learning algorithms may be used to detect certain velocities and/or movement patterns and associate such velocities and/or movement patterns with a given user. In an embodiment, historical data 130 may include movement patterns corresponding to different users 110. As such, when the velocities and/or movement patterns are detected again in the future, VCR 102 may identify a particular user 110 accordingly. The machine learning algorithms may further include an association algorithm, such as, but not limited to, an Apriori algorithm, an Eclat algorithm, or a frequent-pattern growth (FP-growth) algorithm. Based on identifying which user 110 is interacting with the remote control, the VCR 102 may modify a time period for evaluating the RMS amplitude of the voice command versus time. For example, VCR 102 may increase or decrease the time period for evaluating the voice command for each individual user based on the cadence and/or timbre. For example, for some users that speak with a slower cadence than other users, VCR 102 may increase the time period for evaluating the voice command, whereas, for some users that speak with a faster cadence, VCR 102 may decrease the time period for evaluating the voice command 104.
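By way of a non-limiting illustration, the per-user adjustment of the evaluation time period may be sketched as follows. The baseline window, the baseline cadence, and the inverse-proportional scaling rule are all assumptions of this sketch; the disclosure states only that the period is increased for slower speakers and decreased for faster ones.

```python
# Hypothetical sketch: scale the RMS-evaluation window inversely with a
# user's speaking cadence, so slower speakers get a longer window and
# faster speakers a shorter one. All constants are illustrative.

BASELINE_WINDOW_S = 1.0      # assumed default evaluation period, seconds
BASELINE_CADENCE_WPS = 2.5   # assumed "typical" cadence, words per second

def evaluation_window(user_cadence_wps: float) -> float:
    """Return an evaluation window scaled to the user's cadence."""
    return BASELINE_WINDOW_S * (BASELINE_CADENCE_WPS / user_cadence_wps)

slow = evaluation_window(2.0)  # longer window for slower speech
fast = evaluation_window(5.0)  # shorter window for faster speech
```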
As noted above, the adjustments (such as gain adjustment 120 or cadence or other audio adjustments) may be tracked in a database or other storage as historical data 130, which may be organized on a per-user basis. For example, historical data 130 may include a first set of data for audio adjustments applied to voice commands 104 from user A, and a second set of data for audio adjustments applied to voice commands 104 from user B. In some embodiments, historical data 130 may be stored locally at VRD 106 or VCR 102, or accessed from the cloud or via other network capabilities. In an embodiment, VCR 102 may calculate a correction 132 for user A. VCR 102, or another machine learning or cloud-based process, may compare the historical audio adjustments for user A against target amp 124. And if the gained action command 128 for user A is statistically (average, mean, median, etc.) 6 dB below target amp 124, then correction 132 may be 6 dB. Then, for example, when a subsequent voice command 104 is identified as being received from user A, gain processor 118 may adjust the action command 116 by applying both gain adjustment 120 and correction 132. However, a subsequent command from user B would not include correction 132 for user A. Or, for example, correction 132 may be calculated across multiple or all users 110 of the system. In some embodiments, historical data 130 may be used to identify and/or discard (or not record in historical data 130) outliers among voice commands 104. For example, if a user 110 screams into the mic 112, this may register as an outlier relative to normal spoken voice commands 104 from user 110. Less or no gain may be applied to a screamed or loud voice command 104. Similarly, if the user 110 is speaking from across the room, this may require a large gain adjustment 120 that registers as an outlier.
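By way of a non-limiting illustration, computing a per-user correction 132 from historical data 130, with outlier gain adjustments excluded, may be sketched as follows. The 10 dB outlier threshold and the use of the median as the statistic are assumptions of this sketch; the text permits average, mean, median, etc.

```python
import statistics

# Hypothetical sketch: correction 132 is the typical (median) shortfall of
# a user's gained commands relative to target amp 124; gain adjustments
# far from the user's median (screams, far-field speech) are treated as
# outliers and excluded from the history. The 10 dB threshold is assumed.

OUTLIER_THRESHOLD_DB = 10.0

def compute_correction(gained_amps_db, target_amp_db):
    """Median shortfall (in dB) of the user's gained commands vs. target."""
    shortfalls = [target_amp_db - amp for amp in gained_amps_db]
    return statistics.median(shortfalls)

def filter_outliers(gain_adjustments_db):
    """Drop gain adjustments far from the median before using the history."""
    med = statistics.median(gain_adjustments_db)
    return [g for g in gain_adjustments_db
            if abs(g - med) <= OUTLIER_THRESHOLD_DB]
```

For the example above, gained commands for user A that cluster 6 dB below a -25 dB target would yield a 6 dB correction, while a single 40 dB adjustment in a history of roughly 5 dB adjustments would be discarded.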
Both of these outliers may be identified based on comparing the gain adjustment 120 to a median gain adjustment from historical data 130, and may not be used to calculate correction 132. As noted above, the VRD 106 may be voice responsive and operable to recognize and process voice commands 104 spoken by users 110. A voice command 104 typically includes a wake command 114 followed by an action command 116. In some embodiments, an audio receiver 134 of VCR 102 may output an audible notification or beep 136 indicating the voice command 104 was received or that the VRD 106 is listening for an action command 116. In an embodiment, beep 136 may be an audible notification or sound that is output through a speaker of VRD 106, a remote control, a mobile device, or another device with audio output capabilities. In an embodiment, the beep 136 may be output upon identification of the wake command 114, indicating the VRD 106 is ready to receive an action command 116. However, some users 110 may speak fast or continuously, and as such, if user 110 speaks the action command 116 immediately after the wake command 114, without waiting for beep 136 before speaking the action command 116, then the beep 136 may be received by mic 112 at the same time as the action command 116, and thus interfere with the ability to accurately recognize the action command 116 (because beep 136 may be received and interpreted as part of voice command 104). Accordingly, in some embodiments, audio receiver 134 may be configured to detect whether the voice command 104 is a continuous stream of speech such that the user 110 does not pause for any significant length of time (e.g., at least long enough for beep 136 to be output) between speaking the wake command 114 and speaking the action command 116. To determine whether the voice command is a continuous stream of speech, audio receiver 134 may analyze the voice command to detect bursts of energy in a specified period of time. 
For example, audio receiver 134 may analyze an average root mean square (RMS) amplitude of the voice command 104 versus time, and the audio receiver 134 may determine that the voice command 104 is a continuous stream of speech based on such analysis. For example, in some embodiments, audio receiver 134 may analyze a ratio of a peak-to-average amplitude of successive frames of the voice command using exponential-decay smoothing. As an example, the audio receiver 134 may use a sampling rate of 16,000 samples per second, and each frame may include 256 samples. In some embodiments, a ratio greater than or equal to one (1) may indicate that continuous speech is present, whereas a ratio less than one (1) may indicate that continuous speech is not present. Based on whether the audio receiver 134 detected the continuous stream of speech, the audio receiver 134 may cause a remote control 210 or VRD 106 to selectively provide or not provide beep 136. For example, if a continuous stream of speech is detected, audio receiver 134 may lower the volume on, mute, or otherwise suppress the beep 136. In another embodiment, audio receiver 134 may not send a beep 136 signal for audio output. Or, for example, if the continuous stream of speech is not detected, VRD 106 may automatically output beep 136 as it was configured to do, or audio receiver 134 may provide the beep 136 signal for output. In this way, audio receiver 134 ensures that the audible notification (i.e., beep 136) does not interfere with recognizing the action command 116 following the wake command 114 in the spoken voice command 104 when the user 110 does not pause between speaking the wake command 114 and speaking the action command 116.
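By way of a non-limiting illustration, the continuous-speech check may be sketched as follows. This sketch interprets the ratio as each frame's amplitude relative to an exponentially smoothed running average, so that a quiet frame (ratio below one) indicates a pause; the smoothing factor and that interpretation are assumptions of the sketch, not the only reading of the disclosure.

```python
# Hypothetical sketch: successive frames (256 samples at 16,000
# samples/sec) are reduced to an RMS amplitude; each frame's amplitude is
# compared, as a ratio, to an exponentially smoothed running average.
# A ratio below 1 marks a pause; no pause means continuous speech.

SAMPLE_RATE = 16_000
FRAME_SIZE = 256
ALPHA = 0.3  # assumed exponential-decay smoothing factor

def rms(frame):
    """Root mean square amplitude of one frame of samples."""
    return (sum(x * x for x in frame) / len(frame)) ** 0.5

def is_continuous_speech(frames, threshold=1.0):
    """Return True when no frame falls below the smoothed running average."""
    running = None
    for frame in frames:
        amp = rms(frame)
        if running is None:
            running = amp  # seed the running average with the first frame
            continue
        ratio = amp / (running or 1e-12)  # guard against a silent seed
        if ratio < threshold:
            return False  # quiet frame relative to the average: a pause
        running = ALPHA * amp + (1 - ALPHA) * running
    return True
```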
In some embodiments, in addition to suppressing the audible notification when the voice command is determined to be a continuous stream of speech, the audio receiver 134 may cause the remote control and/or the VRD 106 to provide an alternative non-audio notification to the user 110 responsive to detecting the wake command 114. For example, a visual or haptic notification may be provided to the user 110 via the remote control (using any well-known haptic module in the remote control), and/or the VRD 106. In this way, the user 110 may still be notified that VRD 106 or remote control received and/or recognized the wake command 114 and is processing the voice command 104 or awaiting the action command 116. Solely for purposes of convenience, and not limitation, embodiments of this disclosure are described with respect to an example multimedia environment 202 shown in FIG. 2. However, this example application is provided solely for illustrative purposes, and is not limiting. Embodiments of this disclosure are applicable to any VRD in any application and/or environment, as will be understood by persons skilled in the relevant art(s) at least based on the teachings contained herein. FIG. 2 illustrates a block diagram of a multimedia environment 202, according to some embodiments. In a non-limiting example, multimedia environment 202 may be directed to streaming media. However, this disclosure is applicable to any type of media (instead of or in addition to streaming media), as well as any mechanism, means, protocol, method and/or process for distributing media. The multimedia environment 202 may include one or more media systems 204. A media system 204 could represent a family room, a kitchen, a backyard, a home theater, a school classroom, a library, a car, a boat, a bus, a plane, a movie theater, a stadium, an auditorium, a park, a bar, a restaurant, or any other location or space where it is desired to receive and play streaming content.
User(s) 232 may interact with the media system 204 to select and consume content. Each media system 204 may include one or more media devices 206 each coupled to one or more display devices 208. It is noted that terms such as "coupled," "connected to," "attached," "linked," "combined" and similar terms may refer to physical, electrical, magnetic, logical, etc., connections, unless otherwise specified herein. Media device 206 may be a streaming media device, DVD or BLU-RAY device, audio/video playback device, cable box, and/or digital video recording device, to name just a few examples. In embodiments of this disclosure, media device 206 may be a voice responsive device (VRD) that responds to voice commands spoken by users 232. Display device 208 may be a monitor, television (TV), computer, smart phone, tablet, wearable (such as a watch or glasses), appliance, internet of things (IoT) device, and/or projector, to name just a few examples. In some embodiments, media device 206 can be a part of, integrated with, operatively coupled to, and/or connected to its respective display device 208. Each media device 206 may be configured to communicate with network 218 via a communication device 214. The communication device 214 may include, for example, a cable modem or satellite TV transceiver. The media device 206 may communicate with the communication device 214 over a link 216, wherein the link 216 may include wireless (such as WiFi) and/or wired connections. In various embodiments, the network 218 can include, without limitation, wired and/or wireless intranet, extranet, Internet, cellular, Bluetooth, infrared, and/or any other short range, long range, local, regional, global communications mechanism, means, approach, protocol and/or network, as well as any combination(s) thereof. Media system 204 may include a remote control 210.
The remote control 210 can be any component, part, apparatus and/or method for controlling the media device 206 and/or display device 208, such as a remote control, a tablet, laptop computer, smartphone, wearable, on-screen controls, integrated control buttons, audio controls, or any combination thereof, to name just a few examples. In an embodiment, the remote control 210 wirelessly communicates with the media device 206 and/or display device 208 using cellular, Bluetooth, infrared, etc., or any combination thereof. The remote control 210 may include a microphone 212 for receiving voice commands from users 232. Data representing such voice commands may be transmitted to the media device 206 using any wireless means, such as RF, WiFi, cellular, infrared, etc. Also or alternatively, the media device 206 may include a microphone (not shown) for receiving voice commands from users 232. The remote control 210 may also include a speaker 213 and a sensor 215, e.g., an accelerometer or a gyroscope, which are further described below. The multimedia environment 202 may include a plurality of content servers 220 (also called content providers or sources 220). Although only one content server 220 is shown in FIG. 2, in practice the multimedia environment 202 may include any number of content servers 220. Each content server 220 may be configured to communicate with network 218. Each content server 220 may store content 222 and metadata 224. Content 222 may include any combination of music, videos, movies, TV programs, multimedia, images, still pictures, text, graphics, gaming applications, advertisements, programming content, public service content, government content, local community content, software, and/or any other content or data objects in electronic form. In some embodiments, metadata 224 comprises data about content 222.
For example, metadata 224 may include associated or ancillary information indicating or related to writer, director, producer, composer, artist, actor, summary, chapters, production, history, year, trailers, alternate versions, related content, applications, and/or any other information pertaining or relating to the content 222. Metadata 224 may also or alternatively include links to any such information pertaining or relating to the content 222. Metadata 224 may also or alternatively include one or more indexes of content 222, such as but not limited to a trick mode index. The multimedia environment 202 may include one or more system servers 226. The system servers 226 may operate to support the media devices 206 from the cloud. It is noted that the structural and functional aspects of the system servers 226 may wholly or partially exist in the same or different ones of the system servers 226. The media devices 206 may exist in thousands or millions of media systems 204. Accordingly, the media devices 206 may lend themselves to crowdsourcing embodiments and, thus, the system servers 226 may include one or more crowdsource servers 228. For example, using information received from the media devices 206 in the thousands and millions of media systems 204, the crowdsource server(s) 228 may identify similarities and overlaps between closed captioning requests issued by different users 232 watching a particular movie. Based on such information, the crowdsource server(s) 228 may determine that turning closed captioning on may enhance users' viewing experience at particular portions of the movie (for example, when the soundtrack of the movie is difficult to hear), and turning closed captioning off may enhance users' viewing experience at other portions of the movie (for example, when displaying closed captioning obstructs critical visual aspects of the movie).
Accordingly, the crowdsource server(s) 228 may operate to cause closed captioning to be automatically turned on and/or off during future streamings of the movie. This crowdsourcing example is described, for example, in U.S. Patent No. 9,749,700, filed November 21, 2016 and titled "Automatic Display of Closed Captioning Information." The system servers 226 may also include an audio command processing module 230. As noted above, the remote control 210 may include the microphone 212 and the speaker 213. The microphone 212 may receive audio data from users 232 (as well as other sources, such as the display device 208). The speaker 213 may provide audible notifications (such as beeps) to the user 232. As noted above, in some embodiments, the media device 206 may be voice responsive, and the audio data may represent voice commands from the user 232 to control the media device 206 as well as other components in the media system 204, such as the display device 208. In some embodiments, the audio data received by the microphone 212 in the remote control 210 is transferred to the media device 206, which is then forwarded to the audio command processing module 230 in the system servers 226. The audio command processing module 230 may operate to process and analyze the received audio data to recognize the user 232's voice command. The audio command processing module 230 may then forward the voice command back to the media device 206 for processing. In some embodiments, the audio data may be alternatively or additionally processed and analyzed by an audio command processing module 316 in the media device 206 (see FIG. 3). The media device 206 and the system servers 226 may then cooperate to pick one of the voice commands to process (either the voice command recognized by the audio command processing module 230 in the system servers 226, or the voice command recognized by the audio command processing module 316 in the media device 206).
An example of such operation is described in U.S. Application Ser. No. 16/032,868, filed July 11, 2018 and titled "Local and Cloud Speech Recognition" (Atty. Docket No. 3634.1060001). In some embodiments, one or more of the functionalities as described above with respect to VCR 102 may be performed by the audio command processing module 230 in system servers 226 as illustrated in FIG. 2. For example, audio command processing module 230 may perform an automatic gain adjustment of the voice command 104 to better ensure that the action command 116 in the voice command 104 is accurately recognized. To achieve this, VCR 102 may analyze an amplitude of the wake word or wake command 114 in the voice command 104, and determine the nature of an automatic gain adjustment 120 to be applied to the wake command 114 so as to satisfy a minimum amplitude requirement associated with the speech to text conversion process as may be indicated by target amp 124. For example, the audio command processing module 230 may measure a difference between the amplitude of the wake command 114 and a target amplitude 124 to determine the automatic gain adjustment 120 needed. In an embodiment, target amplitude 124 may include a range of acceptable values. In some embodiments, the audio command processing module 230 may predict that the audio characteristics, such as loudness or gain, of the action command 116 portion of the voice command 104 may be at a same amplitude as the wake command 114, and as such, apply the automatic gain adjustment 120 calculated based on the wake command 114 to the action command 116 to reach the target amplitude 124. In this way, the audio command processing module 230 may proactively apply the automatic gain adjustment 120 to the action command 116, rather than waiting to analyze the action command 116 itself, which could result in extra delays. As a result, the audio command processing module 230 provides for reduced processing time for converting the voice command 104 from speech to text.
In some embodiments, the audio command processing module 230 may also use historical information or data 130 of previous voice commands to determine the automatic gain adjustment 120. For example, the audio command processing module 230 may analyze automatic gain adjustments applied to previous voice commands and predict what the automatic gain adjustment should be for a current or subsequent voice command 104 based on the automatic gain adjustment applied to previous voice commands 104. In an embodiment, this predicted gain may be applied to subsequent wake commands 114 and/or action commands 116. Furthermore, in some embodiments, the audio command processing module 230 may analyze historical data 130 to remove any anomalies. For example, the audio command processing module 230 may compare the amplitudes of the wake commands 114 in the historical data 130 and identify instances where the wake amplitude 122 of a current wake command 114 is higher or lower in comparison to other wake amps 122 in historical data 130. For example, in some embodiments, the audio command processing module 230 may remove or separately store the anomalies using a median filter to smooth out the historical information or historical data 130. As an example, the audio command processing module 230 may determine that an amplitude of the wake words may generally fall within a range, e.g., -18 dB to -22 dB, and the audio command processing module 230 may determine that any wake words with wake amplitudes 122 falling outside that range are anomalies and should be discarded from the historical data 130. By removing any outliers from historical data 130, the audio command processing module 230 may prevent applying too much or too little gain to the present or a subsequent action command 116 in a voice command 104 when using historical data 130.
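By way of a non-limiting illustration, the anomaly removal and median-filter smoothing may be sketched as follows. The window size of 3 is an assumption of this sketch; the range check mirrors the -22 dB to -18 dB example above.

```python
import statistics

# Hypothetical sketch: wake amplitudes outside an accepted range are
# treated as anomalies and discarded; a small median filter then smooths
# the remaining history. Range bounds mirror the example in the text;
# the window size is assumed.

def discard_anomalies(wake_amps_db, low=-22.0, high=-18.0):
    """Keep only wake amplitudes inside the accepted range."""
    return [a for a in wake_amps_db if low <= a <= high]

def median_filter(values, window=3):
    """Replace each value with the median of its local neighborhood."""
    out = []
    half = window // 2
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(statistics.median(values[lo:hi]))
    return out
```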
The audio command processing module 230 may also apply the automatic gain adjustment 120 in response to detecting that the voice command is a continuous stream of speech, as discussed above. In this way, the audio command processing module 230 may ensure that the automatic gain adjustment is being applied to actual voice commands spoken by users 232, rather than other ambient noise. In an embodiment, the audio command processing module 230 may include a feedback loop to improve the automatic gain adjustment 120. For example, the audio command processing module 230 may analyze the amplitudes of both the wake word and the command in voice commands, and determine that the amplitude of the command is historically lower (or higher) than the amplitude of the wake word. Using this information, the audio command processing module 230 may determine a difference in the amplitudes of both the wake word or wake command 114 and the action command 116, and scale the automatic gain adjustment 120 or compute a correction 132 for the action command 116 so as to compensate for the difference in the amplitudes of the wake word and the action command 116. The feedback loop may also include an analysis of the amplitude of the voice command after the automatic gain adjustment to determine an accuracy of the automatic gain adjustment. For example, the analysis may include comparing a post-adjustment amplitude of the voice command to the target amplitude. Using this information, the audio command processing module 230 may determine a correction factor 132 for adjusting the automatic gain adjustment applied to the action command. For example, the audio command processing module 230 may determine a difference between the post-adjustment amplitude and the target amplitude and compare this difference to a threshold value. When the difference exceeds the threshold value, the correction factor 132 may indicate that the automatic gain adjustment should be reduced (or increased) accordingly.
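By way of a non-limiting illustration, the feedback comparison of post-adjustment amplitude to target may be sketched as follows. The 2 dB threshold and the sign convention (positive correction means future gain should be increased) are assumptions of this sketch.

```python
# Hypothetical sketch of the feedback loop: after a gain adjustment is
# applied, the post-adjustment amplitude is compared to the target; when
# the difference exceeds a threshold, a dB correction factor is produced
# for future adjustments. The 2 dB threshold is assumed.

THRESHOLD_DB = 2.0

def correction_factor(post_adjust_amp_db, target_amp_db, threshold=THRESHOLD_DB):
    """Return a dB correction for future gain adjustments, or 0.0 when the
    post-adjustment amplitude is already close enough to the target."""
    diff = target_amp_db - post_adjust_amp_db
    return diff if abs(diff) > threshold else 0.0
```

For example, landing at -31 dB against a -25 dB target indicates future gain should be raised by 6 dB, while landing at -24 dB is close enough to need no correction.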
As discussed above, the audio command processing module 230 may be configured to differentiate between different users 232 using one or more speaker detection processes. Based on which user 232 is interacting with the remote control 210, the audio command processing module 230 may use the historical information of that particular user to determine the gain to be applied to the command portion of the voice command. In some embodiments, when the audio command processing module 230 does not recognize a particular user 232, the audio command processing module 230 may apply a gain adjustment to the command based on any of the embodiments discussed herein. In some embodiments, the audio command processing module 230 may analyze a given voice command to determine whether it is an actual voice command issued by a user 232. For example, the audio command processing module 230 may determine whether the voice command is an actual voice command based on whether content of the voice command matches an intent from among a plurality of intents that may be executed by the media device 206. In the event that the content of the voice command does not match an intent from among the plurality of intents, the audio command processing module 230 may remove the voice command from the historical information. In this way, voice commands that may have been received from, for example, the display device 208, an inadvertent activation of the remote control 210, or the like do not impact the automatic gain processes described herein. FIG. 3 illustrates a block diagram of an example media device 306, according to some embodiments. Media device 306 may include a streaming module 302, processing module 304, storage/buffers 308, and user interface module 307. As described above, the user interface module 307 may include the audio command processing module 316. The media device 306 may also include one or more audio decoders 312 and one or more video decoders 314. 
Each audio decoder 312 may be configured to decode audio of one or more audio formats, such as but not limited to AAC, HE-AAC, AC3 (Dolby Digital), EAC3 (Dolby Digital Plus), WMA, WAV, PCM, MP3, OGG, GSM, FLAC, AU, AIFF, and/or VOX, to name just some examples. Similarly, each video decoder 314 may be configured to decode video of one or more video formats, such as but not limited to MP4 (mp4, m4a, m4v, f4v, f4a, m4b, m4r, f4b, mov), 3GP (3gp, 3gp2, 3g2, 3gpp, 3gpp2), OGG (ogg, oga, ogv, ogx), WMV (wmv, wma, asf), WEBM, FLV, AVI, QuickTime, HDV, MXF (OP1a, OP-Atom), MPEG-TS, MPEG-2 PS, MPEG-2 TS, WAV, Broadcast WAV, LXF, GXF, and/or VOB, to name just some examples. Each video decoder 314 may include one or more video codecs, such as but not limited to H.263, H.264, HEVC, MPEG1, MPEG2, MPEG-TS, MPEG-4, Theora, 3GP, DV, DVCPRO, DVCProHD, IMX, XDCAM HD, XDCAM HD422, and/or XDCAM EX, to name just some examples. Now referring to both FIGS. 1 and 2, in some embodiments, the user 232 may interact with the media device 206 via, for example, the remote control 210. For example, the user 232 may use the remote control 210 to interact with the user interface module 307 of the media device 206 to select content, such as a movie, TV show, music, book, application, game, etc. The streaming module 302 of the media device 206 may request the selected content from the content server(s) 220 over the network 218. The content server(s) 220 may transmit the requested content to the streaming module 302. The media device 206 may transmit the received content to the display device 208 for playback to the user 232. In streaming embodiments, the streaming module 302 may transmit the content to the display device 208 in real time or near real time as it receives such content from the content server(s) 220. In non-streaming embodiments, the media device 206 may store the content received from content server(s) 220 in storage/buffers 308 for later playback on display device 208.
FIG. 4 is a flowchart 400 illustrating example operations for providing a voice command recognition system (VCR), according to some embodiments. Method 400 can be performed by processing logic that can comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions executing on a processing device), or a combination thereof. It is to be appreciated that not all steps may be needed to perform the disclosure provided herein. Further, some of the steps may be performed simultaneously, or in a different order than shown in FIG. 4, as will be understood by a person of ordinary skill in the art. Without limiting method 400, method 400 is described with reference to elements of FIG. 1. At 410, a voice command directed to controlling a device is received, the voice command comprising a wake command and an action command. For example, VCR 102 may receive voice command 104, including both wake command 114 and action command 116, via mic 112. At 420, an amplitude of the wake command is determined. For example, gain processor 118 may measure, calculate, or otherwise identify a wake amplitude 122 corresponding to the wake command 114. In an embodiment, the wake amp 122 may be identified prior to receiving the action command 116. At 430, a gain adjustment for the voice command may be calculated based on a comparison of the amplitude of the wake command to a target amplitude. For example, gain processor 118 may calculate gain adjustment 120 based on comparing wake amp 122 to target amp 124. In an embodiment, the gain adjustment 120 may be a mathematical function (addition, subtraction, or multiplication) of a number to bring wake amp 122 closer to or within a range of values corresponding to target amp 124. At 440, an amplitude of the action command is adjusted based on the calculated gain adjustment for the voice command.
For example, gain processor 118 may apply gain adjustment 120 to action amp 126 or action command 116 and generate gained action command 128. In another embodiment, gain processor 118 may also apply a correction 132, if available. At 450, a device command for controlling the device based on the action command comprising the adjusted amplitude is identified. For example, VCR 102 may identify a device command 108 corresponding to the gained action command 128. In an embodiment, gained action command 128 may be converted to text, and the text may be used to identify the device command 108. At 460, the device command is provided to the device. For example, the device command 108 may be "turn on lights" which may be provided to VRD 106. VRD 106 may be communicatively coupled to living room lights. VRD 106 may then execute the device command 108 and communicate with the living room lights to turn them on. FIG. 5 is a flowchart 500 illustrating example operations for providing a voice command recognition system (VCR) with beep suppression, according to some embodiments. Method 500 can be performed by processing logic that can comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions executing on a processing device), or a combination thereof. It is to be appreciated that not all steps may be needed to perform the disclosure provided herein. Further, some of the steps may be performed simultaneously, or in a different order than shown in FIG. 5, as will be understood by a person of ordinary skill in the art. Without limiting method 500, method 500 is described with reference to elements of FIG. 1. At 510, a wake command for operating a voice responsive device is detected, the voice responsive device being configured to provide an audible confirmation responsive to the detection of the wake command. For example, VCR 102 may receive audio through a mic 112.
From the audio, VCR 102 may detect a wake command 114 associated with operating VRD 106. The wake command 114 may be one or more words that signal that an operational or action command 116 is to follow for VRD 106. At 520, audio received from the microphone is monitored for one or more sounds. For example, upon detecting wake command 114, VCR 102 may continue monitoring mic 112 for any immediately subsequent sounds, energy bursts, or amplitudes. At 530, one or more sounds received subsequent to the detection of the wake command are determined to indicate a continuous stream of speech. For example, audio receiver 134 may determine that there is no pause, or gap, in speech from user 110 after speaking wake command 114. In an embodiment, audio receiver 134 may monitor mic 112 for a threshold period of time (e.g., 1 second or ½ second) to determine if additional speech is received from user 110 during the threshold period. If additional speech is detected, e.g., based on the amplitude of the received sounds, then audio receiver 134 may determine that a continuous stream of speech is being provided. At 540, the audible confirmation is suppressed responsive to determination of the continuous stream of speech. For example, audio receiver 134 may turn down or turn off the volume on beep 136 which VCR 102 or VRD 106 may be configured to audibly output. In another embodiment, audio receiver 134 may not transmit the beep 136 signal for output upon detecting continuous speech. At 550, an action command issued to the voice responsive device after the wake command is detected from the one or more sounds. For example, VCR 102 may continue to receive and process speech from user 110, detecting action commands 116 for operating VRD 106, without the interference of beep 136. In an embodiment, the audible beep 136 may be replaced with a visual beep 136, such as a light, or a text message, or pop-up on a screen indicating the wake command 114 has been detected.
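By way of a non-limiting illustration, the suppression decision of method 500 may be sketched as follows. The 0.5 second monitoring threshold mirrors the example above; the boolean input (standing in for the continuous-speech determination of step 530) and the returned flags are assumptions of this sketch.

```python
# Hypothetical sketch of steps 530-540: when speech is detected within
# the monitoring threshold after the wake command (continuous speech),
# the audible beep is suppressed and a non-audio cue is used instead;
# otherwise the beep plays as configured.

PAUSE_THRESHOLD_S = 0.5  # assumed window to listen for additional speech

def handle_wake_command(speech_detected_within_threshold):
    """Decide whether to play the beep or substitute a visual cue."""
    if speech_detected_within_threshold:
        # Continuous speech: suppress the beep so the microphone does not
        # pick it up and misread it as part of the action command.
        return {"beep": False, "visual_notification": True}
    return {"beep": True, "visual_notification": False}
```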
In another embodiment, the beep 136 may be audibly output upon a detection or completion of receiving action command 116, in parallel with or prior to executing the action command 116. FIG. 6 Various embodiments may be implemented, for example, using one or more well-known computer systems, such as computer system 600 shown in FIG. 6. For example, the VRD 106 may be implemented using combinations or sub-combinations of computer system 600. Also or alternatively, one or more computer systems 600 may be used, for example, to implement any of the embodiments discussed herein, as well as combinations and sub-combinations thereof. Computer system 600 may include one or more processors (also called central processing units, or CPUs), such as a processor 604. Processor 604 may be connected to a communication infrastructure or bus 606. Computer system 600 may also include user input/output device(s) 603, such as monitors, keyboards, pointing devices, etc., which may communicate with communication infrastructure 606 through user input/output interface(s) 602. One or more of processors 604 may be a graphics processing unit (GPU). In an embodiment, a GPU may be a processor that is a specialized electronic circuit designed to process mathematically intensive applications. The GPU may have a parallel structure that is efficient for parallel processing of large blocks of data, such as mathematically intensive data common to computer graphics applications, images, videos, etc. Computer system 600 may also include a main or primary memory 608, such as random access memory (RAM). Main memory 608 may include one or more levels of cache. Main memory 608 may have stored therein control logic (i.e., computer software) and/or data. Computer system 600 may also include one or more secondary storage devices or memory 610. Secondary memory 610 may include, for example, a hard disk drive 612 and/or a removable storage device or drive 614.
Removable storage drive 614 may be a floppy disk drive, a magnetic tape drive, a compact disk drive, an optical storage device, tape backup device, and/or any other storage device/drive. Removable storage drive 614 may interact with a removable storage unit 618. Removable storage unit 618 may include a computer usable or readable storage device having stored thereon computer software (control logic) and/or data. Removable storage unit 618 may be a floppy disk, magnetic tape, compact disk, DVD, optical storage disk, and/or any other computer data storage device. Removable storage drive 614 may read from and/or write to removable storage unit 618. Secondary memory 610 may include other means, devices, components, instrumentalities or other approaches for allowing computer programs and/or other instructions and/or data to be accessed by computer system 600. Such means, devices, components, instrumentalities or other approaches may include, for example, a removable storage unit 622 and an interface 620. Examples of the removable storage unit 622 and the interface 620 may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM or PROM) and associated socket, a memory stick and USB or other port, a memory card and associated memory card slot, and/or any other removable storage unit and associated interface. Computer system 600 may further include a communication or network interface 624. Communication interface 624 may enable computer system 600 to communicate and interact with any combination of external devices, external networks, external entities, etc. (individually and collectively referenced by reference number 628).
For example, communication interface 624 may allow computer system 600 to communicate with external or remote devices 628 over communications path 626, which may be wired and/or wireless (or a combination thereof), and which may include any combination of LANs, WANs, the Internet, etc. Control logic and/or data may be transmitted to and from computer system 600 via communication path 626. Computer system 600 may also be any of a personal digital assistant (PDA), desktop workstation, laptop or notebook computer, netbook, tablet, smart phone, smart watch or other wearable, appliance, part of the Internet-of-Things, and/or embedded system, to name a few non-limiting examples, or any combination thereof. Computer system 600 may be a client or server, accessing or hosting any applications and/or data through any delivery paradigm, including but not limited to remote or distributed cloud computing solutions; local or on-premises software ("on-premise" cloud-based solutions); "as a service" models (e.g., content as a service (CaaS), digital content as a service (DCaaS), software as a service (SaaS), managed software as a service (MSaaS), platform as a service (PaaS), desktop as a service (DaaS), framework as a service (FaaS), backend as a service (BaaS), mobile backend as a service (MBaaS), infrastructure as a service (IaaS), etc.); and/or a hybrid model including any combination of the foregoing examples or other services or delivery paradigms. Any applicable data structures, file formats, and schemas in computer system 600 may be derived from standards including but not limited to JavaScript Object Notation (JSON), Extensible Markup Language (XML), Yet Another Markup Language (YAML), Extensible Hypertext Markup Language (XHTML), Wireless Markup Language (WML), MessagePack, XML User Interface Language (XUL), or any other functionally similar representations alone or in combination. 
Alternatively, proprietary data structures, formats or schemas may be used, either exclusively or in combination with known or open standards. In some embodiments, a tangible, non-transitory apparatus or article of manufacture comprising a tangible, non-transitory computer useable or readable medium having control logic (software) stored thereon may also be referred to herein as a computer program product or program storage device. This includes, but is not limited to, computer system 600, main memory 608, secondary memory 610, and removable storage units 618 and 622, as well as tangible articles of manufacture embodying any combination of the foregoing. Such control logic, when executed by one or more data processing devices (such as computer system 600 or processor(s) 604), may cause such data processing devices to operate as described herein. Based on the teachings contained in this disclosure, it will be apparent to persons skilled in the relevant art(s) how to make and use embodiments of this disclosure using data processing devices, computer systems and/or computer architectures other than that shown in FIG. 6. In particular, embodiments can operate with software, hardware, and/or operating system implementations other than those described herein. It is to be appreciated that the Detailed Description section, and not any other section, is intended to be used to interpret the claims. Other sections can set forth one or more but not all exemplary embodiments as contemplated by the inventor(s), and thus, are not intended to limit this disclosure or the appended claims in any way. While this disclosure describes exemplary embodiments for exemplary fields and applications, it should be understood that the disclosure is not limited thereto. Other embodiments and modifications thereto are possible, and are within the scope and spirit of this disclosure.
For example, and without limiting the generality of this paragraph, embodiments are not limited to the software, hardware, firmware, and/or entities illustrated in the figures and/or described herein. Further, embodiments (whether or not explicitly described herein) have significant utility to fields and applications beyond the examples described herein. Embodiments have been described herein with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined as long as the specified functions and relationships (or equivalents thereof) are appropriately performed. Also, alternative embodiments can perform functional blocks, steps, operations, methods, etc. using orderings different than those described herein. References herein to "one embodiment," "an embodiment," "an example embodiment," or similar phrases, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of persons skilled in the relevant art(s) to incorporate such feature, structure, or characteristic into other embodiments whether or not explicitly mentioned or described herein. Additionally, some embodiments can be described using the expression "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other. 
For example, some embodiments can be described using the terms "connected" and/or "coupled" to indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, can also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. The breadth and scope of this disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Other aspects and/or embodiments of the present invention are set out in the following numbered clauses:

1. A computer implemented method, comprising: receiving a voice command directed to controlling a device, the voice command comprising a wake command and an action command; determining an amplitude of the wake command; calculating a gain adjustment for the voice command based on a comparison of the amplitude of the wake command to a target amplitude; adjusting an amplitude of the action command based on the calculated gain adjustment for the voice command based on the comparison of the amplitude of the wake command to the target amplitude; identifying a device command for controlling the device based on the action command comprising the adjusted amplitude; and providing the device command to the device.

2. The method of clause 1, further comprising: determining a previous gain adjustment based on historical information identifying calculated gain adjustments for a plurality of previous voice commands; and applying the previous gain adjustment to the wake command.

3. The method of clause 2, wherein the previous gain adjustment is a median of the calculated gain adjustments for the plurality of previous voice commands.

4. The method of clause 2, further comprising: determining that the calculated gain adjustment for the voice command is an anomaly, wherein the calculated gain adjustment for the voice command is excluded from the historical information when processing a subsequent voice command.

5. The method of clause 1, further comprising: determining the amplitude of the action command prior to the adjusting; and generating a feedback loop based on a difference in the amplitude of the wake command and the amplitude of the action command prior to the adjusting and the amplitude of the action command relative to the target amplitude.

6. The method of clause 5, further comprising: calculating a correction based on the difference, wherein the correction is applied to an amplitude of a subsequent action command of a subsequent voice command.

7. The method of clause 6, wherein calculating the correction comprises: calculating a first correction for a first user; and calculating a second correction for a second user different from the first user.

8. The method of clause 1, further comprising: detecting the wake command prior to receiving the action command, wherein the device is configured to output an audible beep upon detecting the wake command; determining that the voice command comprises a continuous stream of speech after detecting the wake command; suppressing the audible beep based on the continuous stream of speech determination; and detecting the action command from the voice command.

9. A system, comprising: a memory; and at least one processor coupled to the memory and configured to perform operations comprising: receiving a voice command directed to controlling a device, the voice command comprising a wake command and an action command; determining an amplitude of the wake command; calculating a gain adjustment for the voice command based on a comparison of the amplitude of the wake command to a target amplitude; adjusting an amplitude of the action command based on the calculated gain adjustment for the voice command based on the comparison of the amplitude of the wake command to the target amplitude; identifying a device command for controlling the device based on the action command comprising the adjusted amplitude; and providing the device command to the device.

10. The system of clause 9, wherein the operations further comprise: determining a previous gain adjustment based on historical information identifying calculated gain adjustments for a plurality of previous voice commands; and applying the previous gain adjustment to the wake command.

11. The system of clause 10, wherein the previous gain adjustment is a median of the calculated gain adjustments for the plurality of previous voice commands.

12. The system of clause 10, wherein the operations further comprise: determining that the calculated gain adjustment for the voice command is an anomaly, wherein the calculated gain adjustment for the voice command is excluded from the historical information when processing a subsequent voice command.

13. The system of clause 9, wherein the operations further comprise: determining the amplitude of the action command prior to the adjusting; and generating a feedback loop based on a difference in the amplitude of the wake command and the amplitude of the action command prior to the adjusting and the amplitude of the action command relative to the target amplitude.

14. The system of clause 13, wherein the operations further comprise: calculating a correction based on the difference, wherein the correction is applied to an amplitude of a subsequent action command of a subsequent voice command.

15. The system of clause 14, wherein calculating the correction comprises: calculating a first correction for a first user; and calculating a second correction for a second user different from the first user.

16. The system of clause 9, wherein the operations further comprise: detecting the wake command prior to receiving the action command, wherein the device is configured to output an audible beep upon detecting the wake command; determining that the voice command comprises a continuous stream of speech after detecting the wake command; suppressing the audible beep based on the continuous stream of speech determination; and detecting the action command from the voice command.

17. A non-transitory computer-readable medium having instructions stored thereon that, when executed by at least one computing device, cause the at least one computing device to perform operations comprising: receiving a voice command directed to controlling a device, the voice command comprising a wake command and an action command; determining an amplitude of the wake command; calculating a gain adjustment for the voice command based on a comparison of the amplitude of the wake command to a target amplitude; adjusting an amplitude of the action command based on the calculated gain adjustment for the voice command based on the comparison of the amplitude of the wake command to the target amplitude; identifying a device command for controlling the device based on the action command comprising the adjusted amplitude; and providing the device command to the device.

18. The non-transitory computer-readable medium of clause 17, wherein the operations further comprise: determining a previous gain adjustment based on historical information identifying calculated gain adjustments for a plurality of previous voice commands; and applying the previous gain adjustment to the wake command.

19. The non-transitory computer-readable medium of clause 18, wherein the previous gain adjustment is a median of the calculated gain adjustments for the plurality of previous voice commands.

20. The non-transitory computer-readable medium of clause 18, wherein the operations further comprise: determining that the calculated gain adjustment for the voice command is an anomaly, wherein the calculated gain adjustment for the voice command is excluded from the historical information when processing a subsequent voice command.
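The clauses covering historical gain information describe keeping a record of calculated gain adjustments, seeding the next voice command with their median, and excluding anomalous values from the record. A minimal sketch of that bookkeeping, assuming a simple "outside 3x the median" anomaly rule (the disclosure does not specify one):

```python
import statistics

class GainHistory:
    """Track gain adjustments across voice commands (illustrative only)."""
    def __init__(self):
        self.gains = []

    def previous_gain(self):
        """Median of stored gain adjustments, applied to the next wake command."""
        return statistics.median(self.gains) if self.gains else 1.0

    def record(self, gain):
        """Store a newly calculated gain unless it looks anomalous."""
        med = self.previous_gain()
        if self.gains and not (med / 3 <= gain <= med * 3):
            return False  # anomaly: excluded from the historical information
        self.gains.append(gain)
        return True

h = GainHistory()
for g in [1.2, 0.9, 1.1]:
    h.record(g)
h.record(50.0)  # an outlier (e.g., a shout) is excluded from the history
```

Seeding the wake command with a median rather than a mean keeps a single anomalous command from skewing the starting gain, which matches the intent of the median and anomaly clauses.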
Emotional intelligence (EI) is most often defined as the ability to perceive, use, understand, manage, and handle emotions. People with high emotional intelligence can recognize their own emotions and those of others, use emotional information to guide thinking and behavior, discern between different feelings and label them appropriately, and adjust emotions to adapt to environments. Have you ever explored your feelings? How often have you stepped into others' shoes and experienced their feelings? How well do you understand what you feel and why you feel it? Emotional intelligence is our mind's ability to perceive, manage, and express feelings effectively in real life. Like I.Q., emotional intelligence differs from one person to another. While some people are gifted from birth in the way they understand and handle people, others may need help to develop their emotional abilities. Becoming fluent in the language of emotions helps us sustain our relationships both personally and professionally. The term "Emotional Intelligence" was first published in a paper by Michael Beldoch in 1964, but became popular after Daniel Goleman's 1995 book "Emotional Intelligence: Why It Can Matter More Than IQ." An understanding, friendly person is more emotionally aware than an unempathetic, demotivated one.
The studies of Daniel Goleman describe an emotionally intelligent individual as having:
- The ability to recognize one's own emotions
- The ability to relate to others' feelings
- The ability to actively listen to others
- The ability to actively participate in social interaction and understand the nonverbal cues of behavior
- The ability to control one's thoughts and feelings
- The ability to manage emotions effectively and express them in a socially acceptable way
- The ability to receive criticism positively and benefit from it
- The power to forgive, forget, and move on rationally

How many of the above qualities can you relate to yourself? In this article, we will try to uncover the practical implications of emotional intelligence and discuss how to use it for wholesome and healthy living. These science-based exercises will not only boost your ability to understand and work with your emotions but will also give you the tools to foster the emotional intelligence of your clients, students, or employees. This Article Includes: Can E.I. Be Learned? "Our feelings have a mind of their own, one which can hold views quite independently of our rational mind." (Goleman, 1995). Those who are not naturally gifted with E.I. can do just as well as others by learning it. The only thing required is the motivation to learn and the intention to apply it in real life. Emotional intelligence can be acquired and improved at any point in life (Goleman, 2014). Learning emotional intelligence skills requires a resourceful environment where we can visualize the areas, or aspects, of E.I. to develop. 4 Stages of Learning Emotional Intelligence 1. Insight Any learning starts when we know that there is something in us that needs to be changed or improved, and we are ready to make those changes happen.
Emotional intelligence has five components:
- Self-awareness: the knowledge of what we feel and why we feel it
- Self-regulation: the ability to express our feelings in the appropriate way
- Motivation: the internal drive to change the way we feel and express ourselves
- Empathy: the ability to relate to others' feelings and see the world from their point of view
- Social skills: the power to communicate effectively and build strong connections at home or in the workplace

Learning starts with gaining insight into which aspect of E.I. we should work on. Some of us might have solid social skills but lack self-regulation, while others might be high on motivation but poor in self-regulation. The learning process begins with knowing which aspect of E.I. to develop. 2. Evaluation The next step is trying to measure where we stand on each of the E.I. aspects. E.I. tests are widely available online, or if you are seeking training in a professional setting, materials will be provided to you for evaluating your emotional intelligence. Here are a few assessments and emotional intelligence tests that we can take to evaluate our E.I. More on E.I. evaluations and resources in the upcoming sections. 3. Training Evaluation opens us to a variety of options to choose from. Depending on what part of emotional intelligence we need to work on, we can decide what kind of training would suit us best. A low score in motivational and social communication aspects can be improved by organizational training, courses, and workshops for employees who are keen to develop their interpersonal skills. E.I.
training improves:
- Communication skills and the ability to understand nonverbal cues of interaction (for instance, body language, facial expression, and tone of voice)
- Group performance, especially at the office, and maintaining high team spirit
- Organizational skills and managing schedules more efficiently
- Work motivation and the ability to accept feedback and criticism positively
- Leadership skills

4. Four Lenses.
https://anaheim.leadershipsuccessnow.com/page/emotional-intelligence-training-live-virtual-four-lenses-escondido-ca-ao5DMxfT76L9
"It's great. Teaching an adult ESL class means finding appropriate materials for older women. I can use any or most of these lessons for the beginning-intermediate women after I check the topic of the worksheet." Judy G., Teacher, Ewa Beach, HI

We found 447 reviewed resources for Harmony:

- Focus on Harmony (9th-12th): Harmony is the focus of this band lesson. Upper graders play the Star Spangled Banner while focusing on harmony, chords, and musical voice. This lesson includes several suggested cross-curricular activities.
- Independent Harmonies (5th-12th): Independent harmonies, homophonic music, intervals, and melody are all part of music theory and practice. Prepare your budding musicians for the big time with these activities focused on playing with accompaniment. This lesson is...
- Parallel Harmonies (4th-8th): What is a parallel harmony? You and your class can explore musical terminology through song. You define what a parallel harmony is, the class identifies parallel harmony in music they hear, and then you all sing some examples together...
- Adding Tonic and Dominant Harmony to an Original Mi-Re-Do Melody (3rd-5th): Students add tonic and harmony to mi-re-do compositions. They review the steps in the Solfege method and create their own melody with these elements. They perform their song for the class.
- In Perfect Harmony: Teaching the World to Sing (1st): First graders listen to music as the impetus to learn about the concept of Japanese harmony as it is understood in Japanese culture. They use the New Seekers' song "I'd Like to Teach the World to Sing" to compare America and Japan...
- The Rhythm of Life - Episode 3 - Harmony (11th-Higher Ed): Students complete a unit on musical harmony. They listen to examples of three-part harmonies, watch a video, complete a data sheet, prepare a report on the history of harmony, and complete a multimedia report on music for the movies.
- Fifth Grade Music: Lesson 3 - Unison and Harmony (5th): Fifth graders sing the syllables and pitches of the C major scale, and sing Do-Re-Mi in two-part harmony. They discuss the definitions of round and canon, and contrast gamelan music of the Spice Islands with reggae from Jamaica.
- The Dove That Flew Away (5th): Prior to playing their Tutti instruments, fifth graders practice harmony, pitch, and rhythm. They sing, clap, and echo the musical pattern they are going to play, focusing on harmony and pitch. They then practice the same song on their...
- Que llueva or It's Raining (3rd): Get out your Orff instruments and the text Spotlight on Music; it's time to instruct the class in playing and performing music. They'll practice pitch, rhythm, and harmony by clapping and singing, then apply what they've learned as they...
- Spring Has Come (3rd): Using their Orff instruments, your third graders can learn to play as a music ensemble. They work together and in small groups to practice melody, harmony, and rhythm for the song "Spring Has Come." This lesson is intended for use with...
- I'll Rise When the Rooster Crows (4th): You've found some instructional ideas on how to teach music using Orff instruments. The class practices rhythm, melody, and harmony as they play each part of the song. For use with the Macmillan/McGraw-Hill text Spotlight on Music.
- When I Was Young (4th): The woodblock, xylophone, and metallophone are the focus of this Orff ensemble music arrangement. Kids practice playing their instruments, keeping rhythm, pitch, and harmony as the focus of the lesson plan. They echo the teacher by...
- Hill an' Gully (6th): Sixth graders will practice pitch, harmony, form, rhythm, and syncopation by singing, clapping, and beating music patterns on their tutti instruments. They do this and then practice the song "Hill an' Gully" found in the text Spotlight...
- The Carnival Is Coming (5th): Your musicians need to practice before they can play like masters. They'll practice by singing and clapping out the musical patterns, rhythms, and harmony for the song "The Carnival Is Coming." They will then play this song on their...
- Critical and Creative Expression (8th): Budding musicians discuss African culture and the musical term polyphony. The lesson is broken into several tables that help identify pattern, rhythm, harmony, and melody. A question sheet is also included.
- Harmony with Drones (6th-9th): A drone is the simplest of all harmonies; you'll play or sing an example and your class will follow along. They listen to, and then play or sing, drones. Two pieces containing drones are included for you to print, pass out, and play.
- Simple Chordal Harmony (7th-12th): Practice makes perfect, especially when playing a musical instrument. This lesson examines simple chordal harmony, melody, and playing with accompaniment. Learners listen to a series of songs that exemplify the concept, then play a...
- Symphony No. 6 (9th-12th): Students describe music in terms related to basic elements such as melody, rhythm, harmony, dynamics, timbre, form, and style. Then they identify who Tchaikovsky was, as well as his famous works. Students also identify the main theme on...
- Jewish Folk Song: Ya Ba Bom (9th): Singing is a wonderful way to express an idea of any kind. This lesson is written expressly for use in directing a high school chorus. They work on using four-part harmony, expression, and melodic intervals while singing a Jewish folk...
- Voice-Leading for Roots (9th-12th): Students discover how to write four-part harmony between two chords whose roots are a perfect fourth (or the inverted perfect fifth) apart. They identify the common tone between chords and analyze the bass movement of two chords.
- Hands Out for World Peace (1st-6th): Students discuss the meanings of peace and harmony and research how international organizations work to promote peaceful relations between nations. After listing methods to promote peace, students trace their hands in a pattern of...
- A Shape-Note Singing Lesson (3rd-8th): Students discover the shape-note method of singing. In this reading and notating music instructional activity, students learn the four shapes of the shape-note method and the tradition of Sacred Harp singers. Students sing the shapes of...
- Sing It Well! (4th-5th): Learners research world culture by collaborating on a performance with their class. In this harmony lesson, students practice using singing techniques to accompany the voices of their classmates and create melodies from a list of...
- Elements of Music (4th-6th): Young scholars identify three of the essential elements of music: rhythm, melody, and harmony. They discover a simple song which will illustrate these three elements separately and bring them together in a final form. They analyze and...
https://www.lessonplanet.com/search?keywords=Harmony&term_id=8319&type_ids%5B%5D=357917
How to Pass a Written Exam Written exams make up the majority of your exams in college, which in turn make up the majority of your grade point average. Your performance on written exams will be directly reflected in your grades. Basically, they're a pretty big deal. How to Improve Your Performance on a Written Exam There are two skills you must master in order to improve your performance on a written exam: Time management Deduction Both are essential skills that will not only give you a better chance of doing well on an exam, but will also improve your academic performance overall. Time Management When taking a written exam, first allocate enough time for you to complete all the questions. Briefly review the test to determine the number of questions and how long each of them should take. Reviewing the test in advance gives you the option to determine which questions you know well and which questions will require more effort. There are three categories into which you can divide the questions during an exam; even though this triage takes time, the end result is usually worth it. The three categories are: - Questions you know well (easy questions) - Questions you partially know (moderately hard) - Questions you know nothing about (hard questions) Always answer the questions in order of difficulty, not necessarily the order that they are in on the test. Avoid leaving questions blank. An incomplete or incorrect answer will always be better than no answer at all. Deduction The power of deduction is especially useful for moderate or difficult questions. Most people refer to this as an "educated guess," because you use your education and contextual clues to determine the most likely answer. Deduction is a powerful skill that can be practiced. Once you've answered all the easy and moderate questions, you can potentially use those answers to deduce the answers to the more difficult questions.
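The time-management advice above, triage questions by difficulty and answer the easiest first, can be sketched as a small planner. The per-category weights are an assumption for illustration, not part of the article:

```python
def plan_exam(questions, total_minutes):
    """questions: list of (name, difficulty) with difficulty in
    {"easy", "moderate", "hard"}. Returns an answering order, easiest
    first, with a rough per-question time budget in minutes."""
    weights = {"easy": 1, "moderate": 2, "hard": 3}  # assumed weighting
    ordered = sorted(questions, key=lambda q: weights[q[1]])
    total_weight = sum(weights[d] for _, d in questions)
    return [(name, round(total_minutes * weights[d] / total_weight, 1))
            for name, d in ordered]

# A 60-minute exam: the easy question is answered first but gets the
# smallest share of the budget; the hard one comes last with the most time.
plan = plan_exam([("Q1", "hard"), ("Q2", "easy"), ("Q3", "moderate")], 60)
```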
https://thatcollegekid.com/how-to-pass-a-written-exam/
On March 17, scientists reported finding the earliest echoes of the Big Bang. The long-sought evidence supports the idea that the universe inflated in a flash. A scientific theory, called inflation, held that during the first trillionth of a trillionth of a trillionth of a second after the Big Bang, the universe grew outward faster than the speed of light. It soon stretched out farther than any telescope can see. Cosmologists are astronomers who study the early universe. They first introduced the theory of inflation more than 30 years ago. Since then, it’s become an important part of the explanation for how the universe began. Inflation helps answer some questions raised by the Big Bang. One is why the universe looks the same in every direction. Another is why it isn’t clumpier in some directions. (Inflation would have smoothed everything out. It’s just like what happens when blowing up a party balloon.) However, scientists couldn’t be sure inflation happened. They lacked solid evidence. The new discovery provides that evidence. It identified the lingering effects of inflation on the oldest light in the universe. “We now have a much stronger belief that we understand the early universe than we did yesterday,” Sean Carroll told Science News on the day of the news announcement. An astrophysicist at the California Institute of Technology in Pasadena, he studies the role of energy and other physical phenomena affecting stars and other objects in space. He did not work on the new study. But dozens of other scientists did. They couldn’t travel back in time to the Big Bang; it was 13.8 billion years ago. But they also didn’t have to. According to that inflation theory, the Big Bang sent waves rippling through the stuff of space. Known as “gravitational waves,” they would alternately squeeze and stretch the fabric of space. So their passage should have left a mark on the farthest reaches of the known universe. Scientists had sought those telltale marks. 
For their search, the scientists used a telescope at the South Pole. It's called BICEP2 (short for Background Imaging of Cosmic Extragalactic Polarization). By gauging the temperature of deep space, this telescope works almost like a giant thermometer. Scientists built it deep in Antarctica. The region's cold, dry and stable air is perfect for peering deep into space, and back into time. For 50 years, scientists have known that energy in the form of microwave radiation lingered long after the Big Bang. BICEP2 studies this type of light. The telescope records the behavior of photons, the particles that transport radiation such as this microwave signal. John Kovac led the new search for the Big Bang's echoes. An astronomer, he works at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass. His large team has just published a series of papers online. These report finding twists and turns in the patterns of the microwave photons. Kovac's group concludes that gravitational waves are the only plausible explanation for those patterns. These scientists will continue to go over their data. They want to make sure their results didn't arise from a problem in the telescope or some error in their analysis. And at least eight other telescopes will continue to look for similar patterns in that early light, called the cosmic microwave background radiation. For now, many scientists are thrilled by the news. Among them is Scott Dodelson, at the Fermi National Accelerator Laboratory in Batavia, Ill. Confirmation of gravitational waves offers new opportunities for scientists to test more ideas about the nature of the universe, the astrophysicist told Science News. "This opens up a whole new window," he explains. "A whole new research area."

Power Words

- astronomy: The area of science that deals with celestial objects, space and the physical universe as a whole. People who work in this field are called astronomers.
- astrophysics: An area of astronomy that deals with understanding the physical nature of stars and other objects in space.
- Big Bang: The rapid expansion of dense matter that, according to current theory, marked the origin of the universe. It is supported by physics' current understanding of the composition and structure of the universe.
- cosmic microwave background radiation: Remnant energy (in the form of heat) from the Big Bang that should exist throughout the universe. It is estimated to be about 2.725 degrees above absolute zero.
- cosmology: The science of the origin and development of the cosmos, or universe.
- photon: A particle representing the smallest possible amount of light or other electromagnetic radiation.
- radiation: Energy, emitted by a source, that travels through space in waves or as moving subatomic particles. Examples include gamma rays, visible light, infrared energy and microwaves.
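The 2.725-kelvin temperature of the cosmic microwave background can be cross-checked with Wien's displacement law, which links a blackbody's temperature to the wavelength at which it emits most strongly. This sketch is our addition, not part of the original article; the function name is ours, and the constant is the standard Wien value:

```python
# Wien's displacement law: lambda_peak = b / T, where
# b = 2.898e-3 m*K is Wien's displacement constant.
WIEN_B = 2.898e-3  # metre-kelvin

def peak_wavelength_mm(temperature_k):
    """Peak blackbody emission wavelength in millimetres."""
    return WIEN_B / temperature_k * 1000.0

# At about 2.725 K the background peaks near 1.06 mm, squarely in the
# microwave band, which is why it is a *microwave* background and why
# BICEP2 observes at microwave frequencies.
print(round(peak_wavelength_mm(2.725), 2))  # prints 1.06
```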
https://www.snexplores.org/article/waves-birth-time
1.1 Further to previous reports presented to this Scrutiny Committee on sustainable economic development, in which the concept of the 'steady state economy' was introduced, this report presents an economic perspective on steady state theory and compares and explores it in relation to current GM economic approaches and the principles set out in the Greater Manchester Strategy. 2.2 In response to this request, Appendix 1 presents a detailed economic overview of steady state economics, compares it with other economic models, including the one closest to the current GM approach, and sets these in the context of the Greater Manchester Strategy. The report has intentionally been prepared from an economist's perspective. Environmental Strategy to discuss the report and will be given opportunity to present their views at a meeting to be held on 20th June 2012. The report was asked to:

- Counter the tension that exists between the sustainability that steady state economics (SSE) advocates and the Council's economic policy, and provide a comparison of a steady state model with other economic models.
- Explain how the economic model that the city works under ensures the economy grows in ways that minimise negative impact on the environment.

1.2 The report addresses the first two points in tandem, before exploring how existing policy, under which the Council operates, aims to create the right conditions for sustainable economic growth based on a connected, talented and greener city. 2.1 The current recession has resulted in the UK facing one of its most turbulent economic periods in recent history. Although output is now returning (slowly) to previous levels, evidence suggests that employment may not return to 2008 levels in Greater Manchester until 2015 (GMFM 2011). The immediate priority for Manchester in the aftermath of the recession, and in a climate of declining public sector employment, is the creation of new private sector jobs.
2.2 Manchester, given its scale, economic diversity, connectivity, and quality of life, remains best placed to deliver this in the North of England over the next decade, and will also be a major contributor to overall UK prosperity in terms of economic growth and the Government's objective of rebalancing the UK economy and reducing overreliance on London and the Southeast. 2.3 However, Manchester continues to encounter a series of critical challenges, including: an economy that is not as productive as it could or should be; too many residents out of work; ill health; and land and property that remain underutilised where there are significant opportunities to grow the economy. 2.4 These challenges are also set against other external factors which will drive, or inhibit, the city's success and the future prosperity of its residents. The issues we face, such as climate change, the need for security of energy and food supply, and the rising need for resource efficiency, are global in scope and not confined to Manchester or the UK. 2.5 To meet these challenges, economic policy must ensure that we leave later generations at least as well off as we are in terms of social welfare, with at least as wide a set of choices as we have today. This will require delivering economic growth while promoting social welfare and sound stewardship of the environment, and delivering stronger connections between the benefits of growth and residents in our most deprived communities. 2.7 Tackling the environmental challenges (and opportunities) presented by climate change will require effective action at international and national as well as local levels. One reaction amongst some commentators to the current economic crisis has been that places and communities should turn their backs on economic growth and adopt a 'steady state' economy (SSE), or at least consider alternative models for growth.
2.8 Although every Manchester community benefitted to some extent during the more prosperous years of the last decade, we still have a long way to go in eradicating deep-rooted deprivation in some neighbourhoods. Others have added that unfettered growth puts unsustainable pressure on our climate and natural resources, and assert that the existing 'growth model' will not deliver prosperity in some, potentially many, of the communities which we serve. How can policy strike a better balance: delivering economic growth now, without compromising the needs of future generations, while working within environmental limits? 2.10 We need a strategy to deliver a sustainable economic future. However, as the remainder of this report sets out, even if it were desirable there is no realistic prospect of developing an SSE in Manchester: international and national policy is not geared to this goal, so any meaningful impact would be minimal, while the city's economic performance would be seriously disadvantaged, to the detriment of its residents. 3.1 It is not possible to say that Manchester follows any one specific economic model to the letter. Models are, after all, an abstraction of reality. This section therefore provides an overview and critique of the steady state economic model and compares it with the endogenous growth model, which bears many similarities to Manchester's economic policy and holds that investment in human capital, innovation and knowledge are significant contributors to long-term economic growth and sustainable development. 3.2 Economic growth in a modern economy is an increase in the production and consumption of goods and services. It is facilitated by increasing population, increasing per capita consumption, or both, and it is indicated by rising real GDP.
3.3 However, some economists suggest that there are limits to economic growth, predicting that in the long run population growth pushes wages down, natural resources become increasingly scarce, and the division of labour approaches the limits of its effectiveness. Each of these factors is driving greater attention to ensuring sustainable economic growth. In a steady-state economy (SSE), natural resources are consumed at a fixed, sustainable rate and the quality of the environment is maintained at a level that protects the health of human individuals, species, and ecosystems. 3.4 A steady state does not necessarily imply zero economic growth. Economic growth can take place so long as the productivity of natural and environmental resources is increased through technological advance. Rather than labour productivity (output per employee) being the focus of attention, environmental resource productivity (output per unit of resource used) and environmental impacts take centre stage in order for there to be significant economic growth. 3.5 Under a steady-state model, economic growth would most likely be reduced relative to historical experience: in the past, environmental resource use faced far fewer constraints, since firms did not pay the 'full costs' including negative externalities such as pollution and the use of scarce resources. 3.6 The main challenges to achieving an SSE lie in how it can be made to work in practice – absent some form of global collective solution or regulation – so as to avoid 'first-mover disadvantage'. It is doubtful that the world's largest economies and cities would unilaterally adopt a steady state strategy given the current challenging global economic climate. 3.7 There is no political consensus for a change to SSE across the UK, with 'slower growth' lacking appeal to a majority of the electorate; and it is debatable whether an SSE would be so burdensome as to cause a larger 'moral harm', i.e.
restricting poorer areas' opportunities for growth and their chances of raising living standards. 3.8 SSE models are often challenged as underestimating the potential for technological progress and the extent to which gains in efficiency can overcome the limits to growth; in other words, that the economy can be 'dematerialised' or 'decoupled' so that it grows without using ever more resources. Proponents of decoupling cite the transition to an information economy as proof. Evidence shows that economies have achieved some success at relative decoupling: for example, the amount of carbon dioxide emitted per £ of economic production has decreased over time in the UK. 3.9 It also remains unclear how an SSE approach would work in places witnessing rapid population growth. Over three-quarters of the Northwest's population growth during the last decade was in Greater Manchester, and the population is projected to grow further over the next decade, especially within Manchester. Clearly, any future 'constraints to growth' would place pressure on the welfare of the city's residents if we did not ensure a supply of housing that meets the demands of a growing economy and raises people's quality of life. 3.10 Neither Greater Manchester nor Manchester has the power to impose a steady state economy in isolation from the national or global economy and the prevailing policy environment. Any unilateral imposition of an SSE would seriously disadvantage the city's economic performance, to the detriment of its residents. It is more meaningful to focus on ensuring that Manchester's economic strategy and policies are directed towards creating more sustainable growth, both socially and environmentally. 3.11 Early neoclassical models of growth were first devised by Nobel Prize-winning economist Robert Solow over 40 years ago.
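The distinction drawn in 3.8 between relative and absolute decoupling can be made concrete with a toy calculation: carbon intensity (CO2 per £ of output) can fall even while total emissions rise. The figures below are illustrative placeholders, not actual UK statistics:

```python
# Relative decoupling: CO2 emitted per pound of output falls even
# though total output, and possibly total emissions, keeps growing.
# Absolute decoupling would require total emissions themselves to fall.

def carbon_intensity(emissions_mt, gdp_bn):
    """Tonnes of CO2 per million pounds of GDP (illustrative units:
    emissions in megatonnes, GDP in billions of pounds)."""
    return (emissions_mt * 1e6) / (gdp_bn * 1e3)

# Two illustrative years: output grows ~13%, emissions grow 2%.
intensity_y1 = carbon_intensity(emissions_mt=500, gdp_bn=1500)  # ~333 t/£m
intensity_y2 = carbon_intensity(emissions_mt=510, gdp_bn=1700)  # 300 t/£m

print(intensity_y2 < intensity_y1)  # True: relative decoupling achieved
print(510 < 500)                    # False: no absolute decoupling
```

The point of the toy numbers is that falling intensity is compatible with rising total emissions, which is exactly the gap the report notes between relative decoupling achieved in practice and the absolute decoupling an SSE would require.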
Neoclassical economists believe that raising an economy's long-term trend rate of growth requires an increase in the labour supply and a higher level of productivity of labour and capital. Endogenous growth economists, however, believe that improvements in productivity can be linked directly to a faster pace of innovation and extra investment in human capital. They stress the need for government and private sector institutions that successfully nurture innovation and provide the right growth incentives for individuals and businesses to be inventive. 3.12 While population growth and increased income put pressure on supplies of resources, market prices for goods and resources increase, and this in turn provides additional opportunities and incentives for innovation and technology to exploit new sources of resources. 3.13 Endogenous models also predict positive externalities and spill-over effects from the development of a high value-added knowledge economy that is able to develop and maintain a competitive advantage in growth industries in the global economy. Other positive spill-overs include the development of new innovations and products that improve quality of life and wellbeing, and can address the challenge of climate change adaptation and the mitigation of environmental impacts. 3.14 Moreover, the rate of technological progress should not be taken as a constant in the growth model: government policies can permanently raise a country's growth rate if they lead to more intense competition in markets and help to stimulate product and process innovation. Endogenous growth theorists are strong believers in the potential for economies of scale from new capital investment to be realised in nearly every industry and market. 3.15 The implication of applying an endogenous growth model is a set of policies that embrace openness, competition, change, and innovation in order to promote growth and prosperity.
Conversely, policies which have the effect of restricting or slowing change (for example, a steady state), or of protecting or favouring particular industries or firms, are likely over time to slow down growth, to the overall disadvantage of businesses and residents. 3.16 There are important factors which steady-state thinking brings to the fore, for example in terms of influencing behaviour change and energy security. However, a more pragmatic approach is needed to drive investment in the type of growth which Manchester needs in order to deliver a low-carbon and resource-efficient economy and a more prosperous society. 3.17 The following section explains how the economic model that the city works under ensures that Manchester grows in ways that minimise negative impact on the environment, and creates the right conditions for sustainable economic growth based around a more connected, talented and greener city. While this is not a detailed sustainability appraisal, the issues highlighted have a strong 'fit' with Forum for the Future's five capitals model, which provides the overarching context in which sustainable economic growth can best be understood and promoted. 4.1 Manchester, through the Greater Manchester Strategy (GMS) and the city's Community Strategy, has set an ambitious vision for 2020 to secure the long-term growth of the city and to enable the area to fulfil its economic potential, while ensuring that residents are able to share in and contribute to that prosperity. 4.2 A precondition for success is sustainable economic growth, which in turn requires higher productivity from a better-functioning labour market, reduced dependency on public services, and ensuring that all parts of Greater Manchester and its people enjoy the opportunities a stronger economy brings.
4.3 The GMS aims to secure Manchester's place as one of Europe's premier cities, synonymous with creativity, culture, sport and the commercial exploitation of a world-class knowledge base, within the context of a low carbon economy and a commitment to sustainable development. 4.4 A critical opportunity for Manchester is to support businesses in securing the transition to a low carbon economy and culture, and to ensure our businesses are equipped to adapt to climate change. Decoupling economic growth from ever-increasing carbon emissions can make a real contribution to increasing Manchester's productivity and lead to opportunities that enhance prosperity. 4.5 The commitment across Greater Manchester to deliver a reduction in collective carbon emissions by 2020 should be seen as a significant opportunity to deliver substantial growth in low carbon businesses and supply chains, with significant economic gains for businesses leading the design and implementation of new products and services. 4.6 Reducing energy demand and increasing efficiency is also part of the wider sustainable consumption and production agenda aimed at using our increasingly scarce resources in a more sustainable way. Greater Manchester is establishing a Low Carbon Hub to integrate the delivery of a range of carbon-reduction measures, combining the knowledge of universities with innovation and investment from local businesses, and providing a focus for Government to work with the city and its partners to accelerate the transition to a low carbon economy. 4.7 Greater Manchester has established a Skills & Employment Partnership which brings the Local Enterprise Partnership together with education & training providers and government agencies to help marshal skills delivery to meet the needs of our employers and communities, providing a platform from which to build a responsive skills system.
4.8 The GMS makes a commitment to developing human capital by promoting and supporting Manchester’s ability to compete on the international stage for talent, investment, trade and ideas. It also aims to promote strong social capital where all people are valued and have the opportunity to contribute and succeed in life; and aims to create a city where every neighbourhood and every borough can contribute to our shared sustainable future. 4.9 Manchester’s Employment & Skills Action Plan sets out the priorities for delivering a sustainable and efficient labour market, helping to ensure that there is a sufficient supply of labour, with the right skills, enterprising attitudes and ambitions now and in the future to meet the demands of employers. It aims to achieve this through: increasing the number of Manchester residents that are working; increasing the competitiveness of Manchester residents by enhancing skills; and supporting business growth and maximising local socioeconomic benefit from sustainable business growth. 4.10 Significant demand for green jobs and skills is being generated through the creation of major programmes of activity designed to meet our carbon reduction targets of 48% by 2020 for Greater Manchester (from 1990) and 41% for Manchester (from 2005). By 2015 the programmes covered by the Low Carbon Economic Area (LCEA) – domestic retrofit, non-domestic retrofit and low carbon infrastructure – are estimated to deliver an additional 34,800 jobs in the built environment sector. 4.11 Skills will play a pivotal role in establishing Greater Manchester’s competitive edge in low carbon economies. Addressing this is the Low Carbon Skills and Employment programme of the LCEA. The priority is for Greater Manchester to gain the maximum benefit from low carbon industries by anticipating and fostering skills and employment opportunities for local providers, employers and individuals. 
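The headline targets in 4.10 (a 48% cut from a 1990 baseline for Greater Manchester and 41% from 2005 for Manchester) translate into an emissions cap by simple arithmetic. The baseline figure below is a placeholder for illustration only, not the actual Greater Manchester inventory:

```python
def target_emissions(baseline_mt, cut_fraction):
    """Emissions (megatonnes CO2) permitted after cutting a
    baseline-year total by the given fraction."""
    return baseline_mt * (1.0 - cut_fraction)

# Placeholder baseline: a 48% cut leaves 52% of 1990-level emissions.
gm_baseline_1990_mt = 20.0  # illustrative, not the real GM inventory
print(round(target_emissions(gm_baseline_1990_mt, 0.48), 2))  # prints 10.4
```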
4.12 Without addressing this priority we risk having contractors, jobs and skills imported from elsewhere, with incomes and profits leaving the area. Built around the approach established through the LCEA, existing jobs will be safeguarded, and new ones created, in order to deliver the other Greater Manchester climate change programmes for green infrastructure, transport, and sustainable consumption and production. 4.13 Manchester is also pioneering the implementation and development of Community Budgets to support greater social mobility, inclusion and economic opportunity. These will accelerate work on tackling poverty, improving life chances, improving outcomes in the early years, and reducing offending rates. 4.14 The approach will also help to generate new ways of working that significantly reduce demand for public services, particularly costly acute interventions, and save money by improving the way that Greater Manchester's public service organisations invest and get a return on their investment. 4.15 The GMS is founded on the principles of creating a city where every neighbourhood and every borough can contribute to our shared sustainable future, and ensuring that we continue to grow into a fairer, healthier, safer and more inclusive place to live, known for excellent, efficient, value-for-money services and transport choices. 4.16 It also places importance on: supporting the city's housing market; securing further investment in digital infrastructure and transport networks that connect people to economic opportunity; securing a transition to a low carbon economy; and generating a sense of place that values local amenities and the natural environment. 4.17 Greater Manchester, with Manchester at its core, has a spatial form which is basically well structured and can be further enhanced by careful location of new development and well-planned improvements to transport systems.
Increased density of well-connected people and firms is highly correlated with strong economic performance and, most significantly, improved environmental outcomes. 4.18 Simply put, dense areas produce lower CO2 emissions, by reducing travel and facilitating more cycling and walking. Encouraging these beneficial effects means increasing intensification within existing built-up areas, with a focus on urban growth nodes, town centres and major transport hubs. 4.19 The focus of transport strategy in Greater Manchester has been to prioritise investment in support of sustainable economic growth, while also seeking to support the transition to a low carbon economy and to connect people to economic opportunity. 4.20 Greater Manchester's third Local Transport Plan sets out a robust strategy aimed at ensuring that the transport network continues to support the economy, improving the life chances of residents and the success of business, as well as ensuring that carbon emissions from transport are reduced and that the overall transport system promotes healthy lifestyles. 4.21 Finally, the GMS calls for a programme to increase the quality of life, sense of place and experience across the city region through excellence in public services, an improved public realm, and higher levels of tree cover and green space. It also calls for an across-the-board improvement in the development and management of the city region's public realm, including design quality, cultural and heritage interpretation, signage and wayfinding, and sustainability, particularly in the face of climate change and urban heat island effects. 5.1 Manchester is of international and national significance for economic growth and is central to the UK's efforts to rebalance its current overreliance on financial services and London. The Manchester Independent Economic Review (MIER) concluded that, outside London and the Southeast, Greater Manchester is the area with the greatest potential to increase productivity.
However, despite this potential, the need to focus on private sector-led growth is even more important in the current climate of continuing recession and reduced public sector spending. It is vital that stronger economies like Manchester's are allowed to drive national growth. 5.2 Economic growth remains the only practical means of delivering employment for all, a rising standard of living for citizens, greater opportunity, support for diversity and social mobility, a commitment to fairness, and a sustainable economic future. Unilateral imposition of steady state economics in Manchester alone would have negligible impact upon major global drivers such as climate change, while having a major negative impact upon Manchester's prosperity and the wellbeing of its residents. 5.3 Given the greater sustainability of working and living in high-density areas such as cities, in terms of both use of natural resources and carbon emissions, the continued sustainable economic growth of Manchester, with specific support for the low carbon goods and services sector, would be a pragmatic economic position, wholly compatible with the GMS.

Very interesting article. I have not seen anyone brave enough to tackle this issue (growth gives jobs and prosperity; environmental sustainability does not). Much better to rethink the situation. A human approach: does prosperity as currently defined provide fundamental happiness? An equity approach: if it is unfair for a small minority to hugely benefit from a growth economy, could a steady state economy produce a fair share of prosperity for everyone? A physical approach: as Prof Albert Bartlett said, nothing grows forever. Better do something now, because a bad situation ignored never gets better.
In other words, a wild climate, a decreasing oil supply being bought up by other countries, and the inherent risks in the current food supply system will surely come home to roost, and with them a mean and nasty neighbourhood. A political approach: as things get worse, people will be angry and new forms of government will arise. We had better start educating the population now, because huge unsettling change is coming. Democracy will only survive if people really know the best is being done for them, and that the hard decisions are being made on their behalf; that does not seem to be the case at the moment. The rich are always sheltered in the current model because, of course, the rich and influential run the government. People won't stand for that when things get bad; look at Greece.
https://manchesterclimatemonthly.net/about/read/council-report-manchesters-economy-in-the-context-of-environmental-sustainability/
Part 1 (Innovation) and Part 2 (Traction) of this article can be found under the provided links.

3. Team

After a professional grant writer or consultant has assessed the innovativeness and traction of a project, an in-depth look at the team should follow to cover the most important bases of a potential EIC Accelerator grant application. The Implementation part of the official EIC Accelerator template covers the team at great length, which means that the team's quality should be clarified ahead of time. The key areas to investigate when assessing the suitability of the team can be summarized as the team's background, the relationships between team members and departments, and the track record of the company as a whole and of its individual team members in particular.

3.1 Team Background

The founding and management team of the prospective EIC Accelerator applicant company should have a strong technical background that is balanced with the commercial expertise needed to succeed. The company profile should clearly outline the team's experience, educational background, skills and strengths so that a strong case can be made for the company's suitability to implement and execute the innovation developments. A team consisting exclusively of marketing and communication experts, with all technical work outsourced, would be insufficient, while a purely technical team without any commercial members would likewise make it difficult to convince the proposal evaluators and investors. The team has to be reasonably balanced in its skills, but it can be R&D-heavy, which makes sense for a DeepTech startup with ongoing development work and limited revenues. An advisory board is also an important part of a startup's development, but it should not be the exclusive source of commercial expertise in the company; it should only be additional support for an already excellent team.
The assessment of the startup's or Small- and Medium-Sized Enterprise's (SME) team should, first and foremost, focus on the key members inside the business and how they are positioned from a skill perspective.

3.2 Team Relationships

The team has to be investigated both through its individual members and as a complete unit in order to make the case for why the company can succeed. For this purpose, it should be highlighted how the team was formed, how the members are incentivised, and their overall commitment to the company's success (i.e. commitment is explicitly asked for in the EIC Accelerator template). It is very common that founders all have a similar background or originate from a similar environment, which is why care must be taken to ensure that a broad mix of commercial and technical expertise is found in the management team. It should also be assessed how well connected the team is to important stakeholders such as customers, regulators, commercial partners and other parties who will be integral during the commercialisation and scaling process. The incentives should be analyzed in order to validate the long-term commitment of the team members, with special attention to the company's ownership and its future projections in light of additional equity investments and dilution. A company should have a committed core team and avoid having excessive numbers of interns or other low-commitment members such as freelancers or contractors.

3.3 The Team's Track Record

Out of all of the team's relevant aspects, the track record is the most important and most looked-for factor. In the end, investors understand that a past track record of success is the best predictor of future success, which is why an EIC Accelerator applicant will have to highlight it extensively.
A writer or consultant should carefully assess whether the background of the startup is impressive and whether it is able to implement an ambitious DeepTech project under the EIC Accelerator. The evaluators will, in the application and during the pitch interview, carefully investigate the company's past accomplishments such as technology milestones, awards, prizes, secured financing (i.e. seed, angel, VC or grants) and other acknowledgements that have a high barrier to success. The track record can also extend to the past successes of individual team members, where one of the founders might have successfully scaled a production process as the head engineer of a large company, or another founding member was able to found, scale and exit a profitable technology startup. Everything that can be used to highlight the team's track record should be assessed in this way.

Summary

As the third pillar of the assessment of startups, team quality can be analysed through the following sub-segments to facilitate a successful EIC Accelerator grant submission:

- 3.1 Team background: Education, industry origin and balanced in-house expertise
- 3.2 Team relationships: Founding team and stakeholder network
- 3.3 Team track record: Past successes

These tips are not only useful for European startups, professional writers, consultants and small and medium-sized enterprises (SMEs) but are generally recommended when writing a business plan or investor documents.

Deadlines: Post-Horizon 2020, the EIC Accelerator accepts Step 1 submissions now, while the deadlines for the full applications (Step 2) under Horizon Europe are:

- January 11th 2023 (only EIC Accelerator Open)
- March 22nd 2023
- June 7th 2023
- October 4th 2023

The Step 1 applications must be submitted weeks in advance of Step 2. The next EIC Accelerator cut-off for Step 2 (full proposal) can be found here.
After Brexit, UK companies can still apply to the EIC Accelerator under Horizon Europe, albeit with non-dilutive grant applications only, thereby excluding equity financing.

Contact: You can reach out to us via this contact form to work with a professional consultant.

EU, UK & US Startups: Alternative financing options for EU, UK and US innovation startups are the EIC Pathfinder (combining Future and Emerging Technologies - FET Open & FET Proactive) with €4M per project, Thematic Priorities, European Innovation Partnerships (EIP), Innovate UK with £3M (for UK companies only) as well as the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) grants with $1M (for US companies only).

Any more questions? View the Frequently Asked Questions (FAQ) section. Want to see all articles? They can be found here. For Updates: Join this Newsletter!

by Stephan Segler, PhD
Professional Grant Consultant at Segler Consulting

General information on the EIC Accelerator template, professional grant writing and how to prepare a successful application can be found in the following articles:
https://seglerconsulting.com/assessing-an-eic-accelerator-applicant-for-innovation-traction-and-the-team-sme-instrument-part-3/
Job Description:

Technical Skills:
- Experience with the WebRTC platform, including the SIP and RTP stacks and the SDP, RTCP, TCP, UDP, HTTPS and SSL/TLS protocols.
- Experience in VoIP products based on open-source projects such as Asterisk, FreeSWITCH and Kamailio.
- Integration of WebRTC to SIP using Jitsi (Jitsi Meet, Jitsi Videobridge) for web and mobile applications.
- Knowledge of WebRTC server technologies such as the Janus Gateway server.
- Experience in developing center products and solutions and integrating third-party or open-source solutions.
- Strong competencies in data structures, algorithms and software design.

Candidate Profile:

Responsibilities:
- As a Lead WebRTC Developer, you will lead a fast-paced development team.
- Be actively involved in all phases of our WebRTC product lifecycle.
- Write high-performing code and participate in key architectural decisions.
- Work with cutting-edge technologies and contribute to the development of unified communication systems, including the signalling, session description and server-side logic.
- Continuously discover, evaluate and implement new technologies to maximize development efficiency.
- Handle complex technical issues related to web app development and discuss solutions with the team.

Additional Skills:
- Should have faced clients while working as a senior developer/lead.
- Should be a hardcore developer with hands-on coding experience.
- Should be a very good team player with a go-getter attitude: results-driven, adaptable, inspirational, organized and quality-focused.
- Must have the ability to take ownership of work and take it to the finish line.
- Good analytical, reasoning, logical and troubleshooting skills.
- Understanding of best coding standards and guidelines.

Salary Range: 12.5-18.5 P.A.
https://www.techgig.com/jobs/Lead-WebRTC-Developer/65602088
What is caffeine?

Caffeine is not a nutrient but a drug that acts as a mild stimulant of the central nervous system. In regular caffeine consumers, a partial or complete tolerance to most caffeine effects often develops [32,36,38,39].

Chemical and Physical Properties

Caffeine is an alkaloid (a natural, alkaline nitrogen-containing compound) with the chemical name 1,3,7-trimethylxanthine and the chemical formula C8H10N4O2. Pure caffeine is an odorless, white, crystalline powder with a bitter taste, soluble in water, fats and alcohol.

Production

Caffeine can be extracted from coffee beans, tea leaves, kola nuts, cocoa pods, guarana seeds and yerba maté. Caffeine can also be artificially synthesized.

Chart 1. Caffeine Sources

| SOURCE | AMOUNT OF CAFFEINE in mg (range) |
|---|---|
| COFFEE | |
| Brewed, decaffeinated (8 oz, 237 mL) | 5 (2-12) |
| Espresso, restaurant style, decaffeinated (1 oz, 30 mL) | 5 (0-15) |
| Espresso, restaurant style (1 oz, 30 mL) | 40 (30-90) |
| Instant (8 oz, 237 mL) | 70 (30-170) |
| Drip coffee (8 oz, 237 mL) | 100 (65-120) |
| Brewed, Arabica (8 oz, 237 mL) | 100 (70-120) |
| Fast-food-size coffee (16 oz, 480 mL) | 125 (100-330) |
| Brewed, Robusta (8 oz, 237 mL) | 150 (130-220) |
| TEA | |
| Herbal and fruit tea | 0 |
| Black tea, decaffeinated (8 oz, 237 mL) | 5 (0-12) |
| Iced tea (8 oz, 237 mL) | 10 (5-50) |
| Kombucha tea (8 oz, 237 mL) | 25 |
| Green, black, white and oolong tea (8 oz, 237 mL) | 40 (15-110) |
| Other teas (8 oz, 237 mL) | Up to 70 (0-120) |
| SOFT DRINKS | |
| Cola, soda, root beer; caffeinated (12 oz, 355 mL) | 40 (30-120) |
| ENERGY DRINKS (4-10 oz, 120-300 mL) | 100 (50-280) |
| 1 can (4-16 oz, 120-480 mL); smaller cans do not necessarily contain less caffeine | 80 (30-350) |
| Caffeinated water (16.9 oz, 500 mL) | 50-100 |
| CAFFEINATED ALCOHOLIC BEVERAGES (CABs) | |
| 1 can (8-23.5 oz, 240-695 mL); smaller cans do not necessarily contain less caffeine | 100 (20-350) |
| OTHER BEVERAGES | |
| Coffee liqueur (1 jigger, 1.5 oz, 45 mL) | 4 |
| Caffeinated vodka (1 jigger, 1.5 oz, 45 mL) | 10 |
| Hot chocolate (6 oz, 180 mL) | 4 |
| Milk with cocoa (1 cup, 237 mL) | 5 |
| FOODS | |
| Chocolate cake (1 piece, 3.5 oz, 100 g) | 0-6 |
| Milk chocolate (1 oz, 28 g) | 6 |
| Dark chocolate, 70-85% cacao (1 oz, 28 g) | 23 |
| Ice creams and yogurts, caffeinated (8 oz, 237 mL) | 50 (8-85) |
| Mints with caffeine (1 mint) | 10-100 |
| Chewing gum, caffeinated (1 piece) | 50 (40-100) |
| Chocolate chips (1 cup) | 105 |
| Dark chocolate-coated coffee beans (28 pieces, 40 g) | 335 |
| MEDICATIONS and SUPPLEMENTS [34,35] | |
| Analgesics, diuretics, weight-loss pills, stimulants with caffeine (1 tablet or capsule) | Up to 400 |
| Workout supplements (1 serving) | Up to 400 |

Chart 1 sources: [10,11,12,13,14,15,16,223]

Picture 1. Caffeine content of common beverages

Caffeine Absorption, Distribution, Metabolism and Elimination

Absorption

99% of caffeine is absorbed in the stomach and small intestine within 45 minutes of ingestion [2,22,23]. Some caffeine from chewing gum, chewable tablets and lozenges can be absorbed in the mouth. The caffeine dose, taking alcohol or oral contraceptives along with caffeine, exercise, age or sex do not significantly affect the caffeine absorption rate, but taking caffeine with food can slow it.

Caffeine Distribution and Blood Concentration

After absorption, caffeine is distributed throughout the body tissues but does not accumulate in them [2,29]. Caffeine may appear in the blood within 5 minutes and reach its peak blood level within 15-120 minutes of consumption [25,27]. Ingestion of 1.1 mg of caffeine per kilogram of body weight may result in blood caffeine levels of 0.5-1.5 mg/liter. Drinking 1 cup of coffee with different caffeine contents may result in blood caffeine levels ranging from 0.25 to 2 mg/liter.

Metabolism

Most of the consumed caffeine is broken down in the liver to theophylline, theobromine, paraxanthine and 1,3,7-trimethyluric acid with the help of the enzyme CYP1A2 [24,25]. Only about 1% of caffeine is excreted unchanged in the urine.
Elimination Half-Life

The average elimination rate (clearance) of caffeine from the human body in adults is 155 mL/kg body weight/hour; in newborns it is ~30 mL/kg/h and may reach adult levels at about the 4th month of life [25,26]. The caffeine blood half-life (the time in which 50% of caffeine is eliminated from the blood) in healthy non-smoking adults is about 3-8 hours [16,24,27]; in smokers it may be shorter by 30-50% [2,25], and in newborns it may be longer than 80 hours. The caffeine elimination half-life may be prolonged when large amounts of caffeine are consumed (16 hours in one case), in the last trimester of pregnancy (up to 15 hours), in women taking oral contraceptives (by ~50%), in regular alcohol drinkers (by ~70%) and in individuals with liver cirrhosis (up to 96 hours) [2,25].

Caffeine Effects and Mechanism of Action

Caffeine effects depend on the individual's genetically determined caffeine sensitivity and tolerance, the caffeine dose, the expectancy of the effects [55,56,57] and any drugs consumed along with it [22,24]. Caffeine effects (400 mg or 2 cups of coffee) may appear within less than 1 hour and last for 3-6 hours or more. Caffeine inhibits the inhibitory effects of the neurotransmitter adenosine and thus acts as a mild stimulant of the central nervous system [22,30]. Acute caffeine consumption stimulates the release of norepinephrine (noradrenaline) and epinephrine (adrenaline) in the body, which results in increased breakdown of body fats into fatty acids [33,40], increased synthesis of glucose (gluconeogenesis), breakdown of glycogen into glucose (glycogenolysis) and dilation of the bronchi [27,31,33]. Caffeine in a dose of 6 mg/kg body weight (2-3 cups of coffee) can increase epinephrine release during exercise by about 40%. In some studies, caffeine increased the basal or resting metabolic rate [41,42,43,44], but it did not in others.
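The half-life figures cited above imply simple first-order (exponential) elimination. A minimal sketch of that arithmetic, not taken from the article itself; the 5-hour default is an assumed mid-range value from the cited 3-8 hour adult range:

```python
def caffeine_remaining(dose_mg, hours, half_life_h=5.0):
    """Caffeine (mg) left in the body after `hours`, assuming first-order
    (exponential) elimination. half_life_h=5.0 is an assumed mid-range
    value from the 3-8 h adult half-life cited above."""
    return dose_mg * 0.5 ** (hours / half_life_h)

# A 100 mg cup of coffee, 10 hours later (two half-lives at t1/2 = 5 h):
print(round(caffeine_remaining(100, 10), 1))  # 25.0
```

With a smoker's shortened half-life (say 2.5 h) the same dose would halve twice as fast, which is why the text notes a 30-50% faster clearance in smokers.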
According to some studies, caffeine may have a thermogenic effect [40,41,52,53,54], but according to others it does not. Results of studies about the caffeine effect on body temperature are inconclusive. In regular caffeine consumers, a partial or complete tolerance to the abovementioned caffeine effects often develops [32,36,38,39].

Moderate and Excessive Caffeine Intake

Consumption of up to 200-300 mg of caffeine (~2 cups of coffee) per day is considered moderate intake, and of more than 500-600 mg (>5 cups of coffee) per day heavy or excessive intake [48,49].

Possible Caffeine Benefits

- Increased alertness and vigilance, probably more in tired or sleepy individuals than in those who are already alert [58,59,60,61,62,134,135]
- Shorter reaction time [64,65,67]
- Improved athletic performance during endurance exercise lasting more than 30 minutes (running) [28,45,68,103,138,139,140]
- A slight increase of the analgesic effect of aspirin, acetaminophen and ibuprofen [72,73,74,75], for example in tension headache

NOTE: Improved mood and performance often reported by regular caffeine consumers may be due to reversal of withdrawal symptoms by consuming caffeine rather than to the effect of caffeine itself. In regular caffeine consumers, a partial or complete tolerance to most caffeine effects often develops [32,36,38,39].
There is INSUFFICIENT EVIDENCE about the beneficial effects of caffeine consumption on athletic performance during short-term exercise, such as sprints or lifting, attention deficit hyperactivity disorder (ADHD), asthma [85,86], cognitive function in Alzheimer's disease [191,192], depression, diabetes mellitus type 2 [22,23,187,205], gallstones [87,88], gout [83,84], hepatitis C or liver cirrhosis, improving breathing in preterm infants with apnea [150,174], leg cramps due to narrowed arteries (intermittent claudication), liver cancer, memory [16,67], migraine headache [176,207,209], muscle soreness during exercise, obsessive-compulsive disorder (OCD), orthostatic hypotension [78,80], postprandial hypotension (a drop of blood pressure after meals), seizures, skin itching, stroke or weight loss.

Caffeine and Sleep

Caffeine may help overcome sleepiness after awakening (sleep inertia), possibly by increasing blood cortisol levels [37,38,160]. Caffeine, generally in amounts greater than 200 mg (~1-2 cups of coffee), consumed up to 8 hours before bed, may delay sleep onset, shorten sleeping time and decrease sleep quality, more likely in occasional than in regular users [16,25].

Caffeine, Brain, Memory and Behavior

The consumption of 250 mg of caffeine (1-2 cups of coffee) in a single dose can constrict the brain arteries and decrease brain blood flow by up to 30% [188,189]. It is not yet clear if this increases the risk of stroke or transient ischemic attack [189,190]. Caffeine consumption probably does not have any significant effect on memory while studying [16,67]. A good night's sleep or a daytime nap can have a better effect on learning performance than consuming caffeine. In one 2006 study, there was no association between caffeine consumption and impulsiveness, sociability, extraversion or trait anxiety [130,193].
Caffeine and Exercise Performance

In 2004, caffeine was removed from the World Anti-Doping Agency (WADA) list of prohibited substances. Caffeine in doses of 3-9 mg/kg of body weight may modestly increase endurance performance and decrease fatigue during physical exercise lasting more than 30 minutes [28,45,68,103,138,139,140]. In some studies, caffeine consumption in doses of 1-9 mg/kg body weight (1-7 cups of coffee) 60 minutes before exercise was associated with better short-term (<90 seconds) anaerobic physical performance, such as sprints or weight lifting [145,146,147,148,149], but in others it was not [103,142,143,144]. It is not clear which caffeine dose has the optimal effect on physical performance; repeated doses of coffee may decrease it [45,68,138]. Caffeine in high doses (8 mg/kg body weight or ~5 cups of coffee) may help restore glycogen stores after exercise. At doses higher than 3 mg/kg, caffeine may increase heart rate during exercise. Caffeine does not likely cause hyperthermia or heat intolerance during exercise in a hot environment [91,93]. Caffeine added to sport drinks does not seem to increase the risk of gastrointestinal symptoms during exercise. It is still not clear by which mechanism caffeine could increase physical performance; possible mechanisms include increased blood epinephrine levels [154,155,156], increased calcium availability in the cells and increased glucose absorption.

Caffeine, Appetite and Weight Loss

In some studies [25,163,164], caffeine consumption was associated with lower appetite, but in others it was not [37,54,162]. In several short-term studies, taking caffeine-ephedrine supplements, but neither ephedrine nor caffeine alone, was associated with weight loss of 1-2 pounds per month; the long-term effect of caffeine on weight loss is not known [165,166,167,168]. These supplements are not approved as weight-loss pills in the U.S., since they may have serious side effects, including death.
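The mg-per-kilogram dosing used in the endurance studies above scales with body weight. A minimal illustrative sketch of that arithmetic (the 70 kg example weight and the ~100 mg-per-cup conversion are assumptions drawn from the chart earlier in the article, not medical guidance):

```python
def dose_range_mg(weight_kg, low_mg_per_kg=3, high_mg_per_kg=9):
    """Absolute caffeine dose range (mg) implied by the cited
    3-9 mg/kg endurance-exercise studies, for a given body weight."""
    return low_mg_per_kg * weight_kg, high_mg_per_kg * weight_kg

# For an assumed 70 kg athlete:
print(dose_range_mg(70))  # (210, 630), i.e. roughly 2-6 cups at ~100 mg of caffeine per cup
```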
In some studies [41,163], caffeine alone in doses of 150-300 mg/day was associated with weight loss, but in others it was not. There is insufficient evidence about the effectiveness of black or green tea in promoting weight loss [169,170].

Caffeine, Frequent Urination and Dehydration

In several studies, consumption of up to 226 mg of caffeine (~2 cups of coffee) did not result in any significant increase in urine excretion (diuresis) [69,91]. Acute ingestion of at least 240 mg of caffeine (1-2 cups of coffee or 5-6 cups of tea) may temporarily increase urine excretion, but tolerance to this caffeine effect may develop in as little as 1 day [92,177]. In conclusion, caffeinated beverages consumed by healthy individuals in usual doses do not likely cause dehydration and can even be used for rehydration [28,69,91,92,141]. In individuals with overactive bladder or urge incontinence, or a non-infectious bladder inflammation called interstitial cystitis or painful bladder syndrome (PBS), caffeine consumption may trigger urination urgency and increase urination frequency [179,180].

Coffee, Caffeine and the Heart

In some studies, moderate coffee drinking (3-5 cups/day) was associated with a lower risk of cardiovascular disease [49,70,76,185], but in others it was not [22,181]. It is not clear whether it is caffeine or some other substance in coffee that might be associated with a lower risk of heart disease. Regular caffeine consumption may increase the risk of heart attack in genetically predisposed individuals who metabolize (break down) caffeine slowly [183,184].

Caffeine and Blood Pressure

In non-regular caffeine consumers with or without hypertension, 250 mg of caffeine (2-3 cups of coffee) can temporarily (for several hours) increase blood pressure by up to 15 mm Hg within 1 hour of consumption [63,116,186].
After a few days of caffeine consumption, a partial or complete tolerance to the caffeine-induced increase of blood pressure usually develops, especially in those who consume more than 3 cups of coffee per day [25,117,126,186,191]. In individuals with hypertension, long-term coffee consumption does not seem to be associated with an increase of blood pressure [63,136]. Currently, it is not clear if regular caffeine consumption increases blood pressure to harmful levels. If you have high blood pressure, ask your doctor how much caffeine you may consume.

Caffeine and Diabetes Type 2

In various systematic reviews of epidemiological studies, regular caffeine or coffee consumption was associated with a lower risk of diabetes type 2 [70,159,182,191] but, according to one study, only in individuals who had previously lost weight. Possible preventative mechanisms of coffee drinking against diabetes include an increase of insulin sensitivity by caffeine and inhibition of glucose absorption by chlorogenic acid (a coffee ingredient). In various controlled clinical trials, decreased insulin sensitivity was observed after co-ingestion of carbohydrate meals and caffeine (200-500 mg, comparable with 1-5 cups of coffee) in healthy individuals [105,208] and in those with diabetes type 2 [158,210]. A possible mechanism: caffeine stimulates the release of epinephrine, which decreases insulin sensitivity. In conclusion, the current evidence is not strong enough to recommend consuming caffeine as a preventative measure against diabetes.

Coffee and the Gastrointestinal Tract

Coffee stimulates the release of the hormone gastrin and gastric acid secretion [101,194]. Caffeine may damage the gastric and duodenal lining but does not likely cause peptic ulcer; it can increase pain in an established ulcer, though [77,196,197]. Caffeine, coffee and tea may decrease the pressure in the lower esophageal sphincter and trigger acid reflux and heartburn [101,102,198].
Decaffeinated coffee can also trigger heartburn. In several studies there was no association between moderate coffee or tea consumption and dyspepsia, gastric emptying or bowel transit time [101,153,200]. However, according to one 2009 study, caffeine may accelerate gastric emptying. Caffeine may promote the motility of the sigmoid colon and rectum. In individuals with irritable bowel syndrome, caffeine may trigger diarrhea or constipation. Caffeine intake does not seem to be associated with diverticular disease. Coffee intake may induce gallbladder contractions and may cause pain in individuals with established gallstones, but it does not seem to increase the risk of gallstones or other gallbladder disease in healthy individuals [87,204].

Caffeine and Parkinson's Disease

In some epidemiological studies, moderate coffee consumption was associated with a decreased risk of Parkinson's disease in men [22,78,191]. There is a lack of evidence about the effect of caffeine in improving fatigue in individuals with Parkinson's disease.

Caffeine and Cancer

Several studies suggest that regular caffeine consumption may reduce the risk of liver [22,191,216,217] and endometrial cancer [191,217]. Some, but not all, studies suggest that coffee or caffeine may have a protective effect against kidney and colorectal cancer. Caffeine consumption does not seem to increase or decrease the risk of breast, pancreatic, ovarian or gastric cancer. Caffeine consumption may increase the risk of bladder cancer in men.

Caffeine as an Analgesic

Caffeine may stimulate the release of beta-endorphins, which are endogenous opioids. It is not clear if caffeine alone has any significant analgesic effect [25,171]. Caffeine in doses of 100 mg or higher may slightly increase the perceived analgesic effect of certain painkillers, such as aspirin, paracetamol and ibuprofen, in treating headache, post-operative dental pain or pain after birth [72,73,75].
Caffeine does not seem to relieve ischemic pain (angina pectoris) in coronary artery disease. There is insufficient evidence about the effectiveness of caffeine in relieving migraine headache [176,207,209,214].

Caffeine Safety

Caffeine is LIKELY SAFE for most adults when consumed in the usual amounts found in beverages and foods. Caffeine is POSSIBLY SAFE for most children when used in amounts usually found in beverages or foods (up to 160 mg per day in a 10-year-old child) [115,175]. Caffeine as a food additive is considered Generally Recognized As Safe (GRAS) by the U.S. Food and Drug Administration (FDA) when used in cola-type beverages [3,4] and in alcoholic beverages in amounts up to 0.02 percent (200 ppm), but not automatically when used in other foods. In general, moderate caffeine intake (3-5 cups of coffee or up to 400 mg of caffeine per day) does not seem to increase the risk of cardiovascular disease (heart attack or irregular heart rhythm) or cancer. It is currently not clear if caffeine consumption increases the risk of osteoporosis and bone (hip) fractures [22,24,123,133,222].

Pregnancy

Caffeine is POSSIBLY SAFE during pregnancy when used in doses up to 200 mg/day (1-2 cups of coffee) [21,22,115]. Drinking caffeinated beverages during pregnancy, even in high amounts, does not seem to increase the risk of miscarriage, birth defects or growth retardation of the fetuses or children [2,96,191,219], but more studies are warranted. Caffeine withdrawal symptoms, such as irritability and vomiting, lasting for a few days after birth, have been observed in infants whose mothers had been drinking coffee during pregnancy.

Breastfeeding

Caffeine is excreted in breast milk in small amounts. Consumption of 2-3 cups of coffee probably does not cause adverse effects, but higher caffeine intake may cause irritability and poor sleeping in a breastfed child [20,22].
Acute Side Effects

Caffeine consumption may cause [89,97]:
- Anxiety, panic attacks, depression, restlessness, sleeplessness [89,211] and worsening of premenstrual syndrome (PMS) [98,99]
- Dry mouth, unusual thirst
- Increased breathing and heart rate, pounding heart (palpitations) [71,100]
- Stomach upset, heartburn, nausea, vomiting, diarrhea [96,100,101,102]

Caffeine Intoxication or Overdose

Consuming caffeine in a single dose as low as 250 mg, but usually in doses greater than 600 mg, may result in caffeine intoxication. On the other hand, consumption of up to 900 mg (0.9 g) of caffeine throughout the day without any side effects has been reported. Symptoms and signs of caffeine intoxication may include [2,25,28,51,97,106,107,108]:
- Headache
- Nervousness, anxiety, jitters, restlessness, fear, insomnia, rambling flow of thoughts or speech
- Facial flushing
- Ringing in the ears (tinnitus), increased sensitivity to light (photophobia)
- Thirst, stomach upset, abdominal pain, nausea, vomiting, diarrhea
- Increased breathing (hyperventilation) and heart rate (tachycardia), irregular heart beat (arrhythmia), chest pain, high or low blood pressure
- Fever
- Increased urination (polyuria)
- Dilated pupils
- Seizures
- Tremor, muscle twitching, paralysis or weakness due to hypokalemia
- Depression, delirium, hallucinations, psychosis
- Complications may include heart attack, stroke, muscle disintegration (rhabdomyolysis), acute lung damage, collapse or coma

Possible metabolic changes in caffeine intoxication include hyperglycemia, ketosis, lactic acidosis or hyponatremia. Very high caffeine doses, for example from caffeine-containing pills, 10 cups of coffee per day, or 5 or more liters of caffeinated cola per day [110,111,112], may cause hypokalemia. Death from caffeine toxicity is rare and can occur when the blood caffeine concentration exceeds about 100 mg/liter.
The lethal dose of caffeine (the amount that would likely kill an adult) is 10-20 grams, or 150-200 mg/kg body weight (~70-120 cups of coffee) [27,29,108,114].

Chronic Side Effects

Regular consumption of high amounts of caffeine may cause or worsen:
- Anxiety, restlessness, insomnia, tingling in the limbs and around the mouth, pounding heart (palpitations), anorexia, nausea, vomiting, diarrhea, depression or seizures, a cluster of symptoms known as chronic caffeine intoxication or caffeinism, from more than 1,000 mg of caffeine per day or, in some individuals, from as little as 250 mg of caffeine (1-2 cups of coffee) per day [25,60,105]
- Frequent urination and other symptoms of benign prostatic hyperplasia
- Fibrocystic breast disease [105,122]
- Migraine
- Psychosis in healthy individuals and in individuals with schizophrenia
- Restless legs syndrome [119,120]
- Seizures in individuals with epilepsy [82,118]

Who should avoid caffeine?

Doctors may advise the following groups to avoid caffeine [25,115]:
- Children under 12 years of age
- Individuals allergic to caffeine
- Individuals suffering from anxiety, attention deficit hyperactivity disorder (ADHD), benign prostatic hyperplasia, bipolar disorder, chronic headache, glaucoma, GLUT-1 deficiency, heart attack (within 1 week thereafter), insomnia, high blood pressure, interstitial cystitis, irregular heart rhythm, irritable bowel syndrome (IBS), liver problems, osteoporosis, premenstrual syndrome (PMS), seizures (epilepsy), stomach ulcers or urinary incontinence

In healthy persons, moderate caffeine intake (<400 mg/day) does not likely cause or increase the risk of cancer, dehydration [91,92], DNA errors (mutations), electrolyte imbalance, elevated blood cholesterol, excessive sweating, heart disorders, high blood pressure, increased body temperature, inflammation or stroke.
Caffeine Tolerance

Caffeine consumption for 1-5 days may result in a partial or complete tolerance to some caffeine effects and side effects [25,32,51,124,125]. The susceptibility to developing caffeine tolerance may vary greatly among individuals and may be genetically determined. Tolerance often develops to the following caffeine effects: anxiety, increased blood pressure, increased heart rate and increased urination. Less often, tolerance to caffeine-induced alertness and sleep disturbances develops. Caffeine tolerance can wear off within 20 hours to 4 days after cessation of caffeine consumption [32,127]. Chronic caffeine consumers, because of the developed tolerance, may have no net benefits from caffeine, and the increased alertness and performance they experience may in fact be a reversal of withdrawal symptoms ("withdrawal relief") [65,66,128,129].

Caffeine Addiction

A long-term caffeine consumer who experiences withdrawal symptoms, such as headache and tiredness, after abruptly stopping caffeine consumption is considered physically addicted to caffeine. People who are physically addicted to caffeine usually do not have any significant social or health problems related to caffeine.

Caffeine Withdrawal

Some individuals who regularly consume caffeine for as little as 3 days in a row, in doses as low as 100 mg (1/2 cup of coffee), develop symptoms of caffeine withdrawal 3-36 hours after the last dose of caffeine [16,25,51,130,131,132]. Symptoms usually peak between 20 and 51 hours after the last caffeine dose, last from 2-9 days and may include [16,25,51,132]:
- Headache
- Apathy, depression, tiredness, weakness, fatigue, drowsiness or insomnia
- Anxiety or irritability, difficulty concentrating
- Increased heart rate
- Nausea, vomiting or flu-like symptoms, such as a stuffy nose
- Muscle aches or stiffness

Gradual caffeine withdrawal may result in fewer unpleasant symptoms than abrupt withdrawal.
Caffeine withdrawal headache is hardly relieved by usual analgesics but may be relieved by caffeine within 30 minutes of the headache onset.

Caffeine Hypersensitivity

Some people experience jitteriness, sleeplessness and irritation of the gastrointestinal tract after small amounts of caffeine, for example after drinking 1 cup of coffee. Hypersensitivity results from a low amount of the enzymes that break down caffeine, which can be genetically determined.

Caffeine Allergy and Intolerance

In sensitive persons, caffeine ingestion may trigger an allergic reaction with rash, hives, itching, difficulty breathing, tightness in the chest, or swelling of the face, lips and tongue. Caffeine intolerance is not an established medical term, but individuals with irritable bowel syndrome (IBS) may experience worsening of symptoms (constipation or diarrhea) after caffeine consumption.

Caffeine-Drug and Caffeine-Nutrient Interactions

Caffeine-Nutrient Interactions
- Caffeine may slightly decrease calcium absorption [19,22].
- Caffeine added to sport drinks enhances the absorption of glucose in the small intestine.

Caffeine-Alcohol Interactions
- Caffeine does not affect the rate of alcohol absorption or elimination and does not affect the blood alcohol concentration.
- Caffeine may decrease the feeling of sedation after alcohol intoxication, but it does not reduce the intoxication itself [17,18].
- Alcohol does not affect caffeine absorption, but it slows down its elimination; caffeine from alcoholic beverages does not likely cause unsafe blood caffeine levels, though.

Other Caffeine-Drug Interactions
- Caffeine in combination with ephedrine can have serious side effects, including death [115,221].
- Caffeine may increase the effects and side effects of acetaminophen, albuterol, aspirin, clozapine, epinephrine and theophylline.
- Caffeine may decrease the effects of lithium and diazepam.
- Drugs that may increase caffeine effects include certain antibiotics (erythromycin, ciprofloxacin, norfloxacin), cimetidine, disulfiram, echinacea (an herbal supplement), mexiletine and oral contraceptives [22,24,27].
- Smoking (nicotine) can decrease caffeine levels by stimulating caffeine elimination by 30-50%.
http://www.nutrientsreview.com/articles/caffeine.html
Q: Is it good to lift weights and do intervals for muscles? I would like to change my routine (lifting weights twice a week) to lifting weights once (on Mondays) and doing interval training (on Thursdays). By interval training I mean running 1 minute at top speed and 2 minutes at low speed alternately (up to 15-20 minutes), plus of course a few minutes of warm-up beforehand and a cool-down afterwards. Am I right that this approach will still let me build lean muscle mass along with losing fat? I'm a bit bored with going to the gym again and again and wanted to combine it with some other activity while still getting the same results (more muscle). I am also aware that I have to avoid cardio, as it burns muscle. A: In order to increase muscle mass, your body must be forced to adapt. A state of balance is called homeostasis, and inducing a strength and/or hypertrophy increase requires disrupting that homeostasis. The way we do this with strength training is by lifting weights in a manner that exceeds what our body is adapted to. There are a number of ways to do this: you can increase the weight on the bar, increase the number of repetitions, increase the number of sets, or a combination thereof. If you do not in some way alter these variables to place a stress on the body that disrupts homeostasis, no impulse is provided for it to adapt. After this disruption, the body will prepare for a repetition of this stress by altering muscle tissue, bone density, vascularity and more. For a novice lifter, this occurs in the period of 48 to 72 hours after the workout, which means any muscle group is ideally trained every 48 to 72 hours to maximize efficiency. After the adaptation, there is a period where one's performance is somewhat increased, making it possible to increase the workload by altering one or more of the parameters (weight, reps, sets, rest time).
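The progression logic just described (alter weight, reps, or sets once performance has adapted) can be sketched as a simple "double progression" rule. The rep range, the 2.5 kg increment, and Python itself are illustrative assumptions, not part of this answer:

```python
# Toy "double progression" scheme (illustrative): add reps up to a cap,
# then add load and drop back to the bottom of the rep range.
def next_session(weight_kg: float, reps: int, sets: int) -> tuple:
    if reps < 12:                        # still room inside the rep range
        return weight_kg, reps + 1, sets
    return weight_kg + 2.5, 8, sets      # cap reached: add load, reset reps

session = (60.0, 10, 3)
for _ in range(4):                       # four consecutive workouts
    session = next_session(*session)
print(session)  # (62.5, 9, 3)
```

The point of the sketch is only that some variable must move upward each session for the stimulus to stay above baseline.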
Doing this regularly in a way that doesn't exceed your capacity for recovery is what increases the baseline over time. After a workout, however, if no new stress is induced, you will gradually move back towards your previous baseline. When you stop working out, the body will no longer feel the need to maintain the metabolically more expensive muscle tissue and will return to its old homeostasis. If you are working out only once per week, you'll have started back down that slope before the next workout comes around. Even if you do an extensive full-body workout on that one day, this is a frequency that is very sub-optimal for a natural trainee (read: someone who doesn't use steroids). Perhaps you'll still be able to make progress, but it will be much slower compared to working out each muscle group twice per week. In the worst case you will see no progress. Furthermore, trying to provide a stress large enough to spur adaptation and avoid returning to baseline before the week is over will be difficult with just one workout. Total volume over a week is also important. You can't take the volume that you'd normally do in two workouts and try to cram it into one workout, because you would go well over your capacity for recovery, which might lead to injuries, or even a negative "return on investment". So I'd strongly advise you to stick to two workouts per week. This leaves us with the interval training. HIIT is cardio training and will mostly benefit endurance, fat burning, adaptations to oxidative stress, etc. It is not a substitute for weightlifting to induce strength gain and/or hypertrophy. That said, doing cardio alone is not enough to negate muscle gain. As long as you maintain a caloric surplus, with sufficient protein intake, muscle gain can occur. However, unless you are a beginner, or very overweight, gaining muscle while burning fat could prove challenging or impossible.
Hypertrophy requires a caloric surplus, while weight loss is only possible with a caloric deficit. Trying to do a "recomp" — slowly reducing body fat while gaining some muscle — requires very precise calorie intake, and the progress will likely be slow. It sounds like you want to make your workouts a bit more varied and include some HIIT for fat burn, or maybe save some time on your workouts. So maybe try this: do weightlifting twice a week, and do the HIIT training immediately after. It will have some benefits: You will already be warmed up for the HIIT. The weightlifting will have depleted some glycogen and glucose from your body, so you will shift more quickly into using fat stores once you get to the HIIT. You'll already be in sportswear and will only have to shower once after the workout. These may seem like minor details, but small things like that can end up taking quite a bit of time, and it's easy to forget to take them into account. Sticking to a weightlifting program that has some variation between the two days can also keep things interesting. When you try to improve on your previous workout every week, or have to work on periodization (if you're past the beginner stage), the whole thing becomes a lot more interesting than just getting to the gym every week and lifting "whatever". It becomes a game, where your opponent is you from the past week, every week. Good luck!
The Reactor Materials and Mechanical Design Group (RMMD) at Pacific Northwest National Laboratory (PNNL) advances fundamental materials science and provides the scientific basis for fusion and fission reactor materials development, light water reactor life extension, safe storage of spent fuel, and design of materials and systems for extreme operating conditions. RMMD is a world-class materials science and mechanical engineering organization with more than 40 highly qualified and experienced staff—including internationally recognized scientists and Fellows of professional societies—delivering impactful research for multiple sponsors. This postdoctoral associate will use molecular dynamics and data analytics to model atomic-level defect processes in complex oxides and titanium alloys. This position involves collaborating with other researchers performing computation at multiple scales to understand ion irradiation and thermal spike effects in alloys and the effects of microstructure on hydrogen and lithium diffusion in ceramics; validating the modeling with experimental observations; and developing new insights into the experimentally observed performance of alloys and ceramics under extreme operating conditions. The candidate must be capable of correlating atomic scale physical and chemical phenomena to the performance of materials. Initial job responsibilities will include running atomistic simulations and using data analysis algorithms on workstations and high-performance computer clusters. The incumbent will work closely with other scientists performing fundamental and applied research relating chemistry, stress, radiation damage, and elevated temperature to the performance of materials. An emphasis for this position is on the basic science study of nanoscale and mesoscale phenomena in materials subjected to elevated temperature and radiation damage. 
The computational work will be performed in collaboration with experimenters to refine models and provide guidance on data needs for future experiments. This position will involve research that covers a wide range of physical and chemical phenomena and is interdisciplinary in nature. Activities in this group address materials research needs related to atomic transport relevant to corrosion, high-temperature degradation, stress corrosion cracking, hydrogen compatibility, and radiation effects in materials. The group publishes high-quality research manuscripts, contributes oral and written papers at conferences, and participates in workshops on degradation mechanisms in materials. The position involves routine research activities as well as independent problem solving. Innovative approaches to materials research are prized. Responsibilities and Accountabilities: Early career professional who is building a professional reputation for technical expertise. Fully applies and interprets standard theories, principles, methods, tools and technologies within specialty. Independently sets up and runs simulations and performs data analysis. Contributes to technical content of proposals. Collaborates effectively with lab staff and researchers at other institutions. Prepares detailed technical reports, journal articles, and technical presentations. Minimum Qualifications Candidates must have received a PhD within the past five years (60 months) or within the next 8 months from an accredited college or university. Preferred Qualifications Strong computational background in molecular dynamics simulations of alloys or complex materials. Expertise in developing data-driven models. Familiarity with computational packages for atomic-level simulations, such as VASP, LAMMPS, VMD, and Ovito. Familiarity with Scikit-learn or TensorFlow for machine learning. Expertise in programming languages, such as C, C++, Python or FORTRAN.
Background in high performance parallel computing Knowledge of alloy degradation mechanisms is preferred. Ability to coordinate a variety of theory and modeling efforts between multiple institutions. Expertise in interpreting experimental results based on the findings of theory and modeling. Candidates must have received a PhD in Mechanical Engineering, Materials Science or a related discipline Equal Employment Opportunity Battelle Memorial Institute (BMI) at Pacific Northwest National Laboratory (PNNL) is an Affirmative Action/Equal Opportunity Employer and supports diversity in the workplace. All employment decisions are made without regard to race, color, religion, sex, national origin, age, disability, veteran status, marital or family status, sexual orientation, gender identity, or genetic information. All BMI staff must be able to demonstrate the legal right to work in the United States. BMI is an E-Verify employer. Learn more at jobs.pnnl.gov. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request via https://jobs.pnnl.gov/help.stm Please be aware that the Department of Energy (DOE) prohibits DOE employees and contractors from having any affiliation with the foreign government of a country DOE has identified as a “country of risk” without explicit approval by DOE and Battelle. If you are offered a position at PNNL and currently have any affiliation with the government of one of these countries you will be required to disclose this information and recuse yourself of that affiliation or receive approval from DOE and Battelle prior to your first day of employment. Other Information Technical Expertise and Breadth of Knowledge: Early career researcher who is building a professional reputation for technical expertise. Fully applies and interprets standard theories, principles, methods, tools and technologies within specialty. 
Continues developing technical expertise and knowledge. Develops new skills. Technical Judgment: Provides solutions to an assortment of problems using conventional methods where causal relationships are progressively difficult to establish. Technical Leadership and Capability Development: Recommends technical approach to solve problems subject to approval by senior staff. Business Development: Contributes to technical content of proposals.
https://pnnl.jobs/richland-wa/post-doc-research-associate-computational-materials-science/463D56A841854587BD8EE8087C8A719B/job/
Enter the number of G-forces and the radius of rotation into the calculator to determine the velocity. G-Force To Velocity Formula The following equation is used to calculate the velocity from G-forces. V = SQRT(GF * g * r) - Where V is the velocity (m/s) - GF is the number of g-forces - g is the acceleration due to gravity (m/s^2) - r is the radius of rotation (m) How to Calculate Velocity from G-Force? Example Problem: The following example outlines the steps and information needed to calculate velocity from G-force. First, determine the number of g-forces. In this example, the number of g-forces is found to be 6. Next, determine the radius of rotation. For this problem, the radius of rotation is found to be 10 m. Finally, calculate the velocity using the formula above: V = SQRT(6 * 9.81 * 10) ≈ 24.26 m/s.
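The worked example above can be reproduced in a few lines. Python and the value g = 9.81 m/s^2 are my choices here, not part of the calculator page:

```python
import math

def g_force_to_velocity(gf: float, radius_m: float, g: float = 9.81) -> float:
    """Velocity (m/s) that produces `gf` g-forces at a given radius of rotation."""
    return math.sqrt(gf * g * radius_m)

# Worked example from the text: 6 g at a 10 m radius
v = g_force_to_velocity(6, 10)
print(round(v, 2))  # 24.26 m/s
```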
https://calculator.academy/g-force-to-velocity-calculator/
I recently picked up a copy of Mastering the Art of French Cooking by the late and absolutely great chef Julia Child in preparation for a book I was, and still am, considering writing. While I've tried, so far, only one of her recipes–the first one, in fact, for Potage Parmentier (potato-leek soup)–what truly impresses me about this book is the absolutely simple, clear and understandable English that Julia used in writing it. No wonder it became the revolutionary cookbook that changed American cooking and eating habits. Julia Child was an American, of course, who found herself in France with her husband while he was on a diplomatic mission. She soon mastered not only the French language but also French cuisine, which she shares in Mastering the Art of French Cooking. Even if you have no intention of learning to cook French food, Julia's book–at least the introduction, in which she tells the story of her years in Paris–is a must-read to see how beautiful simply written English can be. Back in her day, people communicated largely by letter, since phones were still too expensive and still pretty scarce. People were forced to learn how to make themselves understood in writing. It clearly shows in this masterful work.
https://grammarsource.com/author/gary-mccarty/page/14/
How to Increase Employee Knowledge Retention It happens to all of us — you can sing along with a song you haven't heard in years, yet when you go to log in to your email you can't remember the password. Why does this happen? What makes us remember some things and forget others? According to cognitive psychology, memory formation involves two essential parts: encoding and retrieval. Put simply: encoding is when new information is “added” to your memory; retrieval is when you recall that information — i.e. when you remember it. When we forget something, it's usually because the information wasn't encoded properly. As learning professionals, we want to ensure employees don't forget what they've learned in training. Let's look at some ways we can ensure information is encoded correctly — and how we can improve retention of that knowledge. Increased Knowledge Retention = Increased ROI Unless we're running training to check a box (and we shouldn't be), we generally want people to remember & implement what they've learned. After all, you can't apply information you can't remember. Increasing the amount of information retained can also increase employees' confidence, further improving the chance they apply what they've learned. 8 Tactics to Improve the Retention of Learning After Training Improving knowledge retention involves both encoding and retrieval; we need to ensure the information presented in training is successfully encoded in employees' brains, and we need to assist them in retrieving that information. Let's look at some scientifically proven tactics for increasing the retention of knowledge after training. 1. Spread learning sessions out One way to improve knowledge retention is with distributed practice — in other words, spreading learning sessions out over time. We've all felt the adverse effects of massed learning: think of how burnt out you feel after a full-day training session, and how little you remember the next day.
This happens because there is too much information for the brain to process — and it “weeds out” what it judges to be less important. However, when we distribute learning sessions, there is significantly less information for the brain to process. This makes it easier for knowledge to be encoded. And, by spreading the practice over time, employees also need to practice recalling information they learned at previous practice sessions. This helps further encode the information, and improve recall rates. 2. Include practice tests Practice testing is another method that improves retention. It works by helping learners practice remembering learned information. “testing can enhance retention by triggering elaborative retrieval processes. Attempting to retrieve target information involves a search of long-term memory that activates related information, and this activated information may then be encoded along with the retrieved target, forming an elaborated trace that affords multiple pathways to facilitate later access to that information.” However, for practice testing to work, you need to ensure there are low to no stakes for learners. In other words, it has to be a learning activity, not an evaluation. 3. Use microlearning Making learning sessions shorter can also increase retention. Shorter sessions help reduce the amount of information the brain has to process at one time, making it easier for knowledge to be encoded. Microlearning also has the side benefit of reducing overall training time per employee, which can significantly reduce training costs due to lost productive hours. 4. Repeat Information in Different Formats Studies have also shown that appealing to a variety of senses during training can help improve learning and retention. By offering the same content in a variety of formats, you help to engage different senses and increase learning. 
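The distributed-practice idea (tactic 1 above) is often implemented as an expanding-interval review schedule. The doubling rule below is a common illustrative choice, not something the article prescribes:

```python
# Expanding-interval schedule for distributed practice (illustrative):
# each review roughly doubles the gap before the next one.
from datetime import date, timedelta

def review_schedule(start: date, sessions: int, first_gap_days: int = 1) -> list:
    dates, gap, day = [], first_gap_days, start
    for _ in range(sessions):
        day += timedelta(days=gap)
        dates.append(day)
        gap *= 2                    # double the spacing after each review
    return dates

sched = review_schedule(date(2024, 1, 1), 4)
print([d.isoformat() for d in sched])  # reviews on Jan 2, 4, 8, 16
```

Each session forces the learner to retrieve material encoded at the previous one, which is the mechanism the article attributes to spaced practice.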
Repetition of material can also help improve learning and retention much in the same way that practice testing does; by practicing recalling information already learned. 5. Mix topics up Combining different subjects — or “interleaved practice”— is another way to help encode learning in long-term memory. Mixing different topics forces learners to practice recalling information and applying it to different situations. “Another possible explanation [for why interleaved practice is effective] is based on the distributed retrieval from long-term memory that is afforded by interleaved practice. [...] for blocked practice, the information relevant to completing a task [...] should reside in working memory; hence, participants should not have to retrieve the solution. [...] By contrast, for interleaved practice, when the next type of problem is presented, the solution method for it must be retrieved from long-term memory.” 6. Include hands-on aspects to training Encouraging active participation instead of just passive engagement with training materials can also help encode knowledge. Studies have shown that our ability to remember things we hear is significantly less compared to our ability to remember things we see and touch. As one study found, “organized psychomotor participation increases the learning of a given technological concept. It can be generalized that hands-on activities are effective learning experiences for any applicable concept.” 7. Build upon existing information Scientists have shown that it’s easier to build upon existing knowledge than to start from scratch. The brain remembers information better if it can link new info to knowledge already encoded. You can take advantage of this in your training by building courses into “levels” around existing knowledge. By linking the courses and using each subsequent course to build on previously gained knowledge you can help the brain connect information — making it easier to learn and to remember (retain). 8. 
Use games Multiple studies have shown that a game-based approach to training not only makes learning more enjoyable but can improve knowledge retention. "Participants assigned to the game condition scored significantly higher on a retention test." By making training more enjoyable, employees are more likely to pay attention and care about what they are learning — even if the subject is not exciting. Game-based learning also uses a similar structure as microlearning, which helps break large topics into manageable “bite-sized” chunks, increasing retention and comprehension. The Bottom Line To increase knowledge retention, your training has to effectively encode new information and help employees practice retrieving it. Use a combination of psychological techniques like distributed practice, practice testing, interleaved practice, levels, microlearning, and repetition to help improve the retention of knowledge after training ends. Sources:
http://www.think.launchfire.com/home/increase-employee-knowledge-retention
Challenges in securing connected vehicles We have already mentioned the challenge of connected vehicles, and how they can be attacked remotely or physically. There is an immense need to isolate remote connectivity from the vehicle's internal communication. Usually, remote connectivity is limited to certain specific components in vehicles; eg the infotainment system should not have access to in-vehicle networks such as the CAN and FlexRay. AUTOSAR recommends that automotive cyber security architectural design must consider how to isolate, deploy and manage these connectivity interfaces in a secure way. What are the other challenges? Over-the-air updates Over-the-air refers to the technological means of delivering software and firmware updates to devices via wifi, mobile broadband and built-in functions in the device operating system (Infopulse 2019). The intelligent transportation system demands connectivity of vehicles and communication from vehicle-to-vehicle/infrastructure. Hence, connected and autonomous vehicles are the future. In general, vehicular connectivity is very similar to that of computers – vehicles have a very complex software architecture and a variety of applications to enable some of the new enhanced features. As time goes by, this software needs to be updated with new bug fixes or security patches for discovered vulnerabilities. In the automotive industry software updates are crucial, as vulnerabilities could be very dangerous for the safety and security of passengers. The challenge is that every vehicle cannot be brought back to the garage each time a patch becomes available. Many companies and researchers are working to find ways of performing secure over-the-air updates. Low computational power The computational power of vehicles is low, partly because the hardware must withstand environmental conditions such as humidity, vibration and temperature. The embedded computers (ie ECUs) are designed for specific functionalities.
Therefore, their computational power is limited by design, which can be an advantage for attackers, who can leverage the power of better computers. Also, as attack technology becomes more advanced while vehicles on the road become dated, hacking a vehicle becomes even easier. Difficult to monitor It is not feasible to monitor a vehicle if it is not connected. Whenever there is a problem with your car, you need to go to the garage for possible diagnostics, which can be very inconvenient. What if the vehicle were connected all the time and all the updates and diagnostics were done remotely? Cost Software testing is considered one of the most expensive phases of software development. To make a vehicle secure, it is important to perform exhaustive testing. Companies would need to employ more people and change their entire development process in order to incorporate security from the very beginning. No safety without security Just one infected car on the road represents a potential hazard for all the surrounding vehicles, and each new security vulnerability exposes new safety issues. It is important to secure all the functionalities of a single car to protect the rest. Data privacy With the advances in autonomous vehicle technology, more and more personal information (such as ID, position, biometric information) will be recorded in the vehicle and uploaded to the cloud. It is a challenging task to protect the integrity and confidentiality of this data throughout transmission, to prevent it from being intercepted or accessed by an unauthorised entity. In the final course for this module, we expand on some of the above challenges and explore other issues, like artificial intelligence in connected and autonomous vehicles. References Infopulse (2019) How to Design Secure OTA Firmware and Software Updates for Modern Vehicles.
[online] available from https://www.infopulse.com/blog/how-to-design-secure-ota-firmware-and-software-updates-for-modern-vehicles/ [17 December 2019] Further reading You may wish to read the following source: McAfee Labs Threats Reports.
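One ingredient of the secure over-the-air updates discussed above is ensuring a firmware image cannot be tampered with in transit. The HMAC-based check below is a minimal sketch under an assumed shared key (a real system would use asymmetric signatures and a PKI); none of the names come from the course:

```python
# Minimal OTA integrity check (illustrative): the ECU accepts a firmware
# image only if its authenticated digest matches the manifest signature.
import hashlib
import hmac

SHARED_KEY = b"demo-key"  # stand-in for real key material / PKI

def sign_manifest(firmware: bytes) -> bytes:
    digest = hashlib.sha256(firmware).digest()
    return hmac.new(SHARED_KEY, digest, hashlib.sha256).digest()

def verify_update(firmware: bytes, manifest_sig: bytes) -> bool:
    expected = sign_manifest(firmware)
    return hmac.compare_digest(expected, manifest_sig)

fw = b"ecu-firmware-v2.1"
sig = sign_manifest(fw)
print(verify_update(fw, sig))          # True: authentic image installs
print(verify_update(fw + b"x", sig))   # False: tampered image is rejected
```

`hmac.compare_digest` is used so the comparison runs in constant time, avoiding a timing side channel on the verification step.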
https://www.futurelearn.com/courses/basics-of-automotive-cyber-security/1/steps/621513
1Ayantika Nath*, Department of Electronics and Communication, Usha Mittal Institute of Technology, S.N.D.T Women’s University, Mumbai, India. 2Shikha Nema, Department of Electronics and Communication, Usha Mittal Institute of Technology, S.N.D.T Women’s University, Mumbai, India. Manuscript received on April 20, 2020. | Revised Manuscript received on April 30, 2020. | Manuscript published on May 10, 2020. | PP: 1297-1302 | Volume-9 Issue-7, May 2020. | Retrieval Number: G5943059720/2020©BEIESP | DOI: 10.35940/ijitee.G5943.059720 Open Access | Ethics and Policies | Cite | Mendeley © The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/) Abstract: Cutting-edge techniques have added great value to Artificial Intelligence (AI) and Machine Learning (ML), which are rapidly becoming areas of interest for many kinds of research. Clustering and dimensionality-reduction techniques are among the trending methods used in machine learning today. Fundamentally, clustering techniques such as K-means and hierarchical clustering are used to assign data points to the appropriate groups in a cluster format. Clustering can be used in recommendation frameworks, analysis of users on social media platforms, categorizing patients with particular diseases into specific age groups, and so on. Dimensionality-reduction methods such as Principal Component Analysis and Linear Discriminant Analysis are somewhat similar to clustering methods, but they reduce the size of the data before the clusters are plotted. In this paper, a comparative and predictive analysis is performed using three datasets from the UCI machine-learning benchmark, namely IRIS, Wine, and Seed, on four distinct techniques. The class-prediction analysis of each dataset is done using a Flask app.
The main aim is to form a good clustering pattern for each dataset under the given techniques. The experimental analysis computes the accuracy of the formed clusters using different machine learning classifiers, namely Logistic Regression, K-nearest neighbors, Support Vector Machine, Gaussian Naïve Bayes, Decision Tree Classifier, and Random Forest Classifier. Cohen's kappa is used as an additional accuracy indicator to compare the obtained classification results. It is observed that K-means and hierarchical clustering analysis provide a better clustering pattern of the input dataset than the dimensionality-reduction techniques. The clustering design is well formed under all the techniques. The KNN classifier provides improved accuracy across all the techniques and datasets. Keywords: Unsupervised Clustering, Machine Learning Classifiers, Flask-app, UCI datasets.
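A sketch of the kind of pipeline the abstract describes, on the Iris dataset: cluster the data, map each cluster to its majority true class, then score with accuracy and Cohen's kappa. The scikit-learn calls and the majority-vote mapping are my assumptions about the implementation details, not taken from the paper:

```python
# Cluster Iris with K-means and hierarchical clustering, then evaluate
# the formed clusters against the true labels (accuracy and Cohen's kappa).
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score, cohen_kappa_score

X, y = load_iris(return_X_y=True)
models = {
    "K-means": KMeans(n_clusters=3, n_init=10, random_state=0),
    "Hierarchical": AgglomerativeClustering(n_clusters=3),
}
for name, model in models.items():
    labels = model.fit_predict(X)
    mapped = np.empty_like(labels)
    for c in np.unique(labels):          # majority-vote: cluster id -> class
        mapped[labels == c] = np.bincount(y[labels == c]).argmax()
    print(name, round(accuracy_score(y, mapped), 3),
          round(cohen_kappa_score(y, mapped), 3))
```

Cluster labels are arbitrary integers, so the majority-vote remapping is needed before comparing them against the ground-truth classes.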
https://www.ijitee.org/portfolio-item/g5943059720/
Industry and traffic emissions – such as those from trucks, buses, and automobiles – are the biggest source of air pollution. Airborne by-products from vehicle exhaust systems cause air pollution and are a major ingredient in the creation of smog in some large cities. The major culprits from transportation sources are carbon monoxide (CO), nitrogen oxides (NO and NO2, collectively NOx), volatile organic compounds, sulfur dioxide, and hydrocarbons. (Hydrocarbons are the main components of petroleum fuels such as gasoline and diesel fuel.) These molecules react with sunlight, heat, ammonia, moisture, and other compounds to form the noxious vapors, ground-level ozone, and particles that comprise smog. The most common combustion engines are gasoline (petrol) engines and diesel engines. Modern gasoline engines have a maximum thermal efficiency of about 25% to 30% when used to power a car. In other words, even when the engine is operating at its point of maximum thermal efficiency, about 70-75% of the total heat energy released by the gasoline consumed is rejected as heat without being turned into useful work, i.e. turning the crankshaft. Approximately half of this rejected heat is carried away by the exhaust gases, and half passes through the cylinder walls or cylinder head into the engine cooling system, and is passed to the atmosphere via the cooling system radiator. Some of the work generated is also lost as friction, noise, air turbulence, and work used to turn engine equipment and appliances such as water and oil pumps and the electrical generator, leaving only about 25-30% of the energy released by the fuel consumed available to move the vehicle. At idle, the thermal efficiency is zero, since no usable work is being drawn from the engine.
At low speeds, gasoline engines suffer efficiency losses at small throttle openings from the high turbulence and frictional (head) loss when the incoming air must fight its way around the nearly closed throttle; diesel engines do not suffer this loss because the incoming air is not throttled. At high speeds, efficiency in both types of engine is reduced by pumping and mechanical frictional losses, and the shorter period within which combustion has to take place. Engine efficiency peaks in most applications at around 75% of rated engine power, which is also the range of greatest engine torque (e.g. in most modern passenger automobile engines with a redline of about 6,000 RPM, maximum torque is obtained at about 4,500 RPM, and maximum engine power is obtained at about 6,000 RPM). At all other combinations of engine speed and torque, the thermal efficiency is less than this maximum. A gasoline engine burns a mix of gasoline and air, consisting of a range of about twelve to eighteen parts (by weight) of air to one part of fuel (by weight). A mixture with a 14.7:1 air/fuel ratio is said to be stoichiometric; that is, when burned, 100% of the fuel and the oxygen are consumed. Mixtures with slightly less fuel, called lean burn, are more efficient. The combustion is a reaction which uses the air’s oxygen content to combine with the fuel, which is a mixture of several hydrocarbons, resulting in water vapor, carbon dioxide, and sometimes carbon monoxide and partially burned hydrocarbons. In addition, at high temperatures the oxygen tends to combine with nitrogen, forming oxides of nitrogen (usually referred to as NOx, since the number of oxygen atoms in the compound can vary, thus the “X” subscript). This mixture, along with the unused nitrogen and other trace atmospheric elements, is what we see in the exhaust. In the past 3-4 years, GDI (Gasoline Direct Injection) has increased the efficiency of engines equipped with this fueling system to as much as 35%.
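The energy split described above can be checked with simple arithmetic. The 1000 kJ fuel figure below is an arbitrary illustrative number; the 30% efficiency and the roughly 50/50 split of rejected heat come from the text:

```python
# Energy budget of a gasoline engine at the upper end of the quoted
# 25-30% thermal efficiency, split as described in the text.
fuel_energy_kj = 1000.0               # heat released by the fuel (example figure)
thermal_eff = 0.30

useful_work = fuel_energy_kj * thermal_eff    # turns the crankshaft
rejected = fuel_energy_kj - useful_work       # rejected as heat
exhaust_heat = rejected / 2                   # carried away by exhaust gases
cooling_heat = rejected / 2                   # through walls/head to the radiator

print(useful_work, exhaust_heat, cooling_heat)  # 300.0 350.0 350.0
```

At the stoichiometric 14.7:1 ratio mentioned below, burning the fuel that releases this energy would also require about 14.7 kg of air per kg of fuel.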
Currently, the technology is available in a wide variety of vehicles, ranging from less expensive cars produced by Mazda, Ford and Chevrolet to more expensive cars produced by BMW, Mercedes-Benz, and the Volkswagen Auto Group. Engines using the Diesel cycle are usually more efficient, although the Diesel cycle itself is less efficient at equal compression ratios. Since diesel engines use much higher compression ratios (the heat of compression is used to ignite the slow-burning diesel fuel), that higher ratio more than compensates for air pumping losses within the engine. Modern turbo-diesel engines use electronically controlled, common-rail fuel injection, which increases efficiency to as much as 50% with the help of variable-geometry turbocharging; this also increases torque at low engine speeds (1,200–1,800 RPM). This low efficiency, together with incomplete combustion, is why internal combustion engines emit so much pollution. Carbon deposits on piston heads, valves, piston rings and the EGR system not only reduce engine efficiency but also contribute to higher fuel consumption and even higher levels of harmful emissions. It is therefore important to use specialized cleaners regularly to keep the combustion system clean; such cleaners are available for both gasoline and diesel engines and can help bring the engine back toward factory standards.
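The compression-ratio argument can be checked against the ideal air-standard cycle formulas. This is a sketch of the textbook relations only; real engines fall well short of these ideal efficiencies, and the cutoff ratio of 2 is an assumed illustrative value:

```python
# Air-standard (ideal) cycle efficiencies, illustrating the compression
# ratio argument above. Real engines fall well short of these numbers.
GAMMA = 1.4  # heat capacity ratio for air

def otto_efficiency(r):
    """Ideal Otto (gasoline) cycle efficiency at compression ratio r."""
    return 1 - r ** (1 - GAMMA)

def diesel_efficiency(r, cutoff=2.0):
    """Ideal Diesel cycle efficiency at compression ratio r and a given cutoff ratio."""
    return 1 - (1 / r ** (GAMMA - 1)) * (cutoff ** GAMMA - 1) / (GAMMA * (cutoff - 1))

# At an equal compression ratio the Diesel cycle is less efficient...
assert diesel_efficiency(18) < otto_efficiency(18)
# ...but a diesel's much higher usable ratio more than compensates:
assert diesel_efficiency(18) > otto_efficiency(10)
```

With these assumed values, the ideal Diesel cycle at r = 18 beats the ideal Otto cycle at a gasoline-typical r = 10, even though it loses to Otto at the same ratio.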
http://www.maxxlube-automotive.com/about-us/
A. Vinod Kumar replies: Strategic Deterrence has traditionally (especially during the Cold War) been associated with nuclear weapons – the possession of the capability to undertake unacceptable destruction, and deterring the adversary by posturing the ability and intent to do so. The strategic environment of the post-Cold War period, however, witnessed the advent of newer threats beyond the realm of nuclear deterrence. These include the emergence of non-state actors, greater diffusion of weapons of mass destruction (WMD) technologies, and newer technological dimensions like cyber and missile defence. This is besides the attempts made by some nuclear aspirants to progress towards nuclear latency that could enable faster break-out to a weapon capability. As a concept, strategic deterrence denotes a politico-military posturing of capabilities (military power and technology) and doctrinal principles that represents the grand strategy of the nation. Unlike in the Cold War years, when strategic deterrence centred on offensive nuclear forces proclaiming postures of aggression and conquest, the current scenario indicates a shift towards defensive strategies (and deterrence) that underline a state's eagerness to use a multitude of capabilities (including nuclear and non-nuclear platforms) to defend against a wide spectrum of threats. The consequent shift to defensive deterrence is marked by the increasing presence of defensive platforms like missile defence, the centrality attained by cyber, and the diminishing primacy of nuclear weapons in strategic planning. Credible Minimum Deterrence is a composite posture adopted by some nuclear-armed states (especially India and Pakistan) to convey a non-aggressive and defensive nuclear stance by projecting a nuclear arsenal that fulfils the bare needs of defence and security.
Accordingly, it implies that the nuclear arsenal will be minimal enough to provide credible deterrence against adversaries. While 'minimum' (the number of warheads and delivery systems at a given point of time) can be dynamically driven by the strategic environment (the perceived strength or build-up of rival arsenals), the question of 'credibility' is based on perceptions – whether the adversary has been 'effectively deterred', or whether the capability to impart 'unacceptable damage' has been convincingly conveyed to the adversary. India and Pakistan have both seen their deterrence goalposts constantly shifted as a result of their mutual security dilemmas, as well as the strategic modernisation pursued by China. Full Spectrum Deterrence is a concept that has been subject to different interpretations depending on what the actor seeks to posture. In recent years, Pakistan has declared its reliance on a full-spectrum deterrence posture, which entails the development of capabilities (nuclear weapons and delivery systems) of various descriptions to cover a 'full spectrum' – tactical nuclear weapons at the lowest level, a second-strike capability by equipping conventional submarines with nuclear-tipped missiles, and cruise missiles to beat the Indian missile defences. This could be seen as a refinement of Pakistan's earlier postural conception of a full spectrum of theatres, namely sub-conventional, conventional and nuclear. While the latest objective is to enhance the credibility of the deterrent through systems for each of the threat scenarios, the overall consequence is the expansion of capabilities to cater to all the conceived theatres. Similarly, North Korean posturing also talks about placing nuclear weapons as the pivot force to deal with the full spectrum of threats, described as 'rounding off the combat posture' in their documents.
https://idsa.in/askanexpert/credible-minimum-deterrence-strategic-deterrence-full-spectrum-deterrence
There's never been a culture without art. Never been a culture without poetry. Never been a culture without music. They must be delivering something to us that we really need for our psyches. – Edward Hirsch

Sleep – check. Presence – check. Activity – check. Let's continue our look at S.P.A.C.E. (Sleep, Presence, Activity, Creativity, and Eating). This week the focus is Creativity. Studies have demonstrated the efficacy of artistic expression in improving immune function and reducing stress, fatigue, and pain. These studies have been done on many age groups and with different types of illness, from PTSD to cancer. Researchers are still trying to understand the reasons that the arts have such an impact on health and well-being, but emotional expression is clearly an important piece of the puzzle. Creative activities allow us to express our emotions and become more vital. And not just when we're the ones doing the painting, writing, dancing or singing, but also when we're the ones doing the appreciating. Researchers have found positive effects on both physical and mental health not only from creating but also from watching a film, going to an art museum or listening to a concert. Creative expression is not only found in traditional arts such as movies, music, painting, writing, poetry, dance, and theater. In fact, an important step toward becoming more creative is to observe and take part in the creativity that can be found in daily life. Researchers talk about two essential elements of a creative endeavor: novelty and usefulness. A teacher may concoct a new lesson plan (novel) that results in higher student performance (useful).
https://content.govdelivery.com/accounts/ORMARION/bulletins/1d9af87
English Language skills: IELTS: 6, STEP: 75, or TOEFL (PBT: 500; IBT: 61). Education: Master's Degree in Health Sciences or Health Management. • Knowledgeable in Microsoft Office, SAS or SPSS statistical analysis, and qualitative data coding software, e.g. NVIVO. Work Experience: Minimum of three (3) years' experience in clinical research processes, scientific methods, or a related field. • Identifies or designs appropriate instrumentation (e.g., surveys, knowledge assessments, interviews and observation protocols) for responding to research and evaluation questions. • Collects and reviews quantitative and qualitative data using appropriate techniques to ensure timeliness, reliability and validity. • Conducts quantitative data analyses, through the use of descriptive statistics as well as more advanced statistical modeling or qualitative analyses, as appropriate. • Oversees the development and implementation of a comprehensive strategic research plan to ensure policies and technology systems are of the highest quality. • Designs and conducts analyses and provides key strategy and policy recommendations. • Liaises with internal and external investigators or research centers to maintain and improve the research collection systems. • Initiates policy and process design of research reports to meet a variety of needs and acts as a subject matter expert in data analysis and interpretation. • Coordinates and manages enhancements, standardization and maintenance of data systems across the departments. • Creates and implements work plans and provides input on analysis and systems development. • Analyzes internal and external data and produces a wide range of reports. • Participates with research teams in the execution of high-quality, multi-level, multi-method research and evaluations. • Assists and participates in preparation of research schedules, and manages research activities, including, but not limited to, interviews, focus groups and survey administration.
• Attends meetings and contributes to research publications or presentations as appropriate. • Reviews written summaries of the research work as required. • Evaluates preliminary data using standard research techniques. • Provides research support for work being undertaken by other senior members of the department. You need to be registered in order to apply to this vacancy. Please click apply and proceed with registration if applicable.
https://careers.ksau-hs.edu.sa/Vacancies/Vacancy/bb8dff60-6fd4-e911-80dd-00155dcd2340
Over the past decade, two movements have profoundly changed the environment in which global health epidemiologists work: research integrity and research fairness. Both ought to be equally nurtured by global health epidemiologists who aim to produce high-quality impactful research. Yet bridging between these two aspirations can lead to practical and ethical dilemmas. In the light of these reflections we have proposed the BRIDGE guidelines for the conduct of fair global health epidemiology, targeted at stakeholders involved in the commissioning, conduct, appraisal and publication of global health research. The guidelines follow the conduct of a study chronologically from the early stages of study preparation until the dissemination and communication of findings. They can be used as a checklist by research teams, funders and other stakeholders to ensure that a study is conducted in line with both research integrity and research fairness principles. In this paper we offer a detailed explanation for each item of the BRIDGE guidelines. We have focused on practical implementation issues, making this document of most interest to those who are actually conducting the epidemiological work. This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made are indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Introduction Over the past decade, two movements have profoundly changed the environment in which global health epidemiologists work: research integrity and research fairness. On one hand, questionable research practices may lead to spurious findings if studies are ill-designed, poorly implemented, inappropriately analysed or selectively reported. On the other hand, local communities, institutions and researchers are too often side-lined from the formulation of research questions, the design and implementation of studies and the dissemination of findings.
Taking advantage of weak or nonexistent ethics institutions, bypassing local expert knowledge, ignoring local context, and failing to develop in-country capacity are some of the practices which devalue global health epidemiology. The BRIDGE statement As we have argued in the BRIDGE statement paper,1 research integrity and research fairness need to be equally nurtured by global health epidemiologists who aim to produce high-quality impactful research. Yet bridging between these two aspirations can lead to practical and ethical dilemmas. In the light of these reflections, we have proposed guidelines for the conduct of fair global health epidemiology, targeted at stakeholders involved in the commissioning, conduct, appraisal and publication of global health research. The BRIDGE guidelines were developed by a Delphi consensus with global health practitioners from over 20 countries across 5 continents. Our aim was to bring together existing principles in one overarching guideline, with a focus on practical implications for global health practitioners. The outcome consists of a set of 6 standards and 42 accompanying criteria covering the following steps of a study: (1) study preparation; (2) study protocol and ethical review; (3) data collection; (4) data management; (5) analysis; (6) dissemination and communication. How to use this paper This paper is linked to the BRIDGE statement paper1 that introduced the BRIDGE guidelines and described the justification and methodology for their development. The guidelines follow the conduct of a study chronologically from the early stages of study preparation until the dissemination and communication of findings. They can be used as a checklist by research teams, funders and other stakeholders to ensure that a study is conducted in line with both research integrity and research fairness principles. In this paper, we offer a detailed explanation for each item of the BRIDGE guidelines.
We have focused on practical implementation issues, making this document of most interest to those who are actually conducting the epidemiological work. This document is not necessarily meant to be read linearly from start to finish, but should rather serve as a source of further reading for readers interested in more in-depth discussion and justification for each item. A glossary can be found in online supplemental file 1 for all terms that are underlined. The items Standard 1. Study preparation: carefully prepare the study, in partnership with local researchers, by taking into account existing knowledge and resources and engaging with key stakeholders 1.1. Plan and execute research in partnership with local researchers. When working in a setting where relevant epidemiological competences are limited or not available, consider what is in the study team's remit to strengthen local capacity Global health research is rarely conducted by an organisation in isolation, but is the result of collaboration across different disciplines, expertise and countries. This often translates into research partnerships between institutes or organisations from high-income and low-income settings. These partnerships should always be established in a way that is highly advantageous for both parties.2 Fair epidemiological research means that the local relevance of the research should be determined in collaboration with local partners.2 3 Fair research partnerships also entail transparent and open communication between parties throughout the research process, from the early planning stages to the communication of findings. On one hand, it is important that local human resources for health are not depleted to provide staff for the research project (eg, nurses or laboratory staff).3 On the other hand, lack of existing local capacity should not be viewed as a reason to forego such partnerships.
Rather, when working in a setting where relevant epidemiological competences are limited or not available, epidemiologists from high-income countries should consider what is in the study team's remit to strengthen local capacity in order to meaningfully engage local researchers. The extent of this capacity strengthening should be commensurate with the scope of the research project and match the current professional needs and ambitions of the people involved locally. Capacity strengthening activities may include, but are not limited to, establishing and/or strengthening ethics review committees, strengthening research capacity, developing relevant technologies, training research or healthcare staff and educating the community involved in the research. These activities may be extended to more specialised domains of epidemiology. 1.2 Identify and engage key stakeholders throughout the study with approaches based on their needs, competences and expectations. Key stakeholders include representatives of affected populations and end-users of research A stakeholder is anyone who has a 'stake' or an 'interest' in a particular initiative. In global health research, stakeholders may include: the members of the community where the research is conducted (the affected populations), the community at large (at national or global level), local implementers (eg, local government or healthcare workers), national policymakers and policy implementers from governmental and non-governmental organisations, the scientific community who can benefit from the research, and drivers of international policy, including bilateral and multilateral agencies. Key stakeholders in a global health research study include the representatives of the affected populations and the end-users of the research. Stakeholders are increasingly claiming their right to be 'engaged' – that is, informed, consulted and involved – in the decision-making processes of research which affects them.
Identifying the relevant stakeholders will therefore be the first step of a research project. Depending on the scale of the research, this can be done fairly quickly and informally, in groups using a participatory approach, or more rigorously using structured approaches for stakeholder mapping.4 The method of engagement should be selected to best meet the needs, capacity and expectations of the relevant stakeholders, as well as the strength of engagement sought, which can range from: (1) remain passive; (2) monitor; (3) advocate; (4) inform; (5) transact; (6) consult; (7) negotiate; (8) involve; (9) collaborate; (10) empower.5 1.3 Establish the knowledge gap by searching the literature (peer-reviewed publications and grey literature) as well as by consulting (local) experts, representatives of affected populations and end-users A systematic literature review provides a complete, exhaustive summary and appraisal of the current literature on a specific topic. While this is recommended whenever possible, there may not always be time and resources available for such an exercise. In such cases, a literature review which thoroughly summarises the topic can suffice. A (systematic) literature review may show that there is no knowledge gap to be filled and that the study is redundant. Alternatively, it may uncover useful sources of published information, which can form the basis of an analysis without the need for any new data collection – with all costs and burden to participants avoided. Even when new data collection remains necessary, experience from related studies may guide the design or indicate pitfalls to be avoided. Depending on the complexity of the study and the amount of information already available on the topic to be studied, an exploratory needs assessment with key stakeholders may be warranted.
Such an exercise can also improve the understanding of the research topic by different stakeholders (community, health facility staff, local administration, central ministry, other governmental bodies, donors, etc) and point towards the disciplines that need to be included in the study. Affected populations should also be consulted to ensure that their perspectives are fairly represented.2 3 6 1.4 Develop research questions and objectives in consultation with research partners and expected end-users Research questions should be jointly formulated by all research partners involved.2 The most relevant research questions are those which address specific local issues. End-users should therefore be consulted early in the study design to ensure that the research questions respond to their information needs. This can help ensure that the proposed research is in line with existing national research agendas or priorities.7 1.5 Select study design and research methods to best fulfil the study objectives and give due consideration to multidisciplinary approaches Before embarking on any global health epidemiological study, researchers should consider whether they have incorporated the right disciplines to answer the proposed research questions. Global health research questions are often complex and multilayered, as the issues at hand often involve many stakeholders. This requires collaboration between disciplines beyond the biomedical sciences.8 As a result, in global health, epidemiological studies are, more often than not, conducted alongside or integrated with other quantitative (eg, economics, mathematical modelling, machine learning) or qualitative disciplines (eg, anthropology, sociology, political sciences). Multidisciplinary research is well suited to study multiple types of outcomes and provides a holistic understanding of causal pathways.
While quantitative methods quantify change over time and associations (along with an estimate of the role of chance), qualitative methods are best suited to understanding people's judgements, perceptions and preferences, thereby providing insights into the reasons behind changes or associations, or the lack thereof. 1.6 Before embarking on primary data collection, assess whether existing data could be used, fully or partly, to fulfil the research objectives During the planning stage, it is important to reflect on the need for primary data collection. What level of investment is permissible and justified, and to what extent can (re-)analysis of existing quantitative data and an appropriate mix of qualitative and quantitative methods address the knowledge gap? These decisions need to balance making the best use of the funds available against not burdening people with unnecessary data collection. The past decade has seen a huge increase in the amount of publicly available data for research, which offers tangible opportunities to forgo primary data collection in favour of secondary analyses of existing data. First, as discussed in criterion 6.6, open data sharing initiatives have resulted in numerous repositories where data can be accessed for re-analysis. Second, many nationally representative health surveys are available for re-analysis, thanks to the efforts of organisations such as UNICEF and the United States Agency for International Development, who have made many datasets from their Multiple Indicator Cluster Surveys (https://mics.unicef.org/) and Demographic and Health Surveys (https://dhsprogram.com/) openly accessible online. Lastly, health service data are also increasingly available to global health researchers, as health management and information systems data are digitalised through the DHIS2 platform (https://www.dhis2.org/) and other similar efforts.
1.7 Ensure data ownership and publication agreements have been agreed by all research partners Agreements for data ownership, storage and access should be made during the preparatory phase of the study. Data sharing agreements detail the understanding between the data provider and data receiver with regard to what data are shared and the associated conditions of use. Within the frame of research, they should include provisions on the right to publish results. As mentioned in criterion 1.6, global health research often entails the re-analysis of health service data. In such cases, it is advisable that those who share the data also request researchers' compliance with the terms of a bespoke data-sharing agreement. Much global health research is conducted in academia, where peer-reviewed scientific publications remain the primary metric for career progression. This in itself is the source of much research unfairness, and of the frequent (conscious or unconscious) bypassing of local researchers in the preparation of scientific publications. To counter that, it is important that fair agreements are made early in the research process, considering the professional development of all partners involved equally. While it is very difficult to agree authorship before the work really gets going (because one rarely knows exactly how much each potential author will contribute), the principles of authorship can still be agreed during the preparatory stages. The International Committee of Medical Journal Editors provides guidance on authorship,9 which is endorsed by the vast majority of scientific journals. Yet it places heavy emphasis on the actual writing of the manuscript, and it has been argued that this systematically disadvantages researchers from low-income and middle-income countries in global health research partnerships.10 1.8 Agree on work plans and governance structures with all study partners.
Allocate adequate time, financial and human resources to all phases of the study It is important that decision-making processes are clarified before the actual research starts. The roles and responsibilities of all parties involved should be transparently and fairly agreed in writing, so that study team members are on the same page in terms of expectations and contributions. This should include decision-making procedures in the event of disagreement. The RACI model is a useful way to do this, focusing on the four responsibilities most typically used: responsible, accountable, consulted and informed.11 In larger multistakeholder studies, oversight bodies may be needed to advise on and oversee the study conduct. Clear study plans should be developed to ensure that adequate time, financial and human resources are available for all phases of the study. All team members should have a valid role and adequate resources in the project to fulfil that role. Trained and experienced local health professionals may possess the perfect skills mix for a given research position, but their recruitment needs to be balanced against the potential health-system-weakening risk of depleting the local human resources for health. While it should go without saying that local research teams should be fairly remunerated for their contribution,3 it is important to pay special attention to the working conditions of those in the lower echelons of the research hierarchy. This includes field staff (eg, interviewers, supervisors and field data editors). Mitigating the precarious nature of their freelance casual labour should be considered where possible, ideally with mid-term to long-term solutions such as long-term contracts and opportunities for career progression. This can take the form of online courses and qualifications that can be embedded into their roles within the research team, depending on their needs and aspirations.
Long-term contracts for field staff should also consider employment benefits such as health insurance and pension planning (as appropriate for the context), and these should be budgeted for in the financial planning for a study. Precise budget estimates are not always part of a grant application process, but careful estimation of the different costs during the application stage is beneficial for the practical implementation of a research project. It is important to take currency fluctuations, inflation and uncertainty into account when preparing the budget, especially in fragile and conflict-affected settings. Standard 2. Protocol development: prepare a detailed research protocol and ensure it has been approved by relevant ethical review boards if it includes research concerning human participants 2.1 Prepare a detailed research protocol in consultation with all research partners The study protocol describes in detail all steps of a proposed study. Its two primary purposes are funding acquisition and ethical approval. A number of templates are available to guide protocol writing.12 13 While protocol writing may be led by one party in research partnerships, it is important that all parties are engaged and given a fair opportunity to contribute. All parties should explicitly agree with their roles and responsibilities in the protocol. 2.2 Write a clear and comprehensive analysis section The study protocol may provide an overview of the planned analyses by describing the purpose of the study, the primary hypotheses, the design, the source population and a general description of the chosen analytical strategy. Depending on the complexity of the analyses, it may be advisable to write a stand-alone analysis plan. The purpose of a statistical analysis plan (SAP) is to ensure transparency and to minimise type I and type II errors resulting from the analysis strategy (eg, multiple testing, choice of confounders, etc) that would otherwise affect inferential reproducibility.
A minimum set of items to be included in a SAP for randomised clinical trials is available.14 This is not yet the case for observational studies, but a recent paper has suggested a modification of the recommended SAP format for clinical trials to fit observational studies.15 Broadly, the suggested items to cover in a statistical analysis plan include:
• Primary and secondary research questions and hypotheses, as well as details of the primary and secondary outcome measures, and how these relate to the study objectives.
• Sampling procedure and recruitment/retention methods, detailing the sampling method, the planned recruitment rate, the likely rate of loss to follow-up, interim analyses and stopping guidance (where applicable).
• Sample size justification, including a description of the power and sample size calculations detailing the outcome measures on which these have been based, as well as any assumptions underlying the power calculation and justification for these assumptions.
• Considerations about multiple testing, explaining how false positive findings as a result of repeated subgroup analyses will be minimised.
• Potential confounders and effect modifiers, which should be defined, with approaches specified on how to address their effects.
• Analysis strategy, describing how the results of the study will be analysed, including the use of statistical and/or mathematical models.
2.3 Consider studying the effect of locally relevant equity dimensions With its focus on 'achieving equity for all people', global health acknowledges that social determinants have a major impact on health.16–19 From an epidemiological perspective, this implies disaggregating analyses in order to reveal patterns that may be masked by aggregate data.
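A toy example of why disaggregation matters: an aggregate figure can look acceptable while hiding a large gap between groups. The groups and numbers below are invented purely for illustration:

```python
# Sketch: an overall coverage estimate can mask a large equity gap.
# The records below are invented illustrative data, not from any study.
records = [
    {"group": "urban", "covered": 450, "total": 500},
    {"group": "rural", "covered": 150, "total": 500},
]

# Aggregate coverage across both groups combined
overall = sum(r["covered"] for r in records) / sum(r["total"] for r in records)

# Coverage disaggregated by group
by_group = {r["group"]: r["covered"] / r["total"] for r in records}
```

Here the aggregate coverage is 60%, which conceals a threefold gap between the groups (90% vs 30%); only the disaggregated analysis reveals it.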
Factors that may affect health opportunities and outcomes include place of residence, race, ethnicity, culture, language, occupation, gender/sex, religion, education, socioeconomic status and social capital—as described by the PROGRESS acronym and framework.20 Sex/gender have been the subject of much attention21–27 as gender is known to ‘intersect’28 with other social determinants, creating interdependent systems of disadvantage.27 While intersectionality originates from gender studies, it is increasingly being proposed as a framework to study health equity in public health.29–31

There are a number of practical and statistical considerations related to studying equity in global health epidemiology. First, a thorough understanding of the local context is crucial to identify the relevant equity dimensions. Second, researchers need to be very mindful of the causal mechanisms they intend to study when using equity variables and cognisant of the potential for spurious results when using proxy variables.32 Race and sex/gender are particularly challenging dimensions, with both biological (hereditary, genetic) and social (differentials in access to care) mechanisms at play. This further emphasises the importance of working within multidisciplinary frameworks (with either biological or social sciences or both in these examples) to ensure a comprehensive understanding of the issues at hand (ref statement paper). Third, the choice of equity dimensions will have many practical implications for study design, ranging from sample size (more equity dimensions usually mean more confounders and interactions33 34 and therefore a larger sample size) to sampling procedures (choice of sampling frame and definition of inclusion and exclusion criteria), research instruments and field procedures (which should be culturally appropriate and safe, as described in criteria 3.2, 3.4 and 3.6).
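As a purely illustrative sketch of the sample size implications mentioned above, the following calculation (hypothetical prevalences, conventional alpha and power) uses the normal approximation for comparing two proportions and shows how disaggregating by an equity dimension multiplies recruitment needs:

```python
# Illustrative sample size sketch for a two-sided two-proportion z-test.
# The prevalences (30% vs 20%), alpha and power are hypothetical
# placeholders, not values from any real study.
from math import ceil
from statistics import NormalDist

def n_per_group(p1: float, p2: float,
                alpha: float = 0.05, power: float = 0.8) -> int:
    """Sample size per group, normal approximation for two proportions."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

base = n_per_group(0.30, 0.20)   # one overall comparison
print(base)
# Disaggregating by a hypothetical equity dimension with 4 strata (same
# detectable effect within each stratum) multiplies recruitment needs:
print(4 * base)
```

The point is not the exact formula but that each additional planned subgroup comparison should be reflected in the sample size justification of the SAP.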
2.4 When conducting multidisciplinary research, describe the purpose and strategies to integrate different analytical methods in the protocol

As described in criterion 1.5, addressing today’s global health challenges frequently requires the involvement of different scientific disciplines, including but not limited to medicine, epidemiology, social sciences, economics and environmental sciences. Protocol writing is a team effort which requires the expertise of all disciplines involved.35 36 Multidisciplinary research protocols should describe the purpose of combining different disciplines and also include strategies to integrate relevant qualitative and quantitative analytical methods.37 Multidisciplinary research is typically conducted through several iterations of analyses, in which analyses within each discipline are first conducted independently and subsequently build on one another. To maximise the success of a multidisciplinary approach, study plans need to include regular moments of reflection with peers from the other disciplines throughout all study phases (especially in design, analysis and interpretation).37 38

2.5 Strive to make study protocols publicly available, either on a publicly accessible website or in appropriate study registers

Public availability of research protocols is one of the cornerstones of research integrity as it helps prevent post hoc revisions of study aims. Protocols can either be placed on a publicly accessible website or uploaded to an appropriate study register.39 40 An increasing number of journals now also offer the possibility to publish protocols, with the guarantee that study results will be published regardless of whether they show ‘positive’ or ‘negative’ results. An advantage of this option is that it also enables peer review of the research protocol.

2.6 For all data collection and data use concerning human participants, obtain ethical approval (or a waiver) ideally from all institutions and countries involved in the protocol.
In case of multiple reviews and disagreement, the review of the country where the data are collected should take precedence

It is not always easy to determine whether a study needs ethical review, as the boundary between research and public health practice can be blurred. Indeed, a recent review of ethical guidelines for epidemiology has shown that not all epidemiological or public health studies require an ethics review.41 As a general rule, all studies involving primary data collection from human participants need to be reviewed ethically and scientifically by a competent and independent research ethics committee (REC) prior to the start of data collection.42 43 Studies which perform secondary analysis of existing data may also require ethical review if the analyses fall outside the scope of the informed consent provided (or if no informed consent was provided).43 Ethical review includes a thorough review of informed consent forms—ideally in the language in which they will be administered. Guidance for the formulation of informed consent forms can be found in the updated CIOMS 2017 guidelines.43 While each REC may have its own templates, generic templates are also available.12 The latest CIOMS Ethical Guidelines for Health-related Research involving Humans ask for dual ethical approval for studies conducted by partnerships involving high-income and low-income and middle-income countries ‘at the site of the sponsor as well as locally’.43 The intention is to prevent ‘ethics dumping’3—that is, the export of unethical research practices from high-income to low-income settings. However, it can also be argued that insisting on dual review perpetuates colonial notions that RECs in low-income countries cannot be relied on.
Certainly, ‘researchers from high-income settings should show respect to host country REC’3 and ‘research projects should be approved by a REC in the host country, wherever this exists, even if ethics approval has already been obtained in the high-income setting’.3 Difficulties can arise when ethical review is not possible at one site (for lack of local capacity or of willingness to review a study conducted in a foreign country) or if reviews conflict with each other. As a general rule, the review of the country where the research is conducted should take precedence. The ethical review of research conducted in humanitarian emergencies deserves special attention here. In such settings, there is an intrinsic clash between ethical priorities: the research needs to be done swiftly, and participants are particularly vulnerable. A recent review suggests two useful strategies in such settings44: (1) pre-approved research protocol templates which can be quickly customised for use in individual emergencies45 and (2) ‘real-time responsiveness’, an iterative strategy of constant dialogue between ethics reviewers and researchers while studies are being conducted.46

2.7 When working in a setting without ethical review boards, or with review boards with limited epidemiological capacity, consider what is in the study team’s remit to strengthen their epidemiological capacity

Epidemiological studies may take place in countries with insufficient capacity to assess the ethical aspects and/or scientific quality of the research. Adequate capacity to conduct and review biomedical research does not automatically translate into the same for epidemiological and multidisciplinary projects, and this should therefore be regarded as a specific need.
Taking advantage of this situation is one of the worst forms of ‘ethics dumping’.3 Instead, epidemiologists should consider what is in the study team’s remit to strengthen their epidemiological capacity as part of broader capacity strengthening efforts (as described in criterion 1.1).

2.8 Explicitly state any open data access in the protocol submitted for ethical review and in the informed consent documents

Funding bodies and publishers increasingly encourage public data sharing to maximise the return on investment in research, to increase transparency and accountability, to reduce the cost of duplicating data collection and to promote potential new data uses.47 Depending on the type of study and data collected, informed consent forms may include conditions of use and provisions for sharing with third parties. Any data sharing with third parties (whether fully open access or not) should be included in the protocol and informed consent documents. For collation of existing data to be used for secondary analyses, sharing with third parties should be agreed with data owners. The protocol should describe plans to publish data in online open access repositories (see criterion 6.6).

Standard 3. Data collection: use valid and reliable instruments and reproducible methods while ensuring culturally appropriate procedures

3.1. Use valid and reliable research instruments

Global health research relies on diverse types of data. Primary data are obtained by direct measurement using research instruments such as questionnaires, data extraction forms, interview guides, assessment by clinicians, laboratory and imaging techniques, and global positioning system (GPS) and other devices. Studies can also rely on secondary analysis of existing data, including health registries, routine operational information, weather and climate data, satellite information and census data. Research instruments should be valid and reliable.48 The development of research instruments requires skill.
The design of, for example, a questionnaire is an iterative process in which the following steps can be distinguished: (1) definition and elaboration of the construct; (2) choice of measurement method; (3) selecting and formulating items; (4) scoring issues; (5) pilot testing; (6) field testing.49 This process relies on scientific literature, theory, empirical evidence and statistical techniques. Before developing a new questionnaire, researchers should perform a review of existing instruments and their properties. If an instrument already exists, using it saves time and makes results comparable to other studies. The choice of research instruments remains a domain full of trade-offs, and reducing the risk of biases and errors requires considerable effort, which has to be delivered within time and budget constraints.50 Data collection modes have evolved over the past decades.50 In the domain of surveys, for example, electronic methods are increasingly used, as a replacement of or in combination with face-to-face and telephone interviews. The advantages of computer-assisted methods include flexibility, reduced chance of error and possibly also of missing data, user-friendliness and time saving. This evolution also poses practical challenges (data capture design, data conversion, availability of internet, cost, training of field workers) as well as theoretical ones (unknown errors and biases resulting from new data collection modes).50

3.2. Ensure that research instruments are locally adapted and culturally appropriate

Global health epidemiologists often study a range of different communities and countries. Researchers must be cognisant of local cultural sensitivities and should be careful not to violate customary practices with their data collection procedures.3 51 In practice, time-consuming, invasive or culturally insensitive data collection procedures can lead to non-response biases and measurement errors.
It is well known that questions about sensitive topics, such as sexual practices, deaths or religious ideas, can be difficult to handle for participants and data collectors.52 It is less obvious but equally important to consider that apparently harmless topics (eg, questions about food consumption) may also embarrass or upset informants.52 This further emphasises the importance of including (local) investigators with relevant skills who are experienced in dealing with such circumstances.

3.3. Provide concrete guidance for data collection in a document that is available to all data collection staff

Standardising data collection processes helps to ensure that instruments maintain their validity and reliability48 and contributes to methods reproducibility.53 In general, quantitative measurements are easier to standardise than qualitative judgements. Standard operating procedures (SOPs) and job aids can help ensure uniformity for various procedures (inclusion and examination of study participants, collection and storage of specimens for the laboratory, laboratory assays, data management and quality assurance).54 All guidance documents for data collection (field manual) should be developed with care so that they are legible, readable and comprehensible.54 Generic templates are available for several types of SOPs.54 All data collection guidance tools should be available whenever and wherever the people involved in data collection need them.

3.4. Select data collection staff according to technical as well as cultural criteria. Clarify the roles and responsibilities of each person involved and provide adequate training and support

In small studies, the lead researchers may be able to interview or examine all participants. But when there are many study participants or when there are sociocultural or linguistic barriers, field workers (intermediary research assistants) may be needed.
Depending on the scale of the study, a hierarchy of field staff including interviewers, supervisors and field data editors may have to be recruited. Many global health research projects are highly dependent on field workers. These field workers may be the only people who directly engage with the study participants, and hence need to be well trained and oriented to understand the study objectives, ethical issues and the instruments used. Their influence on informed consent and data collection processes should not be underestimated.52

3.5. Pilot test, and if possible, field test all research instruments prior to the start of effective data collection

Pilot testing and field testing are recommended, regardless of whether an existing or a new research instrument is chosen. Pilot testing is intended to test the comprehensibility, relevance, acceptability and feasibility of the questionnaire in a small number of respondents, after which adaptations will follow. A pilot on the target population is crucial, as only they can judge the comprehensibility and relevance of the questionnaire. In a pilot, after participants have answered all questions, they should be asked about their experience in as much detail as necessary to enable changes.49 When an instrument is considered to be satisfactory, it can be applied to a larger sample of the target population. Whereas pilot testing entails an intensive qualitative analysis of the formulation of questions and the layout of the questionnaire, field testing entails quantitative analyses. As such, all data management steps are also included in field testing. Possible analyses include: patterns of missing items (did respondents not understand the question? Do their answers not fit the response options?) and the distribution of item responses (if some categories are seldom used, they can be combined with others).49 In practice, despite their clear usefulness, pilot and field tests remain problematic in epidemiology.
Ideally, pilot and field testing should be integrated in grant applications and study timelines, but that is not always possible, as many research funders do not support these financially (in terms of budget lines) and logistically (in terms of the time investment). Unfortunately, researchers may even find themselves in a stalemate situation when preparing funding applications for large-scale studies, as funders (and particularly also external reviewers of these funders) may request to see pilot data before granting their funding.

3.6. Collect data in a respectful and safe manner, in an environment which safeguards the confidentiality of respondents

When data collection is prepared and field workers are selected and trained, it is important not to focus exclusively on the technical aspects of using the research instruments but also to reflect on how study participants and field workers can be protected from harm due to the study. Fieldwork is sometimes conducted in dangerous settings and associated with considerable risks.55 The gender of data collection teams is an important factor to consider. In many contexts, women can feel uncomfortable if they are interviewed by men or in the presence of their husbands and partners. In such settings, gender-segregated interviews are an important part of ensuring a respectful and safe environment for participants. One practical approach is to carry out women’s and men’s interviews simultaneously, which keeps the men occupied while the women participate in the study.56 Beyond gender, other sociodemographic characteristics (eg, socioeconomic, ethnic or religious backgrounds) may create cultural hierarchies which make it difficult for people to relate to each other.56 A good understanding of the local context is necessary to ensure that the data collection can be as culturally sensitive as possible.

3.7.
Put in place quality assurance and quality control mechanisms to ensure data accuracy, completeness and coherence

Data accuracy refers to the degree to which data correctly estimate or describe the quantities or characteristics they are designed to measure.57 In this respect, data fabrication is a common concern in global health epidemiology, and it appears to be widespread and very difficult to detect.52 The chief concern is that field workers do not visit the sampled locations and fabricate data. There are a number of quality control activities that can be put in place to ensure accurate data. The use of electronic data collection offers a number of opportunities to check that the sampled locations were visited, including geo-positioning, attachment of photographs and monitoring the start and end date of the interview. Spot-checks and re-collecting data in a random sample (eg, 10%) of sampled units (eg, households or facilities) are another commonly used approach to ensure that data were correctly collected. However, the reality is often more nuanced than total data fabrication, with field workers deviating from the verbatim use of the questionnaire. This can be done for very valid reasons, for example, when field workers prefer using local terms and language, or exercise their own judgement when asking sensitive questions. Efforts to foster a safe and open dialogue with field workers, combined with a very good understanding of the local context and a willingness to adapt research processes (as advocated by the slow research movement51), are key for quality assurance. Data completeness is usually described as the amount of available data in a database compared with the amount that was expected to be obtained. Prompt review of research instruments by a field supervisor is important to ensure that missing data can be re-collected in time.
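The spot-check strategy described above can be sketched as follows; the household identifiers, field-worker codes and variables are all hypothetical:

```python
# Hypothetical sketch of a spot-check: re-collected answers for a random
# sample of households are compared field by field with the original
# records, and a discrepancy rate per field worker is reported.
original = {                  # household_id -> (field_worker, answers)
    1: ("FW-A", {"hh_size": 5, "water_source": "well"}),
    2: ("FW-A", {"hh_size": 3, "water_source": "tap"}),
    3: ("FW-B", {"hh_size": 4, "water_source": "well"}),
}
recheck = {1: {"hh_size": 5, "water_source": "tap"},   # eg, 10% re-visited
           3: {"hh_size": 4, "water_source": "well"}}

def discrepancy_rates(original, recheck):
    """Fraction of mismatching fields per field worker."""
    rates = {}
    for hh_id, answers in recheck.items():
        worker, first_visit = original[hh_id]
        stats = rates.setdefault(worker, [0, 0])   # [mismatches, fields]
        for field, value in answers.items():
            stats[1] += 1
            if first_visit[field] != value:
                stats[0] += 1
    return {w: m / n for w, (m, n) in rates.items()}

print(discrepancy_rates(original, recheck))   # {'FW-A': 0.5, 'FW-B': 0.0}
```

A high rate for one worker does not prove fabrication, but it flags where a supervisor should look first.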
Distinguishing between different types of missing information on the research instruments is a good way to ensure data completeness during data collection (eg, (1) the question could not be asked; (2) the respondent did not reply; (3) the respondent replied ‘do not know’).58 59 Coherence refers to the degree to which data are logically connected and mutually consistent.57 During surveys, one way to ensure coherence is to include cross-checks within a number of questions which should be internally consistent. Electronic data collection offers the possibility of programmed consistency checks, which notify data collectors when they enter inconsistent values (and can even prevent such entries).

Standard 4. Data management: manage data with reproducible procedures and ensure compliance with relevant data protection rules

4.1 Put in place data management procedures before the effective start of data collection and provide concrete guidance in a document available to all data management staff

A data management plan is essential to ensure that data collection, storage and sharing are adequately planned for at the start of the research. Broadly, the suggested items to cover in a data management plan include:

- Data management overview: a description of the system(s) used, the data flow, the data management roles and responsibilities, the system for unique identification of individuals (or entities) and, if relevant, the hierarchy and links between datasets and a codebook (c.f. criterion 4.3).
- Creation of the database: description of the data entry application (which in the case of electronic data collection will coincide with the data collection application), quality assurance and quality control mechanisms (c.f. criterion 4.4), database lock and statistical file creation.
- Data safety and security: relevant national/supranational legal framework(s); methods for back-ups, storage and archiving; a data security protocol including access rights to ensure the anonymisation and privacy of data collected and processes for data sharing; procedures used to ensure that national and international frameworks of data protection are adhered to.

In addition, depending on the complexity of the study and the data management procedures, SOPs and job aids may be useful for data management staff.

4.2 Create and pretest a data entry application prior to the effective start of data collection

From the moment of effective data collection, it is important that the data management system is up and running adequately. To ensure this, it is important to test the system ahead of time. This testing may coincide with field testing (c.f. criterion 3.5) of data collection instruments.

4.3 Describe all variables in a codebook and consider preparing additional metadata documentation

Metadata are a set of data which describe the data collected through research. Metadata serve as a reference for the team members involved in the study and are essential to ensure the re-usability of data for future analyses. A codebook is the primary metadata document linking the questionnaire to the study database; it includes information on all the variables in the database, the question (or other source) they were obtained from, codes and valid ranges, and the format of notation, as well as variable definitions, especially for derived (calculated) variables. Another useful document is the annotated data collection form. It is best prepared before data entry and used during data entry. It is essentially a copy of the last or latest version of the data collection form with text boxes next to every entry indicating the variable name. It should ideally not replace a codebook, as it does not include the same level of detail, but it can be an additional useful aid.
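A minimal, hypothetical sketch of a machine-readable codebook and how it can drive automated checks (the variable names, question sources and valid ranges are invented for illustration):

```python
# Sketch: a codebook kept as a data structure can both document every
# variable and power automated validation. All entries are hypothetical.
CODEBOOK = {
    "age_years": {"source": "Q3", "range": (0, 110)},
    "sex":       {"source": "Q4", "codes": {"F", "M"}},
    "weight_kg": {"source": "Q7", "range": (1.0, 250.0)},
}

def validate(record: dict) -> list[str]:
    """Return a list of codebook violations for one data record."""
    problems = []
    for var, spec in CODEBOOK.items():
        value = record.get(var)
        if value is None:
            problems.append(f"{var}: missing (source {spec['source']})")
        elif "range" in spec and not spec["range"][0] <= value <= spec["range"][1]:
            problems.append(f"{var}: {value} outside {spec['range']}")
        elif "codes" in spec and value not in spec["codes"]:
            problems.append(f"{var}: invalid code {value!r}")
    return problems

print(validate({"age_years": 130, "sex": "F", "weight_kg": 60.0}))
```

Keeping the codebook in one authoritative, machine-readable form means the documentation and the validation rules cannot drift apart.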
There are numerous international efforts to harmonise metadata collected as part of research for multicentre studies.60–62

4.4 Put in place quality control mechanisms to ensure data accuracy, completeness and coherence

Data accuracy refers to the degree to which data correctly estimate or describe the quantities or characteristics they are designed to capture.57 The most common method to ensure accuracy with paper-based data collection is double data entry. Alternative methods include partial data checks, which can be implemented in a number of ways. One option is to select a random proportion of data points (eg, 10%) from the database and to check them visually against the completed questionnaires. A less time-consuming variant is to randomly choose a number of respondents (rather than a number of data points) and to check all data for those respondents against the questionnaire. For electronically collected data, accuracy can be ensured by programming the database with precoded answer options, logical ranges for continuous data and skip logic. Data completeness is usually described as the amount of available data in a database compared with the amount that was expected to be obtained. Methods to check completeness include tabulating the data in the database against the sampling list to ensure that all expected data are included. However, even when all sampled elements are included, certain variables may have missing entries. This can be checked by tabulating selected ‘critical’ variables (eg, those most important for analysis or most likely to be missing) to ensure that there are no systematic patterns of missingness. Ideally, this should be done at regular time points throughout the implementation of the study so that mid-course corrective measures can be put in place.
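The double data entry check mentioned above can be sketched as follows (the record identifiers and fields are hypothetical):

```python
# Sketch of double data entry verification: two independently keyed
# copies of the same paper forms are compared cell by cell, and every
# mismatch is flagged for resolution against the source document.
entry_1 = {101: {"age": 34, "parity": 2}, 102: {"age": 27, "parity": 1}}
entry_2 = {101: {"age": 34, "parity": 3}, 102: {"age": 27, "parity": 1}}

def compare_entries(first, second):
    """List (record_id, field, value_1, value_2) for every disagreement."""
    mismatches = []
    for record_id in sorted(set(first) | set(second)):
        a, b = first.get(record_id, {}), second.get(record_id, {})
        for field in sorted(set(a) | set(b)):
            if a.get(field) != b.get(field):
                mismatches.append((record_id, field,
                                   a.get(field), b.get(field)))
    return mismatches

print(compare_entries(entry_1, entry_2))   # [(101, 'parity', 2, 3)]
```

Each flagged cell is then resolved by returning to the paper questionnaire rather than by guessing which keying was correct.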
Coherence refers to the degree to which data are logically connected and mutually consistent.57 Coherence has four important subdimensions: (1) within a dataset, (2) across datasets, (3) over time and (4) across countries. Data coherence within a dataset can be ensured by cross-checking variables which ought to be perfectly correlated. One important element of coherence across datasets is the ability to merge datasets, for which a good system of assigning unique identifiers is crucial. Standardised procedures and good guidance for data collection (c.f. criterion 3.3) and data management (c.f. criterion 4.1) can help ensure coherence over time and across countries.

4.5 Annotate all data cleaning and processing steps and strive for reproducibility by means of stored programming code

Programming facilitates the documentation of study analyses and thus enables external parties to verify study results and claims and to reproduce these. Most statistical software packages offer the possibility of doing data management using dropdown menus. Although this may be useful as a first step to explore the data, programming should be preferred to ensure methods reproducibility and results reproducibility. Most statistical programmes also have functionalities to store programmed code and annotate the data cleaning and analysis in a structured format (ie, R scripts and markdowns, Stata do-files, SPSS syntax and SAS programs). Furthermore, when data are made available at different stages, programming makes it possible to progress on both data management and statistical analyses before the full database is ready. If analyses are not done by means of statistical software packages (eg, spreadsheets or qualitative analysis tools), it is important that they are nevertheless well documented and annotated to ensure results reproducibility.
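One way to operationalise the coherence-across-datasets check described above, where unique identifiers make merging possible, is sketched below; the identifiers and variables are hypothetical:

```python
# Sketch: before merging, individual records are linked to household
# records through a unique household identifier, and orphan identifiers
# on either side are reported. All identifiers are hypothetical.
households = {"HH001": {"district": "North"}, "HH002": {"district": "South"}}
individuals = [
    {"person_id": "P1", "hh_id": "HH001", "age": 34},
    {"person_id": "P2", "hh_id": "HH003", "age": 51},   # broken link
]

def link_check(households, individuals):
    """Return (people with no matching household, households never used)."""
    orphan_people = [p["person_id"] for p in individuals
                     if p["hh_id"] not in households]
    used = {p["hh_id"] for p in individuals}
    empty_households = sorted(set(households) - used)
    return orphan_people, empty_households

print(link_check(households, individuals))   # (['P2'], ['HH002'])
```

Running such a check routinely, before any merge, surfaces identifier typos and missing records while they can still be corrected in the field.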
4.6 For each data file, define levels of anonymisation and privacy protection as well as corresponding access rights in line with national and international frameworks

Data security measures should be made explicitly clear for each stage of the research process, in line with national and international frameworks—such as the General Data Protection Regulation in the EU. Personal data, and especially sensitive personal data, should be treated with extreme caution.63 Personal identifiers can be either direct or indirect.64 Although none of the indirect identifiers on its own would point to an individual, several indirect identifiers combined might do so. Appropriately anonymised data have (1) no direct identifiers and fewer than three indirect identifiers and (2) if dates are necessary for certain analyses, methods to preserve anonymity without compromising statistical analyses, such as adding or subtracting a small, randomly chosen number of days to all dates. It should be clear at the start of the research which research team members will have access to which data and how access will be managed (different team members might have different access rights). There are numerous ways to protect sensitive personal data. One method is to save the personal identification data in a dataset that is separate from the bulk of the study data and to allow only certain research team members to link the two datasets. This can be done by providing different passwords for different datasets or by encrypting electronic database files. In the event that data are collected in a paper format, storing the forms securely (in locked cabinets with password locks or key locks to which only specific research members have access) is the best way to keep the data secure.

4.7 At the beginning of the study, prepare an electronic secured study file to store all study documentation and outputs.
Regularly update this file and archive it at the end of the study

Maintaining a secure electronic study file helps to ensure that the most up-to-date versions of all study materials are stored in a single location. An electronic study file should include protocols, data analysis plans, data management plans, ethical review submissions and responses, informed consent forms, data collection tools, anonymised datasets and transcripts, metadata, data management programmes, analysis programmes, statistical outputs, reports and publications. To ensure secure storage, the study file should reside on two physically separate, regularly synchronised storage media, for example, a local laptop hard disk and a remote backup server. When setting up the storage system, it is important to think about risks to data integrity, external (eg, fire, flooding) and internal (eg, a disgruntled staff member, ransomware, virus attacks), and how to mitigate those. The choice of where to store and especially where to archive the data may be straightforward if researchers have an established data management facility in their institution. Cost-free remote data repositories may be a useful alternative when these are not available. There are two important considerations when choosing an online repository for data storage and archiving (as opposed to data sharing, which is discussed in criterion 6.6): (1) does it offer closed access and protection against unauthorised access and (2) is it hosted by a trusted institution with the vision and capacity to provide long-term secure storage (eg, at least 10 years)? Zenodo (zenodo.org) is a general purpose repository hosted by CERN which fits both criteria, while both can be problematic with public cloud storage services (such as Dropbox, Google Drive, etc).
4.8 Retain source data safely, in their original form, preserving data confidentiality for as long as has been described in the protocol

Source data refer to materials collected as part of the research at the primary source of data collection (study participants, household respondents, etc). Thus, source data include signed informed consent forms, filled-in data collection forms, audio files, videos and photos, and biological samples (data, images, photos of slides, but not the sample itself). The study protocol should specify how long source data will be stored and under which conditions (and security guidance).

Standard 5. Data analyses: analyse data according to the protocol and integrate statistical analyses with approaches from other disciplines in the study

5.1 Only work with personal identifiers that are necessary to answer the research questions

During the process of data analysis, the person analysing the data should work on an appropriately anonymised or pseudo-anonymised dataset. A respondent-specific identifier number should be used to identify individual respondents in the data, and the key between the identifiers and the personal confidential information of the respondents must be hosted by an independent person. As described in criterion 4.6, identifiers can be direct or indirect, and a combination of indirect identifiers may be sufficient to identify a person.64 Therefore, it is important to realise that there are limits to the extent to which this criterion can be met, especially if a number of indirect identifiers are relevant to the analyses (eg, nationality/ethnicity, sex and age). Even if clearly personal information (such as name, address and telephone number, ie, direct identifiers) is removed from a dataset, it is usually still possible to identify individuals through combinations of indirect identifiers (such as disease status, sex, age and ethnic background). Such indirect identifiers are often relevant for the analysis.
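A hedged sketch of the pseudo-anonymisation steps discussed in criteria 4.6 and 5.1: direct identifiers are split off into a separate key file, and all dates for a participant are shifted by a single random offset so that intervals between events are preserved (the names and fields are invented):

```python
# Hypothetical sketch of pseudo-anonymisation: split direct identifiers
# into a key file held separately, and jitter all of a participant's
# dates by one random per-participant offset so intervals survive.
import random
from datetime import date, timedelta

raw = {"name": "A. Example", "phone": "000-0000", "id": "P-017",
       "enrolled": date(2021, 3, 1), "visit": date(2021, 3, 15)}

def pseudonymise(record, rng):
    """Strip direct identifiers into a key file; shift all dates."""
    key_file = {record["id"]: {"name": record.pop("name"),
                               "phone": record.pop("phone")}}
    shift = timedelta(days=rng.randint(-7, 7))  # one offset per participant
    for field in ("enrolled", "visit"):
        record[field] += shift
    return record, key_file

rng = random.Random(42)
analysis_record, key_file = pseudonymise(dict(raw), rng)
# The interval between events is preserved regardless of the shift:
print(analysis_record["visit"] - analysis_record["enrolled"])
```

The key file linking `P-017` back to the person would be stored separately, accessible only to the designated independent custodian.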
It is therefore important to realise that in practice, a dataset from a global health research project is rarely anonymised. However, pseudo-anonymisation may well be achieved. 5.2 Conduct statistical analyses in accordance with the protocol and distinguish preplanned from exploratory analyses One of the cornerstones of research integrity in epidemiology is ensuring that analyses do not deviate from the plan. As discussed in criterion 2.2, it is important to think about analyses before conducting a study because of the dangers associated with performing multiple statistical tests. Most GEP guidelines recommend that any deviations from the statistical analysis plan are justified and documented. As discussed in criterion 5.4 below, such requirements may be difficult to fulfil in multidisciplinary studies where qualitative research informs the quantitative research (exploratory model) or vice versa (explanatory model).65 The prespecification of all analyses goes against the iterative nature of qualitative research. Certainly, analyses which were preplanned in the protocol and for which the study is powered should be distinguished from other, exploratory analyses. Furthermore, all analyses should clearly relate to the research questions the study set out to answer. 5.3 Fully annotate all analysis steps and strive for reproducibility by providing programming code All analysis steps need to be replicable to ensure results reproducibility and inferential reproducibility. As discussed in criterion 4.5, this can be facilitated by means of stored and annotated programmes or plain language instructions in a spreadsheet or word processing document. When data are made available at different stages, programming makes it possible to progress on both data management and statistical analyses before the full database is ready. Ideally, programming code should be organised so that results can be reproduced from the ‘clean raw database’ at the click of a button. 
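The ‘click of a button’ ideal amounts to a single driver that chains the data management and analysis steps from the clean raw database to the final outputs. A toy sketch (the step names and toy data are invented for illustration):

```python
# Hypothetical three-step pipeline: each step consumes the previous step's
# output, so one call reproduces every result from the clean raw database.
def manage(raw):
    """Data management: drop missing values."""
    return [r for r in raw if r is not None]

def derive(managed):
    """Create derived variables, eg, a 'high' flag."""
    return [{"value": r, "high": r > 10} for r in managed]

def analyse(derived):
    """Statistical analysis: proportion of 'high' observations."""
    return sum(d["high"] for d in derived) / len(derived)

def run_all(raw):
    """Reproduce all results from the clean raw database in one call."""
    return analyse(derive(manage(raw)))
```

Keeping each step in annotated, version-controlled code of this shape lets an independent analyst rerun the whole chain and compare outputs, supporting both results reproducibility and inferential reproducibility.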
5.4 In multidisciplinary studies, integrate statistical analyses with analyses from other study disciplines in an iterative process to coherently address the research objectives As discussed in criterion 2.4, global health promotes multidisciplinary collaboration. In order to maximise the success of a multidisciplinary approach, study plans need to include regular moments of reflection with peers across all involved disciplines, throughout all study phases, but especially in the design, analysis and interpretation of findings.38 At the analysis stage, one of the defining features of multidisciplinary research is the iterative cycles through which information from the various disciplines is integrated in order to coherently address the research questions. Multidisciplinary research involving disciplines with both quantitative and qualitative research traditions is especially challenging, as it requires researchers to overcome, or compromise on, at times deep epistemological divergences. In our experience, the following iterative approach can help to ensure that the quantitative data are coherently integrated with the qualitative disciplines: (1) start qualitative data analysis early during data collection to ensure that all emerging themes are being explored; (2) conduct preliminary descriptive analyses of both quantitative and qualitative data as soon as data are available for analysis; (3) convene with peers from other research disciplines to discuss further statistical analysis of quantitative data (descriptive and inferential) and synthesis of the qualitative data (key themes); (4) combine analyses from the various disciplines to answer the research questions comprehensively; (5) define further higher-level analyses (either qualitative or quantitative) where gaps persist; (6) take note of elements which still need to be explored with new data and new research. 
5.5 Put in place quality control mechanisms to ensure that data have been correctly analysed The most robust method to prevent erroneous analyses from being disseminated is having the results (or a purposeful selection thereof) reproduced by a qualified person who was not previously involved in the analyses. Inconsistencies in the results should be discussed and a consensus reached between the two analysts. However, this type of approach is often not possible in research settings as it is costly and time-consuming. Furthermore, there may not be another qualified person in the team capable of performing an independent analysis. In such cases, one option is to ensure that the research team meets frequently, at different phases of the results generation process, to review results and assess their validity, in order to spot any errors at an early stage. Standard 6. Dissemination and communication: report and disseminate results, preferably in the public domain, with means of communication which appropriately target key stakeholders 6.1 Develop user-specific dissemination and communication plans in consultation with key stakeholders (representatives of the affected populations and end-users) Dissemination usually refers to making results known to research peers, policy-makers and other professional organisations to enable them to use the results in their own work.66 Communication refers to the promotion of results to communities and societies as a whole, possibly engaging in a two-way exchange.66 Publication of papers in peer-reviewed journals is often epidemiologists’ preferred mode of dissemination. 
Yet, it primarily targets the scientific community and international agencies, while in order to have an impact (eg, change policies, practices or behaviour), global health research findings need to be disseminated and communicated more broadly, in ways that will enable end-users to understand and find them.2 Research findings must be translated into different ‘formats and languages’ appropriate to the respective target audience, and should be delivered through effective communication channels.2 Dissemination materials may include policy briefs, white papers and summaries for pamphlets and websites. Communication materials can take the form of news articles and social media posts; community meetings; newspaper articles; videos or short films; documentaries; podcasts; infographics; etc. Art-based approaches, such as theatre, music, visual arts, storytelling and film,67 are especially useful to reach and engage large numbers of people. Study findings need to be communicated neutrally and impartially, and where necessary conflicts of interest need to be clarified/declared. 6.2 Report data in a non-stigmatising, non-discriminatory, culturally sensitive and non-identifying manner The information included in dissemination and communication materials must not stigmatise, discriminate against or identify study participants. Country-specific regulations must be followed during the dissemination of epidemiological study results. However, less stringent data protection standards in low-income countries can never be an excuse for researchers from high-income countries to condone potential privacy breaches.3 Special attention must be paid to ensure the protection of research participants who are at risk of stigmatisation, discrimination or incrimination.3 More specifically, epidemiologists should bear in mind that presenting data from small groups in tables or maps may make individuals easily identifiable and thus break confidentiality. 
If any participants are quoted by name or shown in pictures, due consent for publicising their information must be obtained, paying particular attention to the protection of minors, the elderly and other vulnerable populations. 6.3 Conform to reporting guidelines for the given study design and methods in academic publications Reporting guidelines are structured tools to guide researchers in the preparation of their scientific manuscripts. A reporting guideline provides a minimum list of the information needed to ensure a study’s methods and/or results can be understood by a reader, reproduced by a researcher, used by a practitioner and included in a systematic review.68 The Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network is an online platform that promotes and disseminates reporting guidelines for health research, which can be consulted to identify relevant guidelines.68 Guidelines relevant for epidemiological study reporting include the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE),69 70 RECORD,71 Consolidated Standards of Reporting Trials (CONSORT),72 73 STARD74 and Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)75 guidelines, and the Standards for Reporting Qualitative Research76 for qualitative research. 6.4 Put in place quality assurance and quality control mechanisms to ensure complete, accurate, accessible and interpretable data reporting Complete and accurate reporting in scientific publications is key to research integrity. Previous items have described approaches to guaranteeing prepublication of the protocol (criterion 2.5) and the use of reporting guidelines (criterion 6.3). Accessibility of results, on the other hand, is the primary driver behind open access publication and is discussed in the following sections (criteria 6.5 and 6.6). 
Interpretability reflects the ease with which users may understand and properly use data products.57 As discussed in criterion 6.1, this is very important in global health, as research findings need to be adequately communicated to end-users in order to have an effect on behaviour, decision-making or policies—and an ultimate impact on health. Participatory approaches which engage users in compiling dissemination materials are especially useful to ensure that messages speak to the needs and concerns of users, are delivered through the most effective channels, and are understood as intended. 6.5 Consider indexed open access journals for scientific publications Open access to scientific publications is one of the cornerstones of efforts to foster research integrity and transparency. There are two main routes to open access77: (1) self-archiving (‘green’ open access), where researchers archive the published article or the final peer-reviewed manuscript in an online repository before, at the same time as, or after publication; (2) open access publishing (‘gold’ open access), where an article is immediately published in open access mode. With open access publishing, publication costs (referred to as article processing charges (APCs)) are borne by the authors instead of readers. The charges of journals with high impact factors can be expensive and need to be considered when budgeting for the research. Many journals do offer discounted or waived rates for researchers from low-income and middle-income countries (and further discounts for students). Although the international status and impact factor of a journal is an important aspect in establishing the credibility of the research, sometimes local or national-level journals can better reach targeted audiences and demonstrate a commitment to address local research questions and policy issues.78 These may or may not be open access. 
Where possible, it is good to favour indexed journals, which can be found through bibliographic databases (eg, PubMed). A journal’s membership of the COPE network (https://publicationethics.org/) also indicates a commitment to ethical publishing practices. In this regard, it is important to be aware of predatory publishing, an exploitative academic publishing business model that involves charging APCs to authors without providing editorial services, peer review or indexation. Young and inexperienced researchers from low-income and middle-income countries are the most likely to publish in these journals.79 The line between ‘serious and reputable’ journals and predatory journals is blurred and, unfortunately, a number of national journals in low-income and middle-income countries are deemed predatory.68 6.6 On study completion, consider publication of the archive in an openly accessible online repository. Consult key stakeholders and research partners to identify strategies within the study team’s remit to encourage re-analyses by local researchers as much as possible Open access data sharing is increasingly being encouraged and is at times a condition for funding and publication. It is considered necessary to maximise the return on investment in research, with benefits ranging from the generation of novel findings as researchers re-examine the data applying different hypotheses, to the possibility of combining datasets from multiple studies and the development of new research collaborations.77 80 There are many online repositories which support open access data sharing. The considerations when choosing a repository for data sharing are slightly different from those discussed for data storage and archiving (criterion 4.7), as the main aim here is to maximise the ease with which peers will be able to find and access the data. 
The FAIR guidelines aim to improve the findability, accessibility, interoperability and reuse of digital data by both humans and machines.81 On one hand, researchers may want to privilege repositories which comply with these guidelines. On the other hand, there are difficulties in interpreting and putting these principles into practice, and many repositories are still not able to comply, especially those in the social sciences.82 Therefore, domain-specific open access repositories may be the most effective route to implement open access data sharing for global health epidemiologists, regardless of their compliance with FAIR. The Registry of Research Data Repositories (re3data.org) offers an overview of existing international repositories for research data. However, epidemiologists should also be aware of a less noble side to data sharing in global health. It can end up being far more advantageous for scientists in high-income countries, with higher analytical capacities, than for those in the low-income countries where the data have been collected. While scientists in high-income countries may be highly trained to perform analyses, they have not shared the legwork of collecting the data (including intellectual design and practical troubleshooting) with scientists in the low-income countries where the data were collected.83 In order to ensure that data sharing is mutually advantageous to all parties, the principle of ‘as open as possible, as closed as necessary’77 should therefore be followed. Embargoes are a useful short-term strategy to afford more time to local researchers, but fair data sharing should be considered within the frame of comprehensive long-term approaches to knowledge sharing—that is, epidemiological capacity building of researchers and more general investments in research infrastructure in low-income countries. As discussed in criterion 1.1, the extent of this capacity strengthening should be commensurate with the scope of the research. 
Conclusion None of what is described here will be new to experienced global health epidemiologists and researchers. Yet we know from first-hand experience that it is not easy to navigate the competing demands on a researcher’s loyalty in the complex multistakeholder environment in which we operate. With the benefit of hindsight, there are certainly many things we would now do differently. By giving a name and a space to recurring challenges, by stimulating reflection on routine practice and common assumptions, and by offering arguments and background references, we intend to support those who are trying to stand up against questionable research practices and research unfairness. The notes of caution and invitations to reflect jointly on research integrity and research fairness issues can be valuable for teaching purposes for young epidemiologists and researchers embarking on the field of global health epidemiology. Exposure to these notions early in their educational and professional development can ensure that the new generation of global health epidemiologists is more aware of the intricacies and challenges of our field, so that they do not unknowingly repeat known mistakes and reinforce unfair patterns of research behaviour. We also hope that more experienced researchers will be open to reflecting on some deeply engrained practices and assumptions in global health epidemiology. Ultimately, we are aware that dissemination of these guidelines to a broad audience—including commissioners, funders, reviewers and publishers of research—is key to having a tangible impact. Glossary Affected populations: individuals and communities that are affected by the data collection process. This may be the people on whom data collection was actually done, but also their families and the wider community which may be directly or indirectly affected by it. 
End-users: individuals, communities or organisations external to those who conducted the research, who will directly use or directly benefit from the output, outcome or results of the research. Examples of end-users include researchers, policy-makers from governmental and non-governmental organisations, service providers, communities and community organisations. Job aids: instructions, lists or quick reference materials derived from the main SOP. Job aids can be used when the full procedure is not needed at the time the task is performed.54 Multidisciplinary research: research which combines and, in some cases, integrates concepts, methods and theories drawn from two or more disciplines. Others may refer to this as ‘mixed methods’,37 ‘cross-disciplinary’8 or ‘multiple discipline’ research.35 Quality attributes: the formulation of quality assurance and quality control activities revolves around goals for quality attributes. Quality attributes in epidemiology include data quality dimensions such as relevance, accuracy, credibility, timeliness, completeness, accessibility, interpretability and coherence.57 These can either be attributes of the system that produced the data (ie, the process) or of the data itself (data output/outcome).82 Quality assurance: a set of activities to ensure quality in the processes by which products are developed. Quality assurance aims to prevent defects, with a focus on the process used to make the product. It is a proactive and ongoing quality process. Quality assurance includes quality control activities. Quality control: a set of activities for ensuring quality in products, focused on identifying defects in the actual products produced. Quality control aims to identify (and correct) defects in the finished product and is therefore a reactive process. Quality control activities are part of broader quality assurance. 
Reliability: the degree to which a measurement is free from error or, more extensively, the extent to which scores for patients who have not changed are the same for repeated measurements under several conditions48: internal consistency: using different sets of items from the same multi-item measurement instrument; over time: test–retest; inter-rater: by different persons on the same occasion; intra-rater: by the same raters/responders on different occasions. Parameters/methods to measure reliability include: the SE of measurement, intra-class correlation coefficient, coefficient of variation, Cohen’s kappa, Cronbach’s alpha and Bland-Altman plots.48 Research instrument: a set of questions or items used to collect information about research participants. Examples of research instruments: questionnaires for primary data collection with respondents, data extraction forms for collection of existing data records, case report forms to collect clinical data, interview guides for qualitative data collection, laboratory and imaging techniques, global positioning system and other devices. Synonym: research tool. Reproducibility53: an overall term which covers: methods reproducibility: provision of enough detail about the procedures of a study so that these study procedures can be repeated exactly; results reproducibility: the ability of an independent study with closely matched procedures to give the same results as the original study; inferential reproducibility: the extent to which an independent replication of a study or a reanalysis of a study leads to qualitatively similar conclusions as the original study. Standard operating procedures (SOPs): written step-by-step instructions on how to carry out procedures correctly. SOPs are meant to ensure consistency, accuracy and quality of data. They help ensure compliance with the study protocol, regulations and international standards. 
SOPs can also be used as training tools.54 Validity: the degree to which an instrument truly measures what it purports to measure. Parameters/methods to measure validity include: specificity and sensitivity, receiver operating characteristic curves, weighted kappa, Spearman’s or Pearson’s correlation coefficients, Bland-Altman limits of agreement and factor analysis. Three different types of validity can be distinguished48: Content validity: does the content of the instrument correspond with what one intends to measure, with regard to relevance and comprehensiveness? Criterion validity: in situations where there is a gold standard for the measurement, how well do the scores of the measurement instrument agree with the scores on the gold standard? Construct validity: when there is no gold standard, does the instrument provide expected scores, based on knowledge of what it is trying to measure? References Footnotes Handling editor Seye Abimbola Twitter @Ru2ja Contributors JR drafted the section on protocol development, KV on data collection, AL on data management, and RP, WM and SJ on dissemination and communication. SA compiled all contributions and finalised the document. SFR reviewed and complemented the first draft. All authors reviewed and approved the final version of this manuscript. Competing interests None declared. Patient consent for publication Not required. Provenance and peer review Not commissioned; externally peer reviewed. Data availability statement No data are available.
https://gh.bmj.com/content/5/10/e003237
A suture securement apparatus for selectively securing one or more ends of a suture or cord while allowing intermittent adjustment of tension, or full release of the suture or cord, after a prolonged period of time. The suture securement apparatus can be particularly adapted for use with a purse-string suture to close a percutaneous catheter insertion site without causing puckering or distortion of the skin at the purse-string suture site. The suture securement apparatus allows for subsequent modification of the amount of tensioning, or full release, of the sutures at the catheter insertion site. An adhesive suture securement device provides a cradle configured to secure a hub of a catheter and/or to secure sutures to be utilized with a suture securement apparatus. A surgical securement and marking system utilizes one or more surgical securement apparatus that are color-coded to convey information about the associated intracorporeal structures. An extender tube connected to the surgical securement apparatus enables securement of intracorporeal structures within the surgical site from outside the surgical site.
Patient Web Portal and Mobile Application The majority of the population today prefers to access the Internet and other services through mobile devices, and healthcare services are not exempt from this trend. This is one solid reason why hospitals should have a patient web portal or a mobile application. Of course, mobility has myriad advantages in the healthcare domain.
http://www.impulsetech.co.in/patient-web-portal-and-mobile-application/
Submitted by Fiona Beal I had a phone call today from a friend in another city telling me that her school is revising its entire IT strategy. They are in the process of reviewing current and past trends in educational IT practice and she wondered if I could forward any documentation or links that speak to the conversation. My interest was immediately sparked and I thought this would make an excellent blog post topic. So, a search through all my bookmarked sites on this topic leads to the following thoughts: |photo: Judi Francisoc’s flipped classroom experiment| 1. The Horizon Report My first thought regarding documentation was the Horizon Report, the latest one for K-12 education being the 2013 edition downloadable at http://www.nmc.org/publications/2013-horizon-report-k12. The Horizon Project is a comprehensive research venture established in 2002 that identifies and describes emerging technologies likely to have a large impact over the coming five years in education around the globe. Their 2013 table of contents contains: - Time-to-Adoption Horizon: One Year or Less (Cloud Computing and Mobile Learning) - Time-to-Adoption Horizon: Two to Three Years (Learning Analytics, Open Content) - Time-to-Adoption Horizon: Four to Five Years (3D Printing, Virtual and Remote Laboratories) 2. Tablets, mobile learning, BYOD, and cloud mobility This trend is really gaining popularity worldwide as education policy makers embrace the power, availability, and widespread use of the smartphone, the tablet, BYOD and the importance of Internet connectivity in every classroom. We can have information at our fingertips, which automatically leads to a complete change in teaching methodology. - This incorporates digital wearable technology like Google Glass and smart watches - Augmented reality is making its way into education with a large number of popular apps such as Aurasma being used effectively in classrooms around the world. 
- 3D printing is a technology that allows users to turn any digital file into a three-dimensional physical product. Several South African schools have invested in 3D printers for educational use. - Google Apps and the paperless classroom. It’s hard to recall what life was like before Google! The paperless digital environment is becoming a reality in many parts of the world and there’s no doubt that Google Apps (and the ever-increasing number of tools that link with Google Apps) is making this happen. Google Drive (formerly known as Google Docs) allows students to work totally in the cloud. This means that students can access their work and files anywhere. They can start an assignment at school, finish it at home, and share it with the teacher virtually to have it assessed. The teacher can add notes highlighting good points and errors, and comment verbally or in writing on what is written within the document. 3. Student-created content is on the rise With information so readily available via the Internet, student-created content is coming to the fore. Think about what students learn and experience when they create their own digital content. They access and curate materials and put together a layout. They have to research the subject they are creating the content about and learn the application they’re using to create it. Their work can be shared with a worldwide audience, which leads to a sense of pride and achievement. - The use of digital tools is becoming commonplace in education. Rich media content that is widely available for educational purposes is awesome, e.g. Khan Academy. Jane Hart publishes a great yearly updated list of the top 100 tools in education. - New online tools for creating content are invented every day! Different tools are used for different needs and the concept of app smashing for creating multimedia projects is being applied. - The Maker Movement is a technological and creative learning revolution underway around the globe. 
Education is recognising the amazing consequences of encouraging students to create – from start to finish. 4. Teaching delivery methods are changing There’s no doubt that the use of technology plus all these changes in education will allow teachers to create powerful learning experiences based on what is commonly referred to as 21st century skills. These learning experiences could include: - Learning simulations. The use of simulated activities in education is widely becoming recognised as an important tool in schools. They simulate an activity that is “real”, and so it can be said that they are “virtually real”. They simulate the activity so well that there is little difference between the simulated environment and the real one, and the same kind of learning experience can take place. - Project-based learning. Project-based learning is a dynamic approach to teaching in which students explore real-world problems and challenges. With this type of active and engaged learning, students are inspired to obtain a deeper knowledge of the subjects they’re studying. - Classroom Learning Management Systems and Social Learning Networks such as Google Classroom, Edmodo, Schoology, Obami etc. keep a classroom motivated and organised. These are community-building platforms for a class where teachers and students get to share learning resources and interact with each other. - Use of video in education. YouTube is way, way more engaging than reading and writing. Teachers who use instructional video report that their students retain more information, understand concepts more rapidly and are more enthusiastic about what they are learning. - Virtual learning encompasses learning that takes place outside of the school…everywhere really…and brings it into the school. The benefits of virtual field trips are well known: they’re inexpensive—often free—and are less time-consuming than a real trip. - Gamification of education. 
‘This is a little like online math and science games, but bigger, longer, deeper, and more fun, with a focus on critical thinking, problem solving, risk-taking, attention to detail and creativity’ (Jacqui Murray). - Crowdsourcing. So much information is available. ‘Crowdsourcing is the concept of gathering input from the “crowd” – in your case, that means students, a class, the school, or whatever group you are focusing on. It encourages everyone’s participation in learning, teaching, and events’ (Jacqui Murray). - Social media. The important use of social media in the classroom is being recognised around the world – blogging, Twitter, back-channeling, online communities etc. Knowledge will always matter but connectivity is becoming an important factor in enhancing knowledge. - 21st century skills can be actively pursued in the classroom. Studies have shown a positive impact on learning when students are required to engage in inquiry, collaborate, analyse content critically, construct knowledge, create, and effectively communicate their learning. - Genius hour is based on Google’s 80/20 plan. Genius hour is a movement that allows students to explore their own passions and encourages creativity in the classroom. It provides students with a choice in what they learn during a set period of time (e.g. 20% of the day) during school. - Flipped classroom. Flipped instruction frees up some time to enable active learning to take place in the classroom. Sal Khan explains this concept via a video. The network of Microsoft Innovative Schools includes schools and teachers exhibiting many of these attributes. 5. Global collaboration is on the increase We are living in an increasingly collaborative, team-based world. Global collaboration projects bring students together from different countries to work on a joint project. They can be introduced at any age or grade-level and infused into any subject area or curriculum. 
Skype and Google Hangouts make this collaboration possible at face-to-face level. 6. Learning space design Learning spaces are being redesigned. To accommodate the changes taking place in education, many schools and teachers are redesigning their classrooms to accommodate deeper learning. 7. Programming and coding (see http://askatechteacher.com/2014/01/30/7-education-trends-you-dont-want-to-miss/) Maths and science are always hot trends, but the 2013-14 school year seems to also be about coding, programming, and music in some countries. In the UK coding is being brought in as early as Grade 1. 8. Personal Learning Networks and online communities Teachers can benefit immensely from connecting with others and developing their own personal learning networks, as this video ‘Use Digital Tools to Create and Grow your Personal Learning’ explains. 9. TED Talks about education TED Talks bring us the latest innovations and ideas via their popular video series. This video shows some leaders in technology explaining trends. The world is changing before our eyes. Education is changing. What will we do to embrace these changes in our schools and classrooms? https://www.youtube.com/watch?v=9qmwdbhsbVs&feature=youtu.be Further reading - 7 Things Schools Of The Future Will Do Well - 4 Top Trends In Education For 2013-2014 - 8 Great Reasons to Flip Your Classroom (and 4 of the Wrong Reasons), from Bergmann and Sams - Flipping the Classroom Facilitates Active Learning Methods – Experiential, Project Based, Problem Based, Inquiry Based, Constructivism, Etc. - 7 Education trends you don’t want to miss.
https://www.schoolnet.org.za/blog/trends-in-education-thoughts-on-what/
A patient who is waiting to be seen in a clinic is feeling stressed. In relation to physiological stress, explain why their heart rate is increasing. PLEASE TYPE YOUR ANSWER BELOW: Any disruption or disturbance of a person’s mental and physical wellbeing is referred to as stress. Stress can be considered a stimulus to which each person reacts in a different way (Niven, N. 1994). In stressful circumstances, the body responds with a build-up in the production of hormones such as adrenaline and cortisol, which alters heart rate and metabolism to improve performance (Peters, M. 2006). The stress reaction is triggered when a stressor activates the hypothalamus. In this essay the stressor identified is the patient waiting to be seen in a clinic and feeling stressed. When patients perceive that they are in a frightening or uneasy situation and that they do not have the capacity to deal with it successfully, neurons carry messages to the cerebral cortex, the area of the brain where thinking takes place, to assess the threat. The messages are sent through the amygdala, which identifies the probable outcome of the threat (Lightman, S. 2008). The amygdala is part of the limbic system, often called the key emotional centre of the brain (Tortora and Derrickson 2009), and it communicates with the nuclei of the hypothalamus, the critical part of the brain that coordinates activities in response to stress. The amygdala triggers the hypothalamus to send messages through sympathetic centres in the spinal cord, which then reach the middle of the adrenal gland, the adrenal medulla, prompting release of the hormones adrenaline (epinephrine) and noradrenaline (norepinephrine) (Lightman, S. 2008). Adrenaline and noradrenaline are chemicals released into the bloodstream by nerve endings of the sympathetic nervous system (part of the autonomic nervous system) in response to physical or mental stress; they are very significant...
https://www.studymode.com/essays/Physiological-Stress-60968044.html
Liz Freeman, Naples Daily News
Florida hospitals are facing skyrocketing costs for temporary contract nurses as the COVID-19 pandemic burns out longtime staff members and workforce shortages continue to worsen. As staffing agencies for travel nurses double and triple their fees to hospitals, the Florida Hospital Association is tracking complaints of price gouging in other states. California’s hospital association last month asked the state Department of Justice to conduct a probe on behalf of its 400 hospitals. “We need your immediate support to ensure that high quality, affordable care remains available for all who will need it in the coming weeks and months,” the association’s Sept. 15 letter states. Florida’s hospital association declined to say whether price gouging is occurring or if a statewide investigation is warranted, but “we are closely watching what is going on in California and other states,” said Mary Mayhew, the group’s president and chief executive officer. Hospitals in Florida have responded to staffing shortages differently, but nearly every hospital is using travel nurses to combat the shortage and handle surging patient volumes due to the pandemic, Mayhew said. The cost hospitals are paying for travel nurses is a huge concern. “Across the state, we are hearing reports of prices two to three times earlier levels,” Mayhew said in an email. The association is reviewing the best approach to prevent price gouging by nurse staffing agencies while also ensuring hospitals can meet patient needs, Mayhew said. “Florida has very strong laws and regulations against price gouging during hurricanes, and for good reason. It’s simply wrong to take advantage of a crisis,” she said. Outcry over what staffing agencies are charging hospitals for travel nurses has been ongoing for months.
In February, the American Hospital Association called on the Federal Trade Commission to investigate, saying “outrageous rate hikes appear to be naked attempts to exploit the pandemic,” according to a Feb. 4 letter to the federal agency. The national hospital association, which represents nearly 5,000 hospitals, has not received a response, a spokesman said. Aya Healthcare Inc., which bills itself as the nation’s largest travel nurse agency, allegedly charged $160 or more an hour for temporary nurses, according to a lawsuit filed in March by Steward Health. The Texas-based hospital system operates 34 hospitals in the U.S. including eight in Florida. Typically the rate was $75 an hour before the pandemic, according to the March 8 complaint in Massachusetts Superior Court. A month later, Aya filed a counterclaim to recover an unpaid contract balance of $40 million from Steward, saying the hospital had agreed to all staffing rates across the country when it signed a crisis staffing agreement March 23, 2020. In California, hospitals have little choice but to pay the current rates and fear a “public airing of concerns” could lead to unwillingness on the part of some agencies to work with them, according to the California Hospital Association. Hospitals in poorer communities are the least able to pay high staffing agency fees, it said. It’s not clear if hospitals in Florida serving poorer communities are similarly hamstrung, but at least one rural hospital, 25-bed Hendry Regional Medical Center in Clewiston, says it has had to stretch its budget to handle the cost of travel nurses. “It is very concerning we do not have any alternatives if we are to continue serving the community,” said R.D. Williams, the hospital’s chief executive officer. What can travel nurses earn?
Florida recently overtook California as the state with the highest number of travel nurses applying for positions, followed by Arizona, Minnesota and Georgia, according to Vivian Health, a hiring marketplace firm that connects healthcare professionals to jobs but is not a staffing agency. Florida has always been a destination for travel nurses, especially in the winter months, said Lynne Gross, president of RNnetwork, a Boca Raton-based staffing agency. “We typically see an increase in demand from fall to spring, although over the last few years the summer has been popular as well,” Gross said in an email. The pandemic has led to a 58% increase in demand, and hospitals in the state “had to double their rates to attract providers,” she said. Nurses have left their permanent jobs at hospitals due to stress and to accept travel positions where pay rates vary greatly depending on the setting and the specific contract, Gross said. She declined to offer pay ranges. “They also want to be adequately paid for their services, so many are finding new jobs both in permanent and travel positions,” she said. Gross did not respond directly when asked about the complaints of price gouging by the national hospital association or how the Florida association is watching rates. “Demand for nurses in all settings, not just for travel, is very high, causing rates to rise across the board,” she said. September’s average weekly pay for a travel nurse in Florida was $3,056 — 4% above the nationwide average of $2,935, according to Vivian Health. That’s a big jump from September 2020, when travel nurses in Florida were paid $1,744 a week. “The pay has increased dramatically over the past six months and is likely to be a factor that is driving the interest from nurses to travel (here) to take assignments,” according to Vivian Health. “Florida is currently the 13th highest paying state for travel nurses. 
Job volume is at an all-time high.” Nurses who are willing to leave their homes and full-time jobs to take temporary gigs elsewhere can earn around $50 an hour, according to ZipRecruiter, an employment website. The average annual salary nationally for a travel nurse is $108,070, or $9,000 a month, which takes into account a total package that includes non-taxable stipends for housing, meals, mileage and sign-on bonuses, according to TravelNursing.org. The group used figures from Indeed.com, another employment website. The median annual wage for registered nurses was $75,330 in May 2020, according to the latest data from the U.S. Bureau of Labor Statistics. Because the stipends are classified as reimbursement and not income, travel nurses bring home a higher total pay compared to staff nurses, according to TravelNursing.com. Job postings with Fastaff Travel Nursing show night shift nurse openings at Florida hospitals for $5,100 a week. Some positions are vacant, while other night shift positions for $6,100 a week have been filled. Travel nurses fill big gaps At Lee Health, the largest publicly operated hospital system in Southwest Florida, “The rate for travelers is approximately double what it was two years ago,” spokeswoman Mary Briggs said. Lee Health has 386 openings for full-time bedside registered nurses. The hospital system with four acute-care campuses currently has 2,318 full-time registered nurses out of a total workforce of 14,000. She did not say what percentage of the nurse vacancies are being filled by travel nurses. “Due to the nationwide nursing shortage, it is necessary to use travel nurses to supplement our full-time staff to ensure we can effectively care for all our patients during the pandemic and other periods of increased patient volume,” Briggs said. 
The state’s surge of COVID-19 cases this past summer from the highly contagious delta variant has not been a reason given by any travel nurses who declined a contract with Lee Health, nor has the workload been a reason, she said. “Travelers are willing to go to locations or states with the highest rates,” she said. “Travelers are currently here at Lee Health and if needed for season, we will work to extend their contracts.” The NCH Healthcare System in Southwest Florida is relying on travel nurses to help fill about 45% of its 190 openings for registered nurses, Renee Thigpen, chief human resource officer, said in an email. On Jan. 1, 2020, before the pandemic started, NCH had 35 travel nurses working in its hospitals. That rose to 145 travel nurses by the end of last month. The hospital system, the largest in Collier County, has roughly 1,300 registered nurses among a workforce of 5,000 employees. There’s no question that hospitals across the country have paid much more for qualified travel nurses as the pandemic heightened competition, Thigpen said. “Due to demand, we are paying well over twice what we would ordinarily pay for a travel RN as compared to what we used to pay pre-COVID,” Thigpen said. Hendry Regional, which has 38 registered nurses out of a workforce of 250, had tried to steer clear of hiring travel nurses because of the expense, but that has not been possible during the pandemic. “We are currently filling about 10 spots with travel contractors now,” Lisa Miller, director of human resources, said in an email. “We have hired 28 travelers in the last 18 months (when) usually we would hire less than 10 in an 18-month period.” The hospital has been able to stretch its budget to handle the expense with the help of some of its federal stimulus money, said Williams, the hospital’s chief executive officer.
“The cost of travel nurses has increased significantly since March of 2020, more than double what we were paying pre-pandemic,” said Williams, adding that it is difficult to attract and retain suitably skilled nurses to the hospital. “Many nurses are accepting travel assignments for significantly more than we can pay them here,” he said. “That and larger facilities in metropolitan areas have implemented sign-on bonuses and rates of pay which we are unable to match.” At NCH, sign-on bonuses for nurses reach $20,000, with additional incentive pay and free health insurance. A nursing shortage with no end in sight The national shortage of nurses that predated COVID-19 has only worsened as nurses leave the profession due to stress, burnout and rising workloads. A survey of registered nurses showed 36% were thinking about leaving bedside care or already had, according to the findings from 1,000 nurses who were queried in March by the American Association of International Healthcare Recruitment. And 60% said nurse-to-patient ratios rose to unsafe levels in the last year. Roughly 25% of registered nurses in Florida left their jobs last year, citing burnout as a top cause, according to a study by the state hospital association last spring when the vacancy rate stood at 11%. “A shocking 1 in 3 critical care nurses also left the field since the start of the pandemic,” the association’s Mayhew said. “Florida hospitals are reporting significant staff burnout among all staff, especially nurses.” Last week, the group and the Safety Net Hospital Alliance of Florida released a study showing no light at the end of the tunnel for hospitals and their staffing woes. The state is facing a shortfall of 59,100 nurses by 2035, which includes a 12% shortage of 37,400 registered nurses. The state will also have a 30% shortage of 21,700 licensed practical nurses. The two groups commissioned the study, which was conducted by IHS Markit, an analytics firm. 
“Florida needs nurses now and well into the future,” Mayhew said in a news release. “As Florida’s population continues to grow, our healthcare system must be ready to meet the ever-increasing demand for services.” The state’s rapid growth puts a strain on healthcare systems and nurses, said Justin Senior, chief executive officer of the Safety Net Hospital Alliance of Florida, which represents 14 publicly operated hospitals, children’s hospitals and teaching institutions. “As we have seen throughout this pandemic, there is no substitute for the care of an excellent nurse,” he said in a news release. There are roughly 194,500 openings nationally for registered nurses each year in the U.S., according to the U.S. Bureau of Labor Statistics. The lucrative pay for travel nurses has compounded the nursing shortage in the state, said Willa Fuller, executive director of the Florida Nurses Association. She’s heard some nurses have been offered $100 an hour to travel, but she doesn’t know to which states or if that’s within Florida. “They are offered an insane amount of money,” Fuller said. “I even heard some are paying off their mortgages.” Despite the cost of travel nurses, many hospitals have no choice but to use them, Fuller said. “They prefer to have an in-house pool,” where nurses who don’t want to work full time are willing to accept shifts to fill needs. Pool nurses get a higher hourly wage than staff nurses but no benefits. Even though Florida led the nation for many weeks with its high delta variant cases and hospitals were overflowing, that didn’t stop out-of-state nurses from coming to Florida, Fuller said.
https://firstcoastaccidentlawyers.com/an-insane-amount-of-money-floridas-demand-for-travel-nurses-raises-concerns-of-price-gouging/
The First Law of Motion states, “A body at rest will remain at rest, and a body in motion will remain in motion unless it is acted upon by an external force.” This simply means that things cannot start, stop, or change direction all by themselves. It takes some force acting on them from the outside to cause such a change. This property of massive bodies to resist changes in their state of motion is sometimes called inertia. Note: The more mass an object has, the more inertia it has. The Second Law of Motion describes what happens to a massive body when it is acted upon by an external force. It states, “The force acting on an object is equal to the mass of that object times its acceleration.” This is written in mathematical form as F = ma, where F is force, m is mass, and a is acceleration. The bold letters indicate that force and acceleration are vector quantities, which means they have both magnitude and direction. The force can be a single force, or it can be the vector sum of more than one force, which is the net force after all the forces are combined. Note: If the object moves at a constant velocity, then there is no acceleration (a = 0) and therefore no net force is acting on the object; any individual forces acting on it must cancel out. When a constant force acts on a massive body, it causes it to accelerate, i.e., to change its velocity, at a constant rate. In the simplest case, a force applied to an object at rest causes it to accelerate in the direction of the force. However, if the object is already in motion, or if this situation is viewed from a moving reference frame, that body might appear to speed up, slow down, or change direction depending on the direction of the force and the directions that the object and reference frame are moving relative to each other. The Third Law of Motion states, “For every action, there is an equal and opposite reaction.” This law describes what happens to a body when it exerts a force on another body.
Forces always occur in pairs, so when one body pushes against another, the second body pushes back just as hard. For example, when you push a cart, the cart pushes back against you; when you pull on a rope, the rope pulls back against you; when gravity pulls you down against the ground, the ground pushes up against your feet; and when a rocket ignites its fuel behind it, the expanding exhaust gas pushes on the rocket, causing it to accelerate. Practice Problems: 1. Determine the acceleration that results when a 12-N net force is applied to a 3-kg object. Ans: F = ma, so a = F/m = 12/3 = 4 m/s². 2. A net force of 15 N is exerted on an encyclopedia to cause it to accelerate at a rate of 5 m/s². Determine the mass of the encyclopedia. Ans: m = F/a = 15/5 = 3 kg. 3. Ben pushes a suitcase with a horizontal force of 50.0 N at a constant speed of 0.5 m/s for a horizontal distance of 35.0 meters. How much force is the suitcase exerting on Ben during this entire motion? Ans: 50.0 N on Ben, directed opposite to Ben’s push (Newton’s third law). 4. An accelerating body has a net force acting on it. True or False? Ans: True. 5. A car going about a roundabout has a net force acting on it. True or False? Ans: True. Changing direction means the velocity is changing, and a change in velocity is an acceleration, which occurs only if a net force causes it. In this scenario, a force perpendicular to the car’s motion, directed toward the centre of the roundabout, causes the change of direction. Reference:
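The practice problems above can be checked with a short script. This is a minimal sketch (not from the original page); the helper names `acceleration` and `mass` are illustrative, and it simply rearranges F = ma.

```python
# Checking the practice problems with Newton's second law, F = m * a.
# Helper names here are illustrative, not from the original tutorial.

def acceleration(net_force_n, mass_kg):
    """a = F / m, in m/s^2."""
    return net_force_n / mass_kg

def mass(net_force_n, accel_ms2):
    """m = F / a, in kg."""
    return net_force_n / accel_ms2

# Problem 1: a 12 N net force applied to a 3 kg object.
a1 = acceleration(12, 3)      # 4.0 m/s^2

# Problem 2: a 15 N net force produces an acceleration of 5 m/s^2.
m2 = mass(15, 5)              # 3.0 kg

# Problem 3 (third law): the suitcase pushes back on Ben with an equal
# and opposite force; the sign encodes the opposite direction.
reaction_on_ben_n = -50.0

print(a1, m2, reaction_on_ben_n)
```

Note that problem 3 needs no calculation at all: by the third law the reaction force always equals the applied force in magnitude, regardless of the speed or distance given in the problem.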
https://chitowntutoring.com/newtons-law-of-motion/
European Antibiotic Awareness Day Tuesday 18 November is European Antibiotic Awareness Day and Wiltshire CCG is taking part. As a leader in providing and commissioning healthcare for the people of Wiltshire, the CCG is aware that antimicrobial resistance (AMR) is a major public health issue and a threat to the future of healthcare. European Antibiotic Awareness Day is a public health initiative that takes place each year to raise awareness of the threat of antibiotic resistance and to promote prudent antibiotic use. The main objectives are to: - Educate, inform and engage patients and healthcare professionals about the appropriate use of antibiotics and reduce the expectation that antibiotics will be prescribed to treat colds, coughs and sore throats - Motivate healthcare professionals to prescribe antibiotics more appropriately - Educate, inform and engage patients and healthcare professionals about the importance of preventing resistance to antibiotics - Reinforce awareness of this problem as a wider international issue by promoting European Antibiotic Awareness Day - Align key messages and activities with the objectives of the UK Five-Year Antimicrobial Resistance Strategy 2013-18 The target audience is: - Frontline prescribing healthcare professionals in primary and secondary care, including GPs, hospital doctors, pharmacists and nurses - Patients and the general public - Parents of young children - Children Prudent use of antibiotics can help stop resistant bacteria from developing and help keep antibiotics effective for future generations. This campaign aims to reduce the overuse and misuse of antibiotics, which is leading to many bacteria becoming resistant to these essential medicines. Antibiotic resistance is one of the biggest threats facing us today.
Antibiotics have dramatically reduced the number of deaths from infections and infectious diseases since they were introduced 70 years ago; we need them to continue this important role in treating serious illness and helping to prevent early deaths. They are now a vital tool for modern medicine, and we also need them to avoid infections during today’s cancer treatments, caesarean sections and many surgeries. In Europe alone, 25,000 people already die each year because of antibiotic-resistant bacteria. What Wiltshire CCG is doing We have employed Infection Prevention and Control Specialist Nurses to help in the fight against the spread of antibiotic-resistant infections. We have also made a pledge to be an antibiotic guardian, striving to stop the overuse and misuse of antibiotics that is leading to many bacteria becoming resistant to these essential medicines. Our Medicines Management Team works closely with primary care providers to monitor prescribing and promote prudent use of antibiotics, and our Quality Team has introduced CQUINs (Commissioning for Quality and Innovation) around antibiotic prescribing and stewardship. Wiltshire CCG is calling on all prescribers to become antibiotic guardians and help slow resistance to antibiotics by not prescribing them unless absolutely necessary.
http://www.wiltshireccg.nhs.uk/news-2/european-antibiotic-awareness-day
Environmental Impact Assessments Virtually all developments result in impacts on the environment. Predicting the nature, scale and duration of those impacts on species and ecosystems requires an understanding of their ecology and the processes that sustain them. We are well recognised for our technical capacity to identify the pathways by which receptor species and communities may be impacted, and Biota has developed objective approaches to determining whether an impact is likely to be significant. Formal Environmental Reviews The level of assessment at which proposed developments will be assessed is set by the Western Australian Environmental Protection Authority (EPA) under the state Environmental Protection Act 1986. Those that will be formally assessed by the EPA are subject to a level of public environmental review, requiring technical studies and a formal review consistent with approved scoping requirements. Proposals can similarly be deemed to be Controlled Actions under the Commonwealth Environment Protection and Biodiversity Conservation Act 1999, which also requires a formally documented public assessment process. Biota has led, prepared and contributed key elements to numerous formal environmental reviews, including meeting both state and Commonwealth requirements. Other Environmental Approvals Proposed developments that may impact on environmental factors, or matters of national environmental significance, could require referral to the EPA, and may also require referral at Commonwealth level under the terms of the Environment Protection and Biodiversity Conservation Act 1999. There are also multiple other possible environmental approvals pathways that may be appropriate to particular projects. Biota has assisted proponents with strategic approaches to the assessment process, advice on the need to refer, potential assessment pathways and support with preparing and lodging referral documentation at both State and Commonwealth levels of government.
Environmental Management Plans Even with the strongest commitment to the avoidance of impacts, residual risks remain that projects may both directly and indirectly affect environmental values. Environmental Management Plans (EMPs) are then needed - or required by regulators - to ensure that appropriate management measures are implemented. Biota has experience with preparing both general construction and operations EMPs to meet regulatory expectations, and we have also prepared species-specific management plans for Threatened fauna and flora and fire management plans for the protection of infrastructure and improving landscape biodiversity values.
https://www.biota.net.au/services-eia
FIELD OF THE INVENTION
This invention relates to polymer blends and, more particularly, to thermoplastic molding compositions of high heat deflection temperature that are blends of certain copolyesters with polyetherimides.
BACKGROUND OF THE INVENTION
Polyesters and copolyesters are useful materials for injection molding and extrusion, but their heat deflection temperatures are often relatively low. Polyetherimides are plastics with excellent flexural strength and performance at high temperature; however, high temperatures are generally required to process them. Polyetherimides are also generally more expensive than polyesters. Blending polyesters with polyetherimides could provide compositions that have satisfactorily high heat deflection temperatures together with processing temperatures lower than those required for pure polyetherimides. Further, economical blends having good flexural strength would be desirable for certain uses. Blends of polyesters and polyetherimides are disclosed in U.S. Pat. No. 4,141,927. Polyesters of terephthalic acid and isophthalic acid with ethylene glycol are disclosed. Cyclohexanediol is mentioned as a possible glycol, but there is no suggestion of cyclohexane-1,4-dimethanol. No mention is made of blends having high heat deflection temperatures and low processing temperatures. Blends of polyarylates with polyetherimides are disclosed in U.S. Pat. Nos. 4,250,279 and 4,908,419. Three-component blends of polyetherimide, polyester, and another polymer are also disclosed in U.S. Pat. Nos. 4,687,819 and 4,908,418. U.S. Pat. No. 4,908,418 mentions a polyester of 1,4-cyclohexanedimethanol as a suitable polyester for the three-component blend. In none of these references is there a suggestion of a polymer blend having the properties of high heat deflection temperature and low melt processing temperature.
The present invention provides novel thermoplastic blends of polyetherimides and copolyesters that combine a lower processing temperature than the polyetherimide, a desirably high heat deflection temperature and a high flexural modulus. In addition, certain compositions of the invention form single-phase solid solutions of excellent clarity. These compositions are especially useful for forming clear molded articles having good high-temperature properties.
BRIEF SUMMARY OF THE INVENTION
The composition of the invention is a thermoplastic polymer blend comprising about 20 to 65 weight percent of a polyetherimide and from about 80 to 35 weight percent of a copolyester of (a) an acid component comprising terephthalic acid or isophthalic acid or a mixture of said acids, and (b) a glycol component comprising ethylene glycol and up to about 60 mol percent 1,4-cyclohexanedimethanol.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a graph of weight fraction of polyetherimide in a polymer blend versus heat deflection temperature. FIG. 2 is a graph of weight fraction of polyetherimide in a blend versus the increase in heat deflection temperature.
DETAILED DESCRIPTION OF THE DRAWINGS
In the compositions of the invention, polyetherimides are blended with certain copolyesters. The copolyester of the blend is preferably a poly(cyclohexane-1,4-dimethylene-co-ethylene terephthalate) (PCT-co-PET) with up to about 60 mol percent of 1,4-cyclohexanedimethanol in the glycol portion of the copolyester. The acid component of the copolyester can also comprise isophthalic acid or a mixture of isophthalic and terephthalic acids. The compositions of the invention, which comprise a blend of these polyesters with from about 20 to about 65 weight percent polyetherimide, can be processed at lower melt temperatures than the polyetherimide, exhibit unexpectedly high heat deflection temperatures and have high flexural strength.
Similar advantages are also obtained when the copolyesters of the blend are modified with minor amounts of other acids or glycols. The novel polyetherimide/polyester blends comprise about 20 to about 65 weight percent of a polyetherimide of the formula: ##STR1## where n represents a whole number greater than 1, for example, 10 to 10,000 or more. The radical --O--R--O-- is in the 3- or 4- and 3'- or 4'-positions. The radical --R-- is a member of the class consisting of: ##STR2## where m is 0 or 1 and Q is a divalent radical of the formula --O--, ##STR3## --S-- or --CxH2x--, and x is a whole number from 1 to 5. The radical --R'-- is a divalent organic radical selected from the class consisting of: (1) aromatic hydrocarbon radicals having from 6 to 20 carbon atoms and halogenated derivatives thereof; (2) alkylene radicals and cycloalkylene radicals having from 2 to 20 carbon atoms; and (3) radicals of the formula: ##STR4## where R'' is --O--, ##STR5## --S-- or --CyH2y--, and y is a whole number from 1 to 5. Such polyetherimides can be formed, for example, by the reaction of an aromatic bis(ether anhydride) of the formula: ##STR6## with a diamino compound of the formula: H2N--R'--NH2. Specific examples of polyetherimides useful in the compositions of the invention and methods of making the polyetherimides are disclosed in U.S. Pat. Nos. 3,847,867; 3,847,869; 3,850,885; 3,852,242; 3,855,178; 3,887,588; 4,017,511; 4,024,110 and 4,141,927. These disclosures are incorporated herein by reference. The novel blend also comprises about 80 to about 35 percent by weight of a copolyester comprising: (1) an acid component comprising terephthalic acid or isophthalic acid or a mixture of said acids, and (2) a glycol component comprising a mixture of 1,4-cyclohexanedimethanol and ethylene glycol with from 5 to about 60 mol percent 1,4-cyclohexanedimethanol.
The polyester is thus a copolymer having the repeating units, ##STR7## where the mol percent of (IV) in the copolyester is from about 5 to 60 percent. This copolyester can be formed, for example, by the reaction of a mixture of terephthalic acid and isophthalic acid or their equivalent esters with a mixture of the two glycols, 1,4-cyclohexanedimethanol and ethylene glycol. The copolyesters can be modified by minor amounts of other acids or a mixture of acids (or equivalent esters) including, but not limited to, phthalic acid, 4,4'-stilbene dicarboxylic acid, 2,6-naphthalenedicarboxylic acid, oxalic acid, malonic acid, succinic acid, glutaric acid, adipic acid, pimelic acid, suberic acid, azelaic acid, sebacic acid, 1,12-dodecanedioic acid, dimethylmalonic acid, cis-1,4-cyclohexanedicarboxylic acid and trans-1,4-cyclohexanedicarboxylic acid. The copolyesters can also be modified by minor amounts of other glycols or a mixture of glycols including, but not limited to, 1,3-trimethylene glycol, 1,4-butanediol, 1,5-pentanediol, 1,6-hexanediol, 1,7-heptanediol, 1,8-octanediol, 1,9-nonanediol, 1,10-decanediol, 1,12-dodecanediol, neopentyl glycol, 2,2,4,4-tetramethyl-1,3-cyclobutanediol, diethylene glycol, bisphenol A and hydroquinone. Preferably the amounts of modifying acids and glycols are each less than 10 mol percent. Polyetherimides of the invention which are preferred are those in which: R is ##STR8## or R' is an aromatic hydrocarbon radical having from 6 to 10 carbon atoms, or an alkylene or cycloalkylene radical having from 2 to 10 carbon atoms; R'' is --O--, ##STR9## or --CyH2y--; Q is --O--, ##STR10## or --CxH2x--; and m, x and y are as defined above. Polyetherimides of the invention which are more preferred are those in which: R is ##STR11## R' is ##STR12## R'' is --CyH2y--; and Q is --CxH2x--.
Polyetherimides of the invention which are even more preferred are those in which: R is ##STR13## The preferred blends of this invention comprise about 35 to about 55 weight percent of polyetherimide and about 65 to about 45 weight percent of the copolyester. Preferred copolyesters are formed from terephthalic acid and a mixture of glycols in the ratio of 20 to 40 mol percent of 1,4-cyclohexanedimethanol to 60 to 80 mol percent of ethylene glycol. Blends of a copolyester of this composition form clear blends (single phase solid solutions) with polyetherimides in all possible ratios of blending. In addition, blends containing this copolyester have high flexural modulus, i.e., greater than 300,000 psi, and a melt extrusion temperature lower than that of the polyetherimide. Preferred blends of polyetherimides and copolyesters of the invention are those in which said copolyester has an acid component which comprises 100 to 50 mol percent terephthalic acid and 0 to 50 mol percent isophthalic acid, and a glycol component which comprises from 95 to 40 mol percent ethylene glycol and 5 to 60 mol percent 1,4-cyclohexanedimethanol. In yet another aspect of the invention, a blend wherein said copolyester has (a) an acid component comprising 100 to 50 mol percent terephthalic acid and 0 to 50 mol percent isophthalic acid and (b) a glycol component comprising 80 to 60 mol percent ethylene glycol and 20 to 40 mol percent 1,4-cyclohexanedimethanol is preferred. Although the copolyester components of blends are referred to for convenience herein as copolyesters of certain acids and certain glycols, it should be understood that the actual syntheses of the copolyesters can employ any equivalent reactants. For example, instead of a diacid, a corresponding anhydride, ester, or acid halide can be employed. The blends of the invention may be compounded in the melt, for example, by using a single screw or twin screw extruder.
Additional components such as stabilizers, fillers, reinforcements, flame retardants, colorants, lubricants, release agents, impact modifiers, and the like can be included in the formulation. This invention can be further illustrated by the following examples of preferred embodiments thereof, although it will be understood that these examples are included merely for purposes of illustration and are not intended to limit the scope of the invention unless otherwise specifically indicated. The starting materials are commercially available unless otherwise indicated. To form the compositions of the examples, polyesters and copolyesters having the compositions listed below were blended with a polyetherimide (PEI). The polyesters and copolyesters were prepared by reacting the dimethyl esters of terephthalic acid (TA) or a mixture of the dimethyl esters of terephthalic acid and isophthalic acid (IA) with ethylene glycol (EG) or with a mixture of ethylene glycol and 1,4-cyclohexanedimethanol (CG). Poly(butylene terephthalate) was also prepared from the dimethyl ester of terephthalic acid and 1,4-butanediol. By 1,4-cyclohexanedimethanol is meant the cis and trans isomers and mixtures thereof.
______________________________________
Composition of the polyesters and copolyesters
Name    Composition
______________________________________
PETG    poly(ethylene-co-cyclohexane-1,4-dimethylene
        terephthalate) with 69 mol percent ethylene and 31 mol
        percent cyclohexane-1,4-dimethylene in the glycol
PCTG1   poly(ethylene-co-cyclohexane-1,4-dimethylene
        terephthalate) with 42 mol percent ethylene and 58 mol
        percent cyclohexane-1,4-dimethylene in the glycol
PCTG2   poly(ethylene-co-cyclohexane-1,4-dimethylene
        terephthalate) with 28 mol percent ethylene and 72 mol
        percent cyclohexane-1,4-dimethylene in the glycol
PCT1    poly(cyclohexane-1,4-dimethylene terephthalate)
PCT2    poly(cyclohexane-1,4-dimethylene terephthalate-co-
        isophthalate) with 83 mol percent terephthalate and 17
        mol percent isophthalate in the acid
PBT     poly(butylene terephthalate)
PET     poly(ethylene terephthalate)
______________________________________

The polyetherimide used in these examples was Ultem 1000™, which is commercially available from General Electric Company. This polyetherimide is essentially the reaction product of 2,2-bis[4-(3,4-dicarboxyphenoxy)phenyl]propane dianhydride: ##STR14## and meta-phenylenediamine. The blends were compounded in the melt on a Werner and Pfleiderer 28 mm twin screw extruder and molded on a Toyo 90 injection molding machine. The heat deflection temperatures (HDT) were measured according to ASTM D648. The results of the HDT measurements are presented in the tables which follow each example. The first two examples below illustrate compositions of the invention.

EXAMPLES

Example 1

Blends of PETG with PEI were prepared and tested. The results are presented in Table 1. All of these blends were clear and exhibited a single glass transition temperature, Tg. This indicates the existence of a single phase solid solution in these blends. They had a slight yellow color, due to the color of the PEI.
TABLE 1
Heat Deflection Temperature Results for Blends of PEI with PETG
______________________________________
           Weight Percent      HDT (°C.)
Example    PETG      PEI       at 264 psi
______________________________________
1A         100         0           65
1B          90        10           --
1C          80        20           77
1D          70        30           85
1E          60        40           98
1F          50        50          110
1G          40        60          121
1H          30        70          134
1I          20        80           --
1J           0       100          188
______________________________________

Example 2

Blends of PCTG1 with PEI were prepared and tested. The results are presented in Table 2.

TABLE 2
Heat Deflection Temperature Results for Blends of PEI with PCTG1
______________________________________
           Weight Percent      HDT (°C.)
Example    PCTG1     PEI       at 264 psi
______________________________________
2A         100         0           65
2B          90        10           --
2C          80        20           76
2D          70        30           84
2E          60        40          100
2F          50        50          113
2G          40        60          122
2H          30        70          137
2I          20        80           --
2J           0       100          188
______________________________________

Comparative Example 1

Blends of PCTG2 with PEI were prepared and tested. The results are presented in Table C1.

TABLE C1
Heat Deflection Temperature Results for Blends of PEI with PCTG2
______________________________________
           Weight Percent      HDT (°C.)
Example    PCTG2     PEI       at 264 psi
______________________________________
C1A        100         0           67
C1B         90        10           --
C1C         80        20           74
C1D         70        30           81
C1E         60        40           87
C1F         50        50           92
C1G         40        60          119
C1H         30        70          141
C1I         20        80           --
C1J          0       100          188
______________________________________

Comparative Example 2

Blends of PCT1 with PEI were prepared and tested. The results are presented in Table C2.

TABLE C2
Heat Deflection Temperature Results for Blends of PEI with PCT1
______________________________________
           Weight Percent      HDT (°C.)
Example    PCT1      PEI       at 264 psi
______________________________________
C2A        100         0           70
C2B         90        10           --
C2C         80        20           73
C2D         70        30           80
C2E         60        40           82
C2F         50        50           87
C2G         40        60          110
C2H         30        70          148
C2I         20        80           --
C2J          0       100          188
______________________________________

Comparative Example 3

Blends of PCT2 with PEI were prepared and tested. The results are presented in Table C3.
TABLE C3
Heat Deflection Temperature Results for Blends of PEI with PCT2
______________________________________
           Weight Percent      HDT (°C.)
Example    PCT2      PEI       at 264 psi
______________________________________
C3A        100         0           66
C3B         90        10           --
C3C         80        20           --
C3D         70        30           76
C3E         60        40           80
C3F         50        50           86
C3G         40        60          104
C3H         30        70          145
C3I         20        80           --
C3J          0       100          188
______________________________________

Comparative Example 4

Blends of PBT with PEI were prepared and tested. The results are presented in Table C4.

TABLE C4
Heat Deflection Temperature Results for Blends of PEI with PBT
______________________________________
           Weight Percent      HDT (°C.)
Example    PBT       PEI       at 264 psi
______________________________________
C4A        100         0           52
C4B         90        10           --
C4C         80        20           59
C4D         70        30           55
C4E         60        40           60
C4F         50        50           75
C4G         40        60           91
C4H         30        70           --
C4I         20        80           --
C4J          0       100          188
______________________________________

Comparative Example 5

Blends of poly(ethylene terephthalate) (PET) with PEI were prepared and tested. The results are presented in Table C5.

TABLE C5
Heat Deflection Temperature Results for Blends of PEI with PET
______________________________________
           Weight Percent      HDT (°C.)
Example    PET       PEI       at 264 psi
______________________________________
C5A        100         0           63
C5B         90        10           70
C5C         80        20           74
C5D         70        30           83
C5E         60        40           94
C5F         50        50          104
C5G         40        60          117
C5H         30        70          128
C5I         20        80          142
C5J          0       100          188
______________________________________

The unexpected advantage of the blends of the present invention is shown in FIGS. 1 and 2. They illustrate the effect of addition of PEI to various polyesters on the heat deflection temperature (HDT) of the blends. FIG. 1 shows the HDT of the blend as a function of the weight fraction of PEI. FIG. 2 shows the increase in HDT resulting from the addition of polyetherimide to the various polyesters. It is clear from FIG.
2 that the inclusion of PETG and PCTG1 in the blend, in accordance with the present invention, results in substantially higher values of HDT compared to addition of PCTG2, PCT1, PCT2, or PBT for a given weight ratio of PEI to polyester when the weight percent of PEI in the blend is about 65 or less. The advantage of PETG and PCTG1 over PET is somewhat less but still significant. There are other additional advantages associated with the blends of the invention. Their flexural strength and flexural modulus increase with the addition of the polyetherimide to the polyester. In addition, these blends can be processed at a much lower temperature than that which is required for processing the pure polyetherimide. The blends of the present invention have excellent flexural strength in comparison to other PEI polymer blends. In this respect they are particularly superior to the ternary blends containing PEI, a polyarylate and a polyester, as disclosed in U.S. Pat. No. 4,908,418 to Holub. Example VIII of this patent discloses such a ternary blend containing 33.3 weight percent PEI that has a flexural modulus of 185,000 psi and a flexural strength of 8280 psi. By comparison, as shown in Table 3 below, the binary blend of PEI and PETG of the present invention containing 30 weight percent PEI has a flexural modulus of 366,000 psi and a flexural strength of 14,030 psi. Compositions of the invention thus exhibit superior flexural modulus and strength and combine this strength advantage with melt processability at moderate temperature and relatively high heat deflection temperatures. These compositions have broad applications, including the formation of molded articles, fibers, sheets or films. Table 3 below further illustrates the valuable mechanical properties of compositions of the invention. It lists the tensile strength, flexural modulus and flexural strength for the series of blends of PEI and PETG reported above in Table 1.
TABLE 3
______________________________________
                         Tensile     Flexural    Flexural
          Weight Percent Strength*   Modulus**   Strength**
Example   PETG    PEI    (psi)       (psi)       (psi)
______________________________________
1A        100       0     6,780      304,000      9,950
1B         90      10       --          --          --
1C         80      20     8,830      348,000     12,360
1D         70      30     9,920      366,000     14,030
1E         60      40    11,070      388,000     15,500
1F         50      50    12,180      414,000     17,130
1G         40      60     6,550      436,000     13,910
1H         30      70     7,030      444,000     14,260
1I         20      80       --          --          --
1J          0     100    15,930      527,000     22,270
______________________________________
*Tensile properties measured according to ASTM D638
**Flexural properties measured according to ASTM D790

Although blends of PEI and PET as listed in Table C5 above also have good flexural strengths, they suffer from the previously noted disadvantage of heat deflection temperatures that are several degrees lower than those of blends of the invention having the same weight ratios of PEI to polyester. The invention has been described in detail with particular reference to preferred embodiments thereof, but it will be understood that variations and modifications can be effected within the spirit and scope of the invention. Moreover, all patents, patent applications (published and unpublished, foreign or domestic), literature references or other publications noted above are incorporated herein by reference for any disclosure pertinent to the practice of this invention.
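The HDT advantage attributed to FIGS. 1 and 2 can be reproduced directly from the tabulated data. Below is a small illustrative sketch (not part of the patent; the values are transcribed from Tables 1, C4 and C5 above) that computes the increase in HDT over the pure polyester for the 50/50 blends:

```python
# HDT values in °C at 264 psi, transcribed from Tables 1, C4 and C5:
# polyester name -> (HDT of the pure polyester, HDT of the 50/50 blend with PEI)
hdt = {
    "PETG": (65, 110),  # Table 1, examples 1A and 1F
    "PBT":  (52, 75),   # Table C4, examples C4A and C4F
    "PET":  (63, 104),  # Table C5, examples C5A and C5F
}

# Increase in HDT caused by adding 50 weight percent PEI
increase = {name: blend - pure for name, (pure, blend) in hdt.items()}

for name, delta in increase.items():
    print(f"{name}: +{delta} °C")
```

Consistent with the discussion of FIG. 2, the PETG blend gains 45 °C over the pure polyester, versus 41 °C for PET and only 23 °C for PBT at the same weight ratio of PEI to polyester.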
From ice storms to heat waves and unseasonable temperatures, Halton Hills has experienced some strange weather over the past few years. These unusual climate events have affected all our lives and have everyone talking. As a community, we can take steps to adapt to and mitigate the effects of this phenomenon. Town Council is committed to taking direct action to reduce greenhouse gases and develop effective climate change initiatives. In May 2019, Town Council declared a climate change emergency in Halton Hills. Through adoption of the declaration, the Town is committed to taking concrete actions to achieve a net-zero target by 2030. The Town is focusing on corporate and community-wide actions to reduce or remove greenhouse gas (GHG) emissions through mitigation and adaptation measures. These measures are mutually beneficial: effective mitigation can reduce climate change impacts, thereby reducing the level of adaptation required by a community, while adaptation actions help us adjust to climate change and protect and preserve the Town's natural assets and ecosystems. To achieve measurable results, the Town has adopted a Low Carbon Resilience Framework. In 2020, Mayor Rick Bonnette and Councillors Jane Fogal and Clark Somerville will lead a Climate Change Action Task Force. The goal is to take direct action in the community to reduce greenhouse gases and establish a significant number of climate change initiatives.

Timeline

Council reports and updates

What you can do

Public engagement

Public engagement is an essential component of all climate change efforts. In 2019 the Town asked for residents' feedback in guiding the Climate Change Adaptation Plan. The information and survey can be found on our engagement platform, Let's Talk Halton Hills.

How to help

There's so much you can do to help! Here are some ideas - check back often for more!
Adaptation - managing the effects

Adaptation is about preparing for and managing the impacts of climate change, i.e. how to absorb the changes that result from climate impacts. Adaptation measures help strengthen the Town's resilience to climate change. Examples of adaptation measures include building more resilient infrastructure, protecting and preserving the Town's natural assets and ecosystems, flood mitigation efforts like planting trees or building green roofs, and installing permeable paving stones to allow for better stormwater management.

Climate Change Adaptation Plan

The Climate Change Adaptation Plan (CCAP) has been developed to address the physical, economic, social and ecological impacts of climate change expected for the Town of Halton Hills over the next 30 years. The CCAP was informed by three key background studies:

Watch a short video to learn more about the CCAP: How Climate Change will impact Halton Hills

Important links

For more information please contact Rija Rasul.

Mitigation - reducing our impact

Mitigation is about dealing with the causes of climate change, i.e. how to reduce greenhouse gas (GHG) emissions. Through mitigation policies, plans and strategies, the Town is focusing on corporate and community-wide actions to reduce or remove the GHG emissions that contribute to climate change. There are many actions that can be taken, such as building and developing to more energy-efficient standards, reducing energy consumption in homes and buildings, choosing sustainable transportation modes, and reducing household waste. Where will Halton Hills' emissions go by 2030 if no new actions are taken?

Low Carbon Transition Strategy

The Low Carbon Transition Strategy will establish the action pathways necessary to achieve the net-zero-carbon-by-2030 target set out in the Climate Change Emergency Declaration of May 6, 2019.
The plan will address de-carbonizing every aspect of town-wide GHG emissions, from homes to transportation to industry to agriculture and waste. In order to achieve the rapid transition to a low-carbon community, the Town will need a Low Carbon Transition Strategy that lays out a clear implementation plan and is developed collaboratively with representation from across the Halton Hills community. The Low Carbon Transition Strategy is currently underway under the guidance of a Multi-Stakeholder Governance Committee made up of Town staff and community members. This strategy will:

Watch this short video on the Low-Carbon Transition Strategy.

How the Low Carbon Transition Strategy will be developed

The community-wide Low-Carbon Transition Strategy (LCTS) is currently being developed with public consultation through the following stages:

Resources

Corporate energy plan

The 2020-2025 Corporate Energy Plan (CEP) constitutes the Town's second Energy Conservation & Demand Management Plan, the first of which was completed in 2014. For more information please contact Michael Dean.

Important links

Green Development Standards

The Green Development Standards were developed in 2010 and have been periodically updated, in 2014 and 2021. The current version, Green Development Standards 3 (GDS v3), was approved by Town Council in June 2021. GDS v3 builds on the foundation of the previous green development standards and puts increased weight on measures that reduce the greenhouse gas emissions of new development in the community. The purposes of the GDS v3 are:

To learn more, view the Green Development Standards v3. For more information please contact Michael Dean.

Resources

Natural assets

The Town has recognized, through approved Town policies, strategies and plans, the value of natural assets such as a healthy tree canopy and natural vegetation in reducing the impacts of climate change.
Natural assets include rivers, wetlands, forests, meadows and open spaces, which provide a range of ecosystem services, capture carbon emissions and benefit our environment and well-being. For more information please contact Jennifer Spence.

Natural assets inventory and valuation project

Natural assets play a vital role in the Town's environmental health and provide natural services for adapting to and mitigating the threats posed by climate change. A well-managed natural asset will continue to produce a sustainable flow of services, such as stormwater management, air quality improvement and carbon reduction, over the long term or even in perpetuity. The Town is currently carrying out a project on natural assets in partnership with Credit Valley Conservation and the Greenbelt Foundation. The project consists of two phases:

Council report and presentation

Privately-owned tree management strategy

The development of a Privately-Owned Tree Management Strategy is currently underway. The Town's strategy will determine how best to protect and enhance privately-owned trees in Halton Hills. Trees are important in the fight against climate change since they absorb and store carbon dioxide -- a greenhouse gas that heats the earth. In Halton Hills, approximately 83% of all tree canopy cover is located on privately-owned lands. This project is expected to be completed in 2020.

Important links

Retrofit Halton Hills

Residential buildings are the second largest source of greenhouse gas (GHG) emissions in Halton Hills, and the Retrofit Halton Hills pilot program uses local improvement charges as a financing tool to help homeowners retrofit their homes. Learn more and visit the Retrofit Halton Hills webpage.

Climate Change Investment Fund

The Climate Change Investment Fund has been established to encourage and assist local community groups and organizations to take action and contribute to the Town's climate change goals. The 2021 deadline for submission is August 3, 2021 by 4:30 p.m.
Apply today!
Application Guide
Application Form

For more information, contact Jennifer Spence, Climate Change Outreach Coordinator.

Sustainable Neighbourhood Action Plan

The Town of Halton Hills and Credit Valley Conservation have developed a Sustainable Neighbourhood Action Plan for the Hungry Hollow - Delrex Neighbourhood. This plan advances the Town's Climate Change Adaptation Plan and identifies actions to help the community become more environmentally sustainable and resilient to the impacts of climate change. Browse the Plan.
https://www.haltonhills.ca/en/your-government/climate-change.aspx
Q: Quintillion bytes to terabytes

I am trying to convert 2.5 quintillion bytes to terabytes (IBM's estimate of the amount of data produced daily); could someone check if my calculations are correct?

1 terabyte is 1000 gigabytes, 1 gigabyte is 1000 megabytes, 1 megabyte is 1000 kilobytes, and 1 kilobyte is 1000 bytes.

1 quintillion is $10^{18}$, so 2.5 quintillion is $2.5 \times 10^{18}$. 1 terabyte is $10^{12}$ bytes.

How many terabytes is it then? How do you work this out? Is this correct?

$2.5 \times 10^{18} / 10^{12} = 2{,}500{,}000$ terabytes?

This sounds like an awful lot to me? Does this sound/look correct?

A: Your calculation is correct. There are two different usages of kilo/mega/gigabytes, one with factors of $1000$ and one with factors of $2^{10}=1024$. Since you're dealing with a rather rough estimate, the difference is probably not important.
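For what it's worth, the arithmetic is easy to sanity-check in a few lines of Python; the binary-prefix figure is included only to show how little the two conventions mentioned in the answer differ at this rough scale:

```python
bytes_per_day = 2.5e18        # 2.5 quintillion bytes (IBM's estimate)
tb_decimal = 10**12           # 1 terabyte = 10^12 bytes (decimal convention)
tib_binary = 2**40            # 1 tebibyte = 2^40 bytes (binary convention)

terabytes = bytes_per_day / tb_decimal
tebibytes = bytes_per_day / tib_binary

print(f"{terabytes:,.0f} TB")   # 2,500,000 TB
print(f"{tebibytes:,.0f} TiB")  # ≈ 2,273,737 TiB
```

So yes: 2,500,000 TB per day in decimal units, and roughly 9% less when counted in tebibytes.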
Audio Episode 6: I Gotta Be Me An idea a lot of us carry over from our childhoods is that me is always incorrect. Who doesn't remember this scenario: YOU: Is it all right if me and Tom go to the park and play? PARENT: Tom and I. YOU: Is it all right if Tom and I go out and play? Despite our parents' good intentions, the correction was often left unexplained. Therefore, it's easy to think that me is always wrong—and to conclude that this is based solely on politeness, i.e. allowing another's name to go first. Unfortunately, this misconception leads to constructions like this: These apples are for Ned and I. It turns out that in this second instance, it's correct to say, "These apples are for Ned and me." You could even put the "me" first and it would still be correct (shock! horror!). Now, I could natter endlessly about subjects and objects and bore everyone to tears. Instead, here's a quick test you can do to check the correctness of I versus me, and it flies in the face of the "politeness" logic: take your friend out of the sentence completely and see how it sounds. Here's how the test works: Situation 1. "Is it all right if Tom and [I / me] go to the park?" Remove "Tom and" from this sentence and you have the following: "Is it all right if I go to the park?" (Sounds good.) or "Is it all right if me go to the park?" (Sounds like Cookie Monster.) Situation 2. "These apples are for Ned and [I / me]." Remove "Ned and" from this sentence and you have the following: "These apples are for I." (Sounds creepy.) or "These apples are for me." (Sounds good.) That's the test. It's a great little tool. Just be sure to leave an apple for Ned.
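For fun, the removal test is mechanical enough to script. Here is a toy sketch (the function name and the bracketed `[I / me]` placeholder are my own, purely illustrative); it only handles the "X and [I / me]" word order used in the episode's examples:

```python
def removal_test(sentence: str, friend: str) -> list[str]:
    """Remove 'friend and' from the sentence, then try each pronoun in turn.
    The reader's ear, not the code, judges which result sounds right."""
    stripped = sentence.replace(f"{friend} and [I / me]", "[I / me]")
    return [stripped.replace("[I / me]", pronoun) for pronoun in ("I", "me")]

for candidate in removal_test("These apples are for Ned and [I / me].", "Ned"):
    print(candidate)
# These apples are for I.
# These apples are for me.
```

The second printed sentence is the one that sounds right, so "me" wins, just as the episode concludes.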
This character template was created for a Gen Con 2000 event, an AD&D 2nd edition update to the Caves of Chaos found in B1: Keep On The Borderlands. Most of the info was taken directly from Palladium Books’ Revised RECON RPG by Erick Wujcik, Kevin Siembieda, Matthew Balent and Maryann Siembieda. Download PDF version of sheet: adnd-tl7-soldier.pdf

MODERN ARMY SOLDIER
CLIMATE/TERRAIN: Any
FREQUENCY: Uncommon
ORGANIZATION: Squad
ACTIVITY CYCLE: Any
DIET: Omnivorous
INTELLIGENCE: Average to Very (8-12)
TREASURE: Incidental
ALIGNMENT: Any
NO. APPEARING: 5-20 per squad
ARMOR CLASS: 10 or 6
MOVEMENT: 12 or 9
HIT DICE: 2+2
THAC0: 19
NO. OF ATTACKS: 1 or by weapon type (see below)
DAMAGE/ATTACK: By weapon type (see below)
SPECIAL ATTACKS: Nil
SPECIAL DEFENSES: Move Silently, Hide in Shadows
MAGIC RESISTANCE: Nil
SIZE: M
MORALE: Steady
WEAPON PROFICIENCIES: Rifle, Brawling, Pistol, Knife
NON-WEAPON PROFICIENCIES: Area knowledge (11), Camouflage (11), Demolition (10), Driving (truck) (11), Scrounging (9), Swimming (10), Survival (pick one – forest, jungle, desert, etc.) (11), Tracking (9)
SPECIAL ABILITIES: Move Silently (as per thief) (20%), Hide in Shadows (as per thief) (20%), Climbing (as per thief) (80%)
SAVING THROWS: As 2nd-level warrior

EQUIPMENT
Clothing: camouflage fatigues (pants, shirt-jacket, underwear, sun hat), rank insignia removed; metal helmet with cloth covering; rubber band around helmet holds toothbrush and paste; jungle boots, socks, sunglasses; dogtags.
Attached gear on first belt: first aid dressing case, compass case, canteen, bayonet, weapon cleaning kit, eight ammo pouches with eight M-16 30-round clips, two smoke grenades and two fragmentation grenades.
Attached gear on second belt: .357 Colt backup revolver, 40 extra rounds of ammunition for revolver.
Rucksack: contains five extra ammo clips for rifle, extra canteen, an extra rifle bolt, a poncho, poncho liner, and five C-rations.
On the outside of the rucksack dangle two smoke and two fragmentation grenades.
Pocket items: lighter, a fork, wire cutters, P-38 can opener, extra shoelaces, wire, deck of playing cards, local paper money (if applicable), insect repellent, packets of powdered drink mix, writing paper, pencil.
Strapped on back or carried: M-16 assault rifle, belt of .50-caliber ammo.

This kit description is based on a Vietnam-era United States Army infantry soldier after six months of active duty in the field. These soldiers have learned that speed, stealth and cover are much better than their standard issue flak jackets; however, if a flak jacket is worn, change AC 10 and move 12 to AC 6 and move 9. Each squad will have a leader (sergeant or lieutenant) equal to a 3rd-level fighter (THAC0 18, 3+3 HD). For green recruits fresh out of boot camp, treat them as 1st-level fighters (THAC0 20, 1+1 HD) and have them make the usual mistakes – they wear their ranks openly, they carry too much gear into the field, etc. Note that using the basic weapons and statistics described here, just one soldier can inflict an average of 45 hit points of damage per minute against lightly protected targets. For AD&D2e notes about firearms, download the full PDF sheet.
http://epicsavingthrow.com/add2e-vietman-era-soldier/
A large whale shark, Rhincodon typus, was observed in the Bay of Fundy, Canada on August 22, 1997. The sighting was at 44°15′19″N, 67°44′07″W. Whale sharks are a circumglobal species occurring in the warmer waters of the tropical and subtropical seas, and no prior sightings of this animal north of 42°N have been reported. The reasons for the shark to be in the Bay are unknown. It is important to note that the whale shark may be able to tolerate the colder waters of the Bay of Fundy, although there have been no subsequent sightings in the Bay.
https://bioone.org/journals/northeastern-naturalist/volume-13/issue-1/1092-6194(2006)13%5B57%3AROOART%5D2.0.CO%3B2/Rare-Occurrence-of-a-Rhincodon-typus-Whale-Shark-in-the/10.1656/1092-6194(2006)13%5B57:ROOART%5D2.0.CO;2.short
The Department of Civil Engineering of Izmir University of Economics pursues the objective of producing knowledge and technology, and of applying and sharing them, by carrying out education, research and community service to universal standards and contributing to the local and national development of our country. The Department of Civil Engineering aims to educate the inquisitive engineers of the future, with professional and ethical values, cultural knowledge, the ability to develop contemporary solution algorithms for civil engineering problems and a commitment to continuous improvement, and to be a department that produces knowledge for the benefit of humanity. The aim of the Department of Civil Engineering is to prepare students for the future by equipping them with the advanced knowledge required by civil engineering, in keeping with today’s technological developments. For this purpose, students will carry out applied work using the laboratory resources, alongside theoretical instruction from the knowledgeable and dynamic academic staff of our university, who closely follow technological developments. The need for qualified civil engineers who speak a foreign language and are competent in the advanced technologies demanded by projects financed by foreign investors in accordance with international standards has been steadily increasing, both abroad and in Turkey. In particular, construction projects have to be modeled very well so that multi-functional complex structures can be built in a very short time, economically, and to international quality norms; furthermore, construction planning, systematic control and the quality management system have to be carefully thought out. These systems require, in particular, highly qualified knowledge of computers (CAM, CAD), knowledge of technology governance and construction techniques, and knowledge of a foreign language, especially English.
Only a small part of the current stock of civil engineers in our country has these characteristics, and they fail to meet the need. Therefore, the need for civil engineers with knowledge of modern information technologies, computer-aided design, practice and governance, as well as foreign languages, has been increasing every day in our country. Izmir University of Economics aims to educate future civil engineers qualified to meet this need, via its available computer system infrastructure and informatics systems, and with the help of academic members specializing in computer-aided design as well as academic members with experience in practice and in the laboratory. The medium of education in the Civil Engineering program is English; moreover, students will learn a second foreign language offered by our university during their studies. Graduating with a good command of a second foreign language, and equipped with theoretical and practical knowledge as well as a vision for the future, these students will go a long way toward meeting the needs of the construction sector.
http://ects.ieu.edu.tr/akademik.php?section=cie.ieu.edu.tr
a valid hypothesis can never be wrong

Hello dear friends. In this post on the solsarin site, we will talk about “a valid hypothesis can never be wrong”. Stay with us. Thank you for your choice.

What Is a Scientific Hypothesis?

A scientific hypothesis is the initial building block in the scientific method. Many describe it as an “educated guess,” based on prior knowledge and observation. While this is true, the definition can be expanded. A hypothesis also includes an explanation of why the guess may be correct, according to the National Science Teachers Association.

Hypothesis basics

A hypothesis is a suggested solution for an unexplained occurrence that does not fit into current accepted scientific theory. The basic idea of a hypothesis is that there is no pre-determined outcome. For a hypothesis to be termed a scientific hypothesis, it has to be something that can be supported or refuted through carefully crafted experimentation or observation. This is called falsifiability and testability, an idea that was advanced in the mid-20th century by a British philosopher named Karl Popper, according to the Encyclopedia Britannica. A key function of this step in the scientific method is deriving predictions from the hypotheses about the results of future experiments, and then performing those experiments to see whether they support the predictions. A hypothesis is usually written in the form of an if/then statement, according to the University of California. This statement gives a possibility (if) and explains what may happen because of the possibility (then). The statement could also include “may.”

The evolution of a hypothesis

Most formal hypotheses consist of concepts that can be connected and their relationships tested. A group of hypotheses comes together to form a conceptual framework. As sufficient data and evidence are gathered to support a hypothesis, it becomes a working hypothesis, which is a milestone on the way to becoming a theory.
Though hypotheses and theories are often confused, theories are the result of a tested hypothesis. While hypotheses are ideas, theories explain the findings of the testing of those ideas. “Theories are the ways that we make sense of what we observe in the natural world. Theories are structures of ideas that explain and interpret facts,” said Tanner. A hypothesis can’t be right unless it can be proven wrong A hypothesis is the cornerstone of the scientific method. It is an educated guess about how the world works that integrates knowledge with observation. Everyone appreciates that a hypothesis must be testable to have any value, but there is a much stronger requirement that a hypothesis must meet. A hypothesis is considered scientific only if there is the possibility to disprove the hypothesis. The proof lies in being able to disprove A hypothesis or model is called falsifiable if it is possible to conceive of an experimental observation that disproves the idea in question. That is, one of the possible outcomes of the designed experiment must be an answer that, if obtained, would disprove the hypothesis. Our daily horoscopes are good examples of something that isn’t falsifiable. A scientist cannot disprove that a Piscean may get a surprise phone call from someone he or she hasn’t heard from in a long time. The statement is intentionally vague. Even if our Piscean didn’t get a phone call, the prediction cannot be shown to be false, because he or she still may get a phone call, or may not. A good scientific hypothesis is the opposite of this. If there is no experimental test to disprove the hypothesis, then it lies outside the realm of science. Scientists all too often generate hypotheses that cannot be tested by experiments whose results have the potential to show that the idea is false.
Three types of experiments proposed by scientists - Type 1 experiments are the most powerful. Type 1 experimental outcomes include a possible negative outcome that would falsify, or refute, the working hypothesis. It is one or the other. - Type 2 experiments are very common, but lack punch. A positive result in a type 2 experiment is consistent with the working hypothesis, but a negative or null result does not address the validity of the hypothesis, because there are many possible explanations for the negative result. Interpreting such results calls for extrapolation and semantics. - Type 3 experiments are those whose results may be consistent with the hypothesis but are useless, because regardless of the outcome, the findings are also consistent with other models. In other words, no outcome is informative. Formulate hypotheses in such a way that you can prove or disprove them by direct experiment. Science advances by conducting the experiments that could potentially disprove our hypotheses. Increase the efficiency and impact of your science by testing clear hypotheses with well-designed experiments. Testing a hypothesis Notice that all of the statements above are testable. The primary trait of a hypothesis is that something can be tested and that those tests can be replicated, according to Midwestern State University. An example of an untestable statement is, “All people fall in love at least once.” The definition of love is subjective. Also, it would be impossible to poll every human about their love life. An untestable statement can be reworded to make it testable, though. For example, the previous statement could be changed to, “If love is an important emotion, some may believe that everyone should fall in love at least once.” With this statement, the researcher can poll a group of people to see how many believe people should fall in love at least once.
A hypothesis is often examined by multiple scientists to ensure the integrity and veracity of the experiment. This process can take years, and in many cases hypotheses do not go any further in the scientific method, as it is difficult to gather sufficient supporting evidence. “As a field biologist my favorite part of the scientific method is being in the field collecting the data,” Jaime Tanner, a professor of biology at Marlboro College, told Live Science. “But what really makes that fun is knowing that you are trying to answer an interesting question, so the first step in identifying questions and generating possible answers (hypotheses) is also very important and is a creative process. Then once you collect the data you analyze it to see if your hypothesis is supported or not.” A null hypothesis is a hypothesis that proposes no effect or no difference; it is the default position a test seeks to reject. Often, during a test, the scientist will also study another branch of the idea that may work, which is called an alternative hypothesis, according to the University of California, Berkeley. During a test, the scientist may try to prove or disprove just the null hypothesis, or test both the null and the alternative hypothesis. If a hypothesis specifies an expected direction for the outcome, it is called a one-tailed hypothesis. When a hypothesis is created with no prediction as to the direction of the outcome, it is called a two-tailed hypothesis, because there are two possible outcomes: an effect in either direction. Until the testing is complete, there is no way of knowing which outcome it will be, according to the Web Center for Social Research Methods. During testing, a scientist may come upon two types of errors. A Type I error occurs when the null hypothesis is rejected when it is in fact true. A Type II error occurs when the null hypothesis is not rejected when it is in fact false, according to the University of California, Berkeley.
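The one-tailed/two-tailed distinction can be made concrete with a small calculation. The sketch below, using only Python's standard library, computes the p-value of a one-sample z-test both ways; the sample figures (n = 25, mean 103, hypothesized population mean 100, standard deviation 10) are invented purely for illustration.

```python
import math

def z_test_pvalue(sample_mean, pop_mean, pop_std, n, tails=2):
    """P-value for a one-sample z-test of the null hypothesis that the
    sample was drawn from a population with mean pop_mean."""
    z = (sample_mean - pop_mean) / (pop_std / math.sqrt(n))
    # One-tailed probability from the standard normal survival function,
    # expressed via the complementary error function.
    p_one_tail = 0.5 * math.erfc(abs(z) / math.sqrt(2))
    return min(1.0, p_one_tail * tails)

# Hypothetical sample of 25 observations with mean 103, tested against
# a hypothesized population mean of 100 with standard deviation 10.
p_two = z_test_pvalue(103, 100, 10, 25, tails=2)
p_one = z_test_pvalue(103, 100, 10, 25, tails=1)

# The two-tailed test spreads its rejection region over both directions,
# so its p-value is twice the one-tailed value. Rejecting the null when
# p < 0.05 even though the null is true would be a Type I error; failing
# to reject it when the null is false would be a Type II error.
```

Here the one-tailed test reaches significance at the conventional 0.05 level while the two-tailed test does not, which is exactly why the direction of the hypothesis must be fixed before the data are collected.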
Upon analysis of the results, a hypothesis can be rejected or modified, but it can never be proven to be correct 100 percent of the time. For example, relativity has been tested many times, so it is generally accepted as true, but there could be an instance, not yet encountered, where it does not hold. For example, a scientist can form a hypothesis that a certain type of tomato is red. During research, the scientist then finds that each tomato of this type is red. Though his findings confirm his hypothesis, there may be a tomato of that type somewhere in the world that isn’t red. Thus, his hypothesis is supported, but it has not been proven; a single non-red tomato would refute it.
https://solsarin.com/a-valid-hypothesis-can-never-be-wrong/
Oct 16, 2017 - Construction contractors frequently hire subcontractors to perform many aspects of a construction project. In some cases, all of the work is performed by subcontractors. In order to pay the subcontractors for their work and materials, the contractor typically anticipates receiving payments from the owner for the work performed. Contractors will sometimes seek to transfer the risk of delayed payment onto the subcontractors by adding a “pay when paid” clause into the contracts between the contractor and the subcontractors. Such clauses often lead to legal disputes between contractors and subcontractors. The question that typically arises is whether the contractor must pay the subcontractor if the owner never pays, and when payment is due. Broadly speaking, many payment clauses in construction contracts fall into one of two categories: “pay-if-paid” or “pay-when-paid.” When a property owner fails to pay for subcontracted services or materials, understanding the nature of the contractor and subcontractor’s compensation arrangement is among the first steps in determining which party bears the risk of non-payment. With a “pay-if-paid” clause, the subcontractor bears the risk of non-payment. If the contractor does not receive payment from the property owner, then it is not required to pay the subcontractor for material supplied or services rendered. This type of clause obviously favors the contractor substantially; and, in contract negotiations, subcontractors must consider whether they are willing for payment to be contingent upon the property owner satisfying its financial obligations to the contractor. Both parties should also carefully consider what collection measures the contractor should be required to undertake before it can deny payment to the subcontractor.
Under a “pay-when-paid” clause, the subcontractor is entitled to payment within a specified period of time after payment by the property owner; or, if the property owner does not pay, within a reasonable time after the completion of the subcontractor’s services. As a result, unlike a “pay-if-paid” clause, a “pay-when-paid” clause does not shift the non-payment risk to the subcontractor. The contractor is entitled to wait for payment according to the terms specified in the parties’ agreement; but, if it becomes clear that payment is not forthcoming, the subcontractor is still entitled to the agreed-upon compensation. In other words, the subcontractor can demand payment from the contractor prior to the contractor receiving payment from the owner, and even if the owner never pays. “Pay-if-paid” and “pay-when-paid” clauses are not the only options available. Under varying circumstances, it can be in one or both parties’ best interests to negotiate an alternate payment provision. When structuring the terms of payment, contractors and subcontractors should ensure that their agreements are as clear as possible, as ambiguous payment provisions tend to be ripe for litigation. Contractors and subcontractors should also be aware that prime contracts with property owners can include acceptance rights, lien waivers, prohibitions on “pay-if-paid” and “pay-when-paid” clauses, and other provisions that may limit their options; and, in any case, they should enter into their agreements with a clear understanding of the risks involved and the remedies (if any) available. If you are preparing to negotiate an agreement for a construction project in the Jacksonville area, or if you are facing a potential payment dispute under a construction contract, we encourage you to contact us for a confidential consultation. To discuss your needs with one of our construction attorneys, please call (904) 737-4600 or send us a message online today.
https://www.ansbacher.net/blog/subcontract_agreements/
Tomorrow takes me to the Cape Cod town of Chatham for one of a series of meetings with Lesley alumni across the country. But this year, we’re doing things a little differently. Rather than simply meeting for drinks, hors d’oeuvres and pleasantries — though there will be plenty of those — we’re asking alumni to help us with a little creative problem-solving. We’re calling this initiative Creativity LIVE, and you can follow the discussion on social media using #mycreativity. We’re changing things up because creativity is no longer optional, and competence is no longer sufficient, in higher education, the economy and the world at large. Our country and the global community need university graduates with the ability and experience to integrate competence and creativity to meet the challenges in a variety of areas: • Design solutions to environmental challenges, from recycling to global warming • Integrate digital technology and humanistic pedagogy to enhance student learning • Craft new approaches to end chronic homelessness, especially for children • Start up sustainable businesses that improve the economic core of urban and rural communities • Deliver affordable community-based services for those with psychological challenges or trauma, including veterans • Design, deliver and finance affordable, quality health care in all communities • Bring the visual and performing arts to families, communities, organizations and countries as the common denominator of the human experience • Construct new ways to link immigration and citizenship in the public discourse • Describe new options for affordable housing in regions throughout the country • Balance ethnic identity with national and global citizenship • Mediate regional conflicts and design paths to peace and productivity • Support religious diversity as opposed to religious tyranny Tomorrow, July 17th, from 5:30 to 7 p.m.
I’ll be meeting the Cape Cod Alumni & Friends at the Chatham Bars Inn, 297 Shore Road, Chatham. If you’re a Lesley alumna or alumnus and you’ll be in the area, I urge you to RSVP here and bring your ideas.
http://www.lesley.edu/blog/president/2014/7/creativity-on-the-cape/
Abstract: The primary purpose of specialization in a health profession is to improve the quality of care patients receive; to increase the chances of positive treatment outcomes; and, ultimately, to improve quality of life. Specialties evolve in response to the development of new knowledge or technology that can affect patient care, and the resulting changes in patient-care needs. The rapid, dramatic advancement in drug therapy in recent decades has created a clear need for pharmacy practitioners who specialize in specific kinds of treatment and aspects of care. Specialty certification is a responsible, progressive initiative from the profession to ensure the best possible patient care. The author discusses the changing health-care environment, the historical perspective of specialties, pharmacy specialties, the type of people who seek specialty certification, the value of certification, the specialty certification process, establishment of a new specialty and upcoming examinations. He also provides contact information for the Board of Pharmaceutical Specialties.
http://www.ijpc.com/abstracts/abstract.cfm?ABS=926
OPINION: Children come in all shapes and sizes, but not with a manual. Childhood achievements such as walking and talking are often celebrated signs that things are going well in a child’s life. However, once these achievements start being compared between children (at the park, on Facebook) they can become a cause of anxiety. Why isn’t he crawling yet? Is her language normal? Is there something wrong? It’s often difficult for parents to know whether they should be waiting or worrying. Asking for advice is natural, but lots of opinions can be confusing. Doctors, health professionals or early childhood teachers may give differing opinions about childhood development as they often look at it from different perspectives. Family and friends may give alarmist advice or be falsely reassuring. Parental knowledge and instincts about their children are very powerful, but parents often lack objective reference points to compare their children to. Anxiety, hope, denial and competing priorities can complicate matters. Understanding the key underlying principles of child development will help clarify these issues and outline what needs action, and what action to take. What is ‘normal’ development? The age at which children attain certain skills is variable. While many children achieve skills at a similar age, the range of what is considered “normal development” is in fact far broader than what is considered “common”. For example, it’s common to walk at around 12 months but it can be perfectly normal to not walk until 16 months. Normal development relies on an underlying foundation of elements: a child’s body, brain, well-being and practice. If all these elements are healthy, then it can be normal to be slower in a certain milestone. But if any of these elements are impaired then a child’s development might be problematic, even if their milestones appear at a common age.
Two children who appear to have the same difficulty walking might, in fact, have very different underlying issues and require different interventions. Foundations of child development Body, brain, well-being and practice is one way to group together the vast and expanding body of research about the foundations of childhood development. Body refers to a child’s physical health. Eyesight, hearing, nutrition, muscles and internal organs all need to be in good shape. A child’s metabolic systems, iron and thyroid hormone levels are also important. Brain refers to neural pathways and regions of the brain that are specific for certain skills. For example, there are specific brain centres for motor co-ordination, language and social reciprocity. Genetic code anomalies are important underlying causes of problems. The health of pregnant women, for example having sufficient folate and avoiding alcohol, is also well recognised as important. Well-being refers to social and emotional health that is related to a child’s temperament and nurturing. It presents as a sense of self, resilience and determination. Children need safety, security and reciprocal engagement from their carers and community to thrive. Practice refers to having access to the right environmental opportunities to practise developing skills. A child needs exposure to key experiences and activities for the brain to develop optimally. Children then build future skills based on this. What do I need to know about milestones? Child development is a continuous process of acquiring skills, or milestones, which emerge from the foundations described above. Professionals cluster developmental skills into groups or domains. These are commonly called the motor, communication, cognition and social-emotional domains. Motor: gross motor skills refer to the control of the body and limbs. These are most easily recognised in infancy, and include skills such as head control, sitting and walking.
Fine motor skills refer to the use of hands and fingers, such as when manipulating objects and drawing with precision. The quality of motor skills also depends on muscle tone and co-ordination, which may be smooth, clumsy or imprecise. Communication is one of the best recognised domains and is divided into three components: expressive language (production of words and sentences), receptive language (understanding of sentences) and non-verbal communication or pre-linguistic skills. Pre-linguistic skills are essential for healthy language development. They are the way we communicate in the absence of words and include eye contact, gestures and reciprocal responses. Cognition or intelligence is often signified by problem-solving skills, memory, and identifying key concepts. As children’s cognition develops, they mature in their co-operation and application to new tasks, and they broaden their play skills. Every parent marvels at their child’s ability to learn new things, but assessing intelligence objectively requires a formal test. Social and emotional: babies have an inherent interest in the human voice and movement, and our brains involuntarily mirror the movements we see. Toddlers watch other children and soon want to spend more time with enjoyable people than toys. They are programmed to “copy and paste” what others do. Toddlers observe, mimic and extend on things they see, then look for a response and re-evaluate their actions. Children who have limited “copy and paste” or reduced interest in the perspectives of others tend to learn on their own agenda, and this leads to the slower acquisition of skills. Emotional development manifests as a balance between confidence and seeking reassurance, developing a sound sense of self and others. Instability in early emotional development can result in dysregulation of emotions, unsettled behaviour, or sometimes guarded social responses.
Helping my child’s development Milestones can be useful markers of a child’s progress, but alone they are not good tools for diagnosis. The context, pattern and foundations that underpin childhood development are central to interpreting milestones. A practical way for parents to bring together all of the aspects of child development into everyday experiences can be summarised as Love Talk Sing Read Play. This is a resource for parents containing helpful information on what to expect from your child, how to stimulate them and when to seek further advice. Milestones are measurable evidence of a child’s development but are not always the best way to understand what children need. If you find yourself worrying about your child’s milestones, see your GP or early childhood nurse and start a conversation about your child’s development and what to do next. Milestones are visible and well known. Supporting children’s needs, and understanding their developmental foundations, is much more important than simply measuring when they walked or talked. Chris Elliot is Consultant Paediatrician and Conjoint Associate Lecturer at UNSW. Con Papadopoulos is a specialist in developmental and general paediatrics at UNSW. This opinion piece was first published in The Conversation.
https://newsroom.unsw.edu.au/news/health/what%E2%80%99s-milestone-understanding-your-child%E2%80%99s-development
The exemplary embodiment relates to a system and method for promoting environmental behavior by users of electromechanical devices, such as printers, for execution of jobs. It finds particular application in conjunction with a network printing system in which multiple shared printers are available to users for printing their print jobs and will be described with particular reference thereto. To improve operations, both in terms of environmental impact and cost, organizations such as companies, government organizations, schools, residential facilities, and the like, have attempted to promote a more environmentally conscious behavior in many areas of operation. However, to motivate users to change their habits in order to contribute to a collective objective is a complex matter, both at work and in society at large. The measures used in expressing environmental impact are difficult for users to grasp. The concept of a ton of carbon, for example, is meaningless to many people, both in terms of its size and the impact it may have on the environment. Additionally, employees may view a company's promotion of environmentally conscious behavior as merely a cost saving exercise. In environments such as transportation and home energy consumption, consumers have been provided with environmental information in terms of CO2 consumption (often referred to as the “carbon footprint”), which is widely accepted as a factor affecting climate change. The accurate association of CO2 emissions with specific processes is complex, as the span of the processes and the factors involved are difficult to determine with precision. Experiments done in the UK with respect to home behavior concerning energy consumption have shown positive effects when the information is presented to the consumers by means of “smart meters” that facilitate the understanding of the current and temporal state of use (see, e.g., Darby, S. “Why, what, when, how, where and who? 
Developing UK policy on metering, billing and energy display devices,” in Proc. ACEEE Summer Study on Energy Efficiency in Buildings, Asilomar, Calif. Aug. 17-22, 2008). There is no completely agreed upon definition of what a smart meter is. One widely-used definition introduced by the UK Industry Metering Advisory Group includes several dimensions which go beyond bare measurement of consumption. It includes “storing of measured data for multiple time periods” and “analysis of the data and a local display of the data in a meaningful form to the consumer”. Still, it remains to be specified which data should be presented and which form of presentation would be meaningful to the consumer. Standardization bodies, such as the ISO, address this issue through the creation of technical committees charged with the study of a certain process and determination of the carbon footprint associated with that process. In the area of printing, the ISO Technical Committee overseeing graphic technology standards (ISO TC 130) has initiated a committee for the printing and publishing industries. Currently, there is no standard method for measuring carbon footprint in this sector. Also, there is no consensus as to how to present this information to users in an effective way. CO2 calculations may depend on how far one goes in the production chain. For example, when computing the amount of CO2 consumed by a print job, it is difficult to estimate what account should be taken of the manufacturing process of the printer and the CO2 it consumes, or the transportation costs of paper and ink. Further, these are environmental costs over which the end user may have little or no control. Above-mentioned U.S. application Ser. No. 12/773,165 discloses a system and method for quantifying printer usage for review by a user. Data containing information related to a print job and community data relating to resource usage by members of a plurality of communities within a system are collected. 
A resource profiling component receives the marking engine data and the community data to evaluate resource usage by a user compared to one or more other users within their community. The system of Ser. No. 12/773,165 increases user awareness of printer usage by presenting the information graphically. However, this may not provide sufficient motivation to effect behavioral change. Users often have some choice in the printers that they use and can select options, such as whether to print in black and white or color, the type of paper to use, and so forth. Additionally, they have a choice as to how many times a document is printed. Often, the behavior of users in their printing is not motivated by environmental concerns. They may select, for example, to print on the closest printer, use default settings, or print a document multiple times during its creation, simply as a matter of convenience. Additionally, the print-on-demand nature of most network printing systems may result in printers being woken up from a low energy sleep mode to an awake mode for printing a single document when the user requesting printing did not need the document immediately. As a result, consumables are used and power consumed by devices which may needlessly impact the environment. The exemplary system and method promote environmentally-concerned behavior by users of such devices.
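To illustrate the kind of per-job estimate and community comparison discussed above, the sketch below computes a rough footprint figure from attributes a print server can observe. This is not the method of the cited application; every numeric factor is an invented placeholder, and, as the text notes, no standardized methodology for printing carbon footprints yet exists.

```python
def job_footprint_grams(pages, color=False, duplex=False):
    """Very rough CO2 estimate (grams) for a single print job.

    The per-page and per-sheet factors are illustrative placeholders,
    not standardized or measured figures.
    """
    toner_g_per_page = 6.0 if color else 3.0   # assumed factor
    paper_g_per_sheet = 4.7                    # assumed factor
    # Duplex printing halves the number of sheets but not the toner.
    sheets = (pages + 1) // 2 if duplex else pages
    return toner_g_per_page * pages + paper_g_per_sheet * sheets

def versus_community(user_total, community_totals):
    """User's footprint as a fraction of the community average,
    mirroring the comparison a resource profiling component might make."""
    average = sum(community_totals) / len(community_totals)
    return user_total / average

# A 10-page color job printed simplex vs. duplex: only the paper
# portion of the estimate shrinks, roughly 107 g vs. 83.5 g here.
simplex = job_footprint_grams(10, color=True)
duplex = job_footprint_grams(10, color=True, duplex=True)
```

Expressing the result relative to the community average, rather than as an absolute tonnage, matches the text's observation that raw CO2 figures are hard for users to grasp.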
This book considers literary images of Japan created by David Mitchell, Kazuo Ishiguro, and Tan Twan Eng to examine the influence of Japanese imperialism and its legacy at a time when culture was appropriated as a route to governmentality and violence was justified as a route to peace. Using David Mitchell’s The Thousand Autumns of Jacob de Zoet, Tan Twan Eng’s The Garden of the Evening Mists and Kazuo Ishiguro’s work to examine Japanese militarists’ tactics of usurpation and how Japanese imperialism reached out to the grassroots public and turned into a fundamental belief in colonial invasion and imperial expansion, the book provides an in-depth study of trauma, memory and war. From studying the rise of Japanese imperialism to Japan’s legitimization of colonial invasion, in addition to the devastating consequences of imperialism on both the colonizers and the colonized, the book provides a literary, discursive context to re-examine the forces of civilization which will appeal to all those interested in diasporic literature and postcolonial discourse, and the continued relevance of literature in understanding memory, legacy and war. Book Details - Copyright: 2019 Springer - ISBN-13: 9789811504624 - Publisher: Springer Singapore, Singapore - Language: English - Categories: History, Language Arts, Literature and Fiction, Nonfiction
https://bolivia.bookshare.org/en/bookshare/book/2959382
Nestled on the central-western Tyrrhenian Sea border, Tuscany, or Toscana as the natives call it, boasts nearly 9,000 square miles and approximately 4 million residents. Tuscany is known for its seemingly boundless rolling hills and the mountainous region of the Apuan Alps. Tuscany is a region with an all-season appeal, with a variety of things to enjoy year-round. In the summer, Tuscany offers warm sun and fine sandy beaches. If you’re interested in experiencing one of the best moments in the Tuscan lifestyle, visit during September to October for the grape harvest, or November for the olive harvest. Considered ‘the birthplace of the Italian Renaissance,’ and once the breadbasket of the Roman Empire, Tuscany has a rich culture filled with art, food, and of course, wine. Tuscany is the ideal vacation destination for many reasons. Many come in search of fine art, others to explore the extraordinary countryside. Gourmets and wine buffs descend on Tuscany to enjoy the simple yet wonderful cuisine and wine. Walkers enjoy the mountain paths, cyclists the rolling hills, summer vacationers the sea coast and islands. Students come to learn the beautiful Italian language and culture. TOURS AND ACTIVITIES Boasting some of the world’s most beautiful landscapes, Tuscany is the perfect place to take in the proverbial ‘breath of fresh air.’ There are many tours and activities that can help you enjoy everything Tuscany has to offer, including: Bicycle Tours: Bike the Tuscan countryside, and take in sights like the Chianti, Lunigiana, and Maremma Valleys, or Orcia, Lucca, and Pisa. Don’t forget to stop along the way to savor the rustic Tuscan cuisine, the Chianti wines, and, best of all, the awe-inspiring sunsets. Walking Tours: Slow down and take a walk. Explore centuries-old Tuscan cities and villages; they have plenty to offer.
From the Etruscan tombs of Sarteano, the Uffizi Gallery featuring works from Michelangelo, Botticelli, and Leonardo da Vinci, to the famous terracotta façade of Il Duomo, and medieval castles, there is a piece of history for every vacationer. Horseback Riding Tours: Always wanted to explore the stunning countryside of Italy on horseback, but don’t know how to ride? Take a day-long, guided riding tour and navigate the hills of Chianti, Siena, Florence, Lucca, and the Tuscan coast with no riding experience necessary. Hiking Paths: Feeling a little more adventurous? Trek your way through the hillsides of Fiesole and Settignano, or the National Park (Parco Nazionale delle Foreste Casentinesi) for starters. Or, take your hike to the next level and trail the Apuan Alps of north Tuscany. Wine Routes: Known as the ‘Wine Trails of Tuscany,’ join an intimate wine tour of some of the oldest castles and vineyards in the world, including Barone Ricasoli, a vineyard known most for its wine, Castello di Brolio. Then, continue along to taste all the full-bodied and floral flavors of the Tuscan countryside by visiting the wine regions of Chianti, Montalcino, Montepulciano, and Bolgheri. Chianti Wine Route Tour: Enjoy the charm of the Chianti countryside, one of the most evocative areas of Tuscany, renowned all over the world not only for its famous wine, but also for its landscapes of vineyards, olive trees, and cypresses surrounding churches, parishes, castles and villages hidden among the hills. Tour castles where Chianti Classico is stored, gardens, and town squares. After spending a pleasant afternoon in the heart of the Chianti region, the best way to complete your tasting experience is by indulging in a Tuscan specialty, bistecca alla fiorentina (T-bone steak Florentine style). Delight your palate with a 36 oz T-bone serving for two, washing it down with smooth Chianti Classico. Golf: Feeling sporty?
There is nothing like playing the back nine on the rolling hills of the Tuscan terrain. Enjoy tree-lined courses with the added bonus of an unadulterated view as far as the eye can see. PLACES TO VISIT Florence: The capital of Tuscany, Florence (or Firenze) is the mecca of art, history, and culture not only for the region, but for all of Europe. Known as 'the birthplace of the Italian Renaissance,' Florence possesses some of the greatest works of art of all time, from masters like Michelangelo, Leonardo da Vinci, Giotto, and Dante Alighieri, to name a few. For more information, see our guide to Florence. Siena: A medieval hillside town, Siena is famous for its shell-shaped piazza (or town square) known as Piazza del Campo. Piazza del Campo is the heart of this town, housing the palace, Palazzo Pubblico, and its tower, Torre del Mangia. It is also home to the twice-annual summer horse racing event, Il Palio. And if you're fit enough to brave the 400-stair climb, you'll be rewarded at the top of the tower with a picturesque view of the cathedral and architectural treasure, Duomo di Siena. For more information on Il Palio, check out our special events listings below. Pisa: Home to the Leaning Tower of Pisa, the famous bell tower completed in the 14th century, Pisa has many other artistic and architectural marvels to see. Home to approximately 60,000 students, Pisa has a rich and vibrant nightlife of parties, cultural events, and shows. The best way to visit Pisa is on foot, walking the streets and enjoying the sights and the atmosphere. Lucca: Most noted for the intact Renaissance walls that surround the city, Lucca has many medieval sites to behold. The Basilica di San Frediano, the Church of St. Michele, the Tomb of Ilaria del Carretto, and the 'Holy Face' of the Cathedral of St. Martino are but a few of the many attractions in this small city. Versilia: If miles of fine sandy beaches are your forte, Versilia is the perfect place to visit.
It comprises four areas: Viareggio and Torre del Lago Puccini, Lido di Camaiore, Marina di Pietrasanta, and Forte dei Marmi. Viareggio: Part of Versilia, Viareggio has a history of tourism dating back to the 19th century. Known for its shopping promenade, its beaches, and the most incredible Carnival float parade in Italy, Viareggio is a great choice if shopping and beaches are what you're looking for. Lido di Camaiore and Marina di Pietrasanta: Lido di Camaiore and Marina di Pietrasanta are two seaside resorts known for their opulent secluded villas, parks, and gardens. Camaiore has many historical relics and artistic features, while Pietrasanta is the old capital of Versilia and an epicenter for artistic marble work. Massarosa: Massarosa is famous for the splendid natural environment of each of the 16 tiny hamlets that surround its center. Known since prehistoric times, Massarosa was inhabited during the Roman period and was so important that evidence of the ancient Roman baths at Massaciuccoli can still be found today. Seravezza and Stazzema: At the base of the Apuan Alps, these two towns have plenty of history to offer any tourist. Seravezza has a Medici palace, and Stazzema is known for its historical roots in World War II. San Gimignano: San Gimignano is a small medieval hill town known as the 'City of Beautiful Towers.' Its 14 remaining towers make up its beautiful skyline, and the town itself shows both Gothic and Romanesque influences, a must-see for any architecture buff. The town is also known for its white wine, Vernaccia di San Gimignano, produced from the ancient Vernaccia grape variety grown on the sandstone hillsides of the surrounding area. Montepulciano: Montepulciano is a walled city most often visited on the 'Wine Trails of Tuscany.' It is known for its wine, Vino Nobile, and for its remarkable main square and picturesque Renaissance buildings. Montalcino: Not far from Montepulciano is Montalcino.
Montalcino is one of the oldest cities in Tuscany, famous for its great wine, Brunello di Montalcino. This is a place to visit if you're in search of relaxation, fine wine, and incredible hillside views of neighboring vineyards, olive orchards, and villages. Pienza: Nestled between Montepulciano and Montalcino, we find the stunning Pienza. This city, often referred to as the 'touchstone of Renaissance urbanism,' was rebuilt out of a village called Corsignano by Pope Pius II to be a retreat from Rome. It was among the first towns to employ humanist urban-planning concepts, and it influenced many other European centers thereafter. Cortona: The setting of the famous book Under the Tuscan Sun, Cortona is a hill town surrounded by a 3,000-year-old foundation of Etruscan walls, with layers of history built into its culture and architecture from its Etruscan core. Cortona is a well-rounded place to view art, architecture, and scenic hillsides; some historians even claim an intriguing link to Noah's Ark. MUST SEES Chianti Region: Start your wine tour off with some of the best wine and scenery Tuscany has to offer. The birthplace of Chianti, a wine noted for hints of cherry, plum, and strawberry, among others, the Chianti region lies in the center of Tuscany, between Florence and Siena. Chianti has a landscape characterized by rolling hills covered with vineyards, olive orchards, and lush valleys. Throughout the region there are numerous ancient villages, churches and abbeys, castles and fortresses, farmhouses and villas, making this a great adventure for wine lovers. Volterra: Another of Tuscany's walled hill towns, Volterra has a rich medieval history visible throughout the city. This is a less-traveled tourist spot, but still nothing short of a must-see.
From the yellow-grey panchino stone seen in most structures throughout the city to Piazza dei Priori, one of the most impressive squares in Tuscany, this under-traveled site is sure to impress. Elba Island: The largest of the seven islands of the Tuscan Archipelago, Elba is a favorite among tourists. With over 50 sandy beaches, stands of pine trees, coral reefs, archaeological ruins to explore, and Napoleon's home during his first exile, this island is not short on things to see. Montecatini Terme: If a little rest and relaxation is what you seek, then you've come to the right place. Inhabited as far back as the Paleolithic era, though better known for its Mesolithic period, Montecatini Terme has been renowned for the properties of its waters since the 14th century. Relax at the many thermal establishments and mud baths while taking in the Liberty-style architecture. FOOD SPECIALTIES Panzanella: Enjoy this summer salad, classically made of soaked and drained stale bread, tomatoes, vinegar, and olive oil, topped off with salt and pepper. Onions and basil are common optional additions. Panforte: Akin to fruit cake, this Sienese cake has a strong flavor of candied fruit and spices and is rather dense. Made with honey, sugar, flour, and candied fruits, it is an ideal accompaniment for coffee or a dessert wine after a meal. Cantuccini: A variation on classic biscotti, these biscuits exemplify Tuscan tradition. Often made with almonds, cantuccini are served with Vin Santo dessert wine. Chianti Classico, Brunello di Montalcino, Rosso di Montepulciano: These are only some of the highly appreciated Tuscan wines known throughout the world. SPECIAL EVENTS Palio di Siena: Twice each summer, Siena's Piazza del Campo hosts the famous horse race, the Palio di Siena. The race involves 17 contrade (each representing a neighborhood of Siena), each with a rider. Ten contrade take part in the first race, on July 2nd.
The other seven, plus three from the July race, compete on August 16th. Winning the Palio is a great honor, and the race is highly competitive, with an element of danger; it is not uncommon to see a riderless horse or two cross the finish line. Carnevale di Viareggio: A carnival dating back to the 19th century, the Carnevale di Viareggio is famous for its parade of enormous, colorful float machines on which monsters, politicians, animals, and fantastic creatures move their tentacles, heads, and arms. The construction of the floats is taken very seriously, employing various media to achieve the best float in the parade, with papier-mâché the most prevalent. Luminaria, Regatta and Battle of the Bridge: On June 16th, take part in the most magical night in Tuscany, as the skyline of Pisa is set ablaze with over 70,000 burning candles to kick off the festivities honoring Saint Ranieri, the patron saint of Pisa and protector of all travelers. Following the mystical night of lights, watch as four boats representing the most ancient districts of the city compete in the Regatta (boat race). Upon crossing the finish line, it is up to each team's climber to complete the victory by boarding an anchored boat, climbing the mast, and retrieving the blue silk banner. About a week later, Pisa reenacts the Battle of the Bridge. The event has two parts: a historical military costume parade through the city center and the battle itself, fought in medieval style on Pisa's central bridge (Ponte di Mezzo). In a grueling 'push of war,' combatants push a cart placed on a track on the bridge in two opposite directions; those who remain on the bridge win the contest. Chianti Classico Wine Festival: Rounding out the summer, enjoy the Chianti Classico Wine Festival (Rassegna del Chianti Classico) in Greve. Enjoy a glass of wine or three, fresh olive oil and cheese tastings, theatre, games, and entertainment.
The festival takes place the second weekend of September and the preceding Friday.
https://www.helloitalytours.com/destinations/tuscany/
Memory encoding: Synaptic plasticity requires specific coordinated activation Scientists at the University of Bristol have shown that associative synaptic plasticity involves a complex intracellular signalling cascade requiring the activation of separate Ca2+ sources and metabotropic glutamate receptors. Principal investigator, Dr Jack Mellor, from Bristol's Centre for Synaptic Plasticity, said: "Our research shows that it is not simply the local concentration of calcium ions within single dendritic spines that determines whether a synapse becomes stronger or weaker. There is a much more complex cascade of feedback loops involved in the regulation of synaptic plasticity. This is important because it enables neuromodulator systems to control the triggers for synaptic plasticity." According to the paper, published in Nature Communications, long-term potentiation (LTP) at glutamatergic synapses in dendritic spines depends upon the activation of postsynaptic NMDA receptors (NMDARs) followed by voltage-sensitive Ca2+ channels (VSCCs). Furthermore, inhibition of SK channels by group 1 metabotropic glutamate receptor (mGluR1) activation is also required for lasting synaptic changes. On the postsynaptic membrane of glutamatergic synapses on dendritic spines, an initial depolarisation of the postsynaptic spine is initiated by activation of ionotropic glutamate receptors. This depolarisation can be further amplified by the opening of voltage-gated calcium channels. NMDARs also only open when the neuron is already depolarised, increasing Ca2+ influx. This creates a positive feedback loop that will eventually lead to cell toxicity and death if not regulated.
Regulation comes in the form of SK channels, calcium-activated potassium channels that allow K+ ions to flow out of the cell when intracellular Ca2+ rises too high. However, the opening of SK channels inhibits synaptic plasticity, which depends on sustained depolarisation. [Figure: Synaptic plasticity mechanism in dendritic spines (Credit: Nature/Thomas G. Oertner)] The Nature Communications paper demonstrated that repeated activation of pre- and post-synaptic neurons induced LTP, but only if mGluR1 was active. These receptors act by inhibiting SK channels, leading to sustained depolarisation and enhanced Ca2+ influx. An important, and still unanswered, question is how these mechanisms converge to determine the strength and direction of synaptic plasticity. Dr Mellor said: "Scientifica's Multiphoton Imaging System provided us with an integrated whole-cell patch clamp and calcium imaging system that enabled us to measure both the calcium response within single dendritic spines and the outcome of those calcium responses in terms of synaptic plasticity." Whole-cell patch-clamp recordings were made from CA1 pyramidal neurons visualised under IR-DIC on a SliceScope Pro 6000/Multiphoton Imaging System. The same system was used for two-photon calcium imaging using dual-channel fluorescence.
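The interplay described above (depolarisation driving Ca2+ influx, SK channels damping it, and mGluR1 relieving that brake) can be caricatured in a few lines of Python. This is purely an illustrative toy model with arbitrary constants and made-up update rules, not the paper's actual analysis:

```python
# Illustrative toy model of the feedback loop (arbitrary units and constants):
# depolarisation (v) opens Ca2+ channels, Ca2+ further depolarises the spine,
# and SK channels provide Ca2+-dependent negative feedback on v.

def simulate(sk_active, steps=200):
    v, ca = 0.1, 0.0
    for _ in range(steps):
        influx = 0.5 * v                     # depolarisation opens NMDARs/VSCCs
        sk = 0.8 * ca if sk_active else 0.0  # SK: K+ efflux scales with Ca2+
        v = v + 0.3 * ca - sk                # Ca2+ depolarises; SK hyperpolarises
        v = max(0.0, min(v, 1.0))            # clamp to a bounded range
        ca = ca + influx - 0.2 * ca          # influx minus clearance
    return ca

ca_with_sk = simulate(sk_active=True)    # SK brake intact: feedback is damped
ca_without_sk = simulate(sk_active=False)  # SK inhibited (as by mGluR1)
```

With the SK brake intact the loop settles at a low calcium level, while removing it lets the positive feedback run up to its ceiling, mirroring the sustained depolarisation and enhanced Ca2+ influx the paper associates with mGluR1-dependent LTP.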
https://www.scientifica.uk.com/research-news/memory-encoding-synaptic-plasticity-requires-specific-coordinated-activation
Brazil once had the highest deforestation rate in the world and, as of 2013, still had large areas of forest removed annually. "We abuse the land because we regard it as a commodity belonging to us. When we see land as a community to which we belong, we may begin to use it with love and respect." Aldo Leopold. As mentioned elsewhere on this site, just two countries, Brazil and Indonesia, are estimated to account for approximately 55% of the world's deforestation. Since 1970, over 600,000 sq kilometres (230,000 sq miles) of Amazon rainforest have been destroyed. In 2013, the Amazon covered approximately 5.3 million sq kilometres, only 86% of its original extent. This decrease in rainforest size is due almost entirely to deforestation. Despite reductions in the rate of deforestation in recent years, the Amazon rainforest could still shrink by a further 30-40% in the next 15 years at current rates. Deforestation in the Brazilian Amazon is responsible for as much as 10% of current greenhouse gas emissions, because it removes forest that would otherwise have absorbed them, with a clear effect on global warming. The problem is made worse by the method of removal: many trees are burned to the ground, emitting vast amounts of carbon dioxide into the atmosphere and affecting not only air quality in parts of Brazil but carbon dioxide levels globally. Between May 2000 and August 2006, it is estimated that Brazil lost nearly 150,000 sq kilometres of forest, an area larger than the entirety of England. The Brazilian rainforest is one of the most biologically diverse regions of the world. Over a million species of plants and animals are known to live in the Amazon, and many millions more are unclassified or unknown. With the rapid process of deforestation, the habitats of many animals and plants that live in the rainforests are under threat, and species may face extinction.
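The area figures above can be sanity-checked with some quick arithmetic. A minimal sketch, taking the 5.3 million sq km and 86% figures as given:

```python
# Quick consistency check of the Amazon figures quoted above.
# If 5.3 million sq km remained in 2013 and that is 86% of the original
# extent, the implied original area and cumulative loss follow directly.
area_2013 = 5.3e6            # sq km remaining in 2013
remaining_fraction = 0.86    # share of the original extent still standing

original_area = area_2013 / remaining_fraction
total_loss = original_area - area_2013

print(f"implied original extent: {original_area / 1e6:.2f} million sq km")
print(f"implied cumulative loss: {total_loss / 1e3:.0f} thousand sq km")
```

The implied cumulative loss of roughly 860,000 sq km is consistent in order of magnitude with the 600,000 sq km quoted as lost since 1970, since some clearing predates that year.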
Rainforests are the oldest ecosystems on earth. Rainforest plants and animals have continued to evolve, developing into the most diverse and complex ecosystems on earth. Living in limited areas, most of these species are endemic, or found nowhere else in the world. In tropical rainforests, it is estimated that 90% of the species in the ecosystem reside in the canopy. Since tropical rainforests are estimated to hold 50% of the planet's species, the canopy of rainforests worldwide may hold 45% of life on Earth. We are currently seeking local partners with whom we may collaborate and develop seedling nurseries in the same proven format we have been operating in North Thailand. To learn more about deforestation in Brazil and South America, go to our Research page under Education and Resources. As mentioned elsewhere on this site, tree planting in Brazil or SE Asia has far greater benefits than planting at high latitudes such as in Europe. Research shows that trees only really work to cool the planet if planted in the tropics. In the so-called mid-latitude region, where the United States and the majority of European countries are located, the climate benefit of tree planting for reducing global warming is very low. Tropical forests are very beneficial to the climate because they take up carbon and increase cloudiness, which in turn helps cool the planet. Land and labour costs are also far lower, and the benefits to what are predominantly poor communities far greater. In our view, schemes designed to offset carbon emissions are only effective if trees are planted in tropical climates. Our projects are not only about offsetting carbon emissions, however; they are also about maintaining biodiversity and helping wildlife and communities. It is difficult for tree planting alone to replicate the biodiversity and complexity of a natural forest, but we go to some lengths to achieve a balanced and diverse environment.
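The canopy estimate earlier in this section is simple proportional arithmetic, shown here as a minimal sketch:

```python
# 90% of tropical-rainforest species are estimated to live in the canopy,
# and rainforests are estimated to hold about 50% of the planet's species,
# so the canopy holds roughly 0.90 * 0.50 = 45% of all species on Earth.
canopy_share_of_forest_species = 0.90
forest_share_of_planet_species = 0.50

canopy_share_of_planet_species = (canopy_share_of_forest_species
                                  * forest_share_of_planet_species)
print(f"{canopy_share_of_planet_species:.0%}")
```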
To promote the growth of native ecosystems, WTT advocates that only indigenous trees be planted. Where essential to begin rebuilding desolate areas of land, we may on occasion plant tough, fast-growing native tree species. We seek to plant non-invasive trees that encourage the return of indigenous species and assist natural regeneration. Please support us.
- Jazz (1920): Jazz is a musical genre that originated in the African American communities of New Orleans, Louisiana, in the late 19th and early 20th centuries, with roots in blues and ragtime. Since the Jazz Age of the 1920s, it has been recognized as an important form of musical expression in traditional and popular music, united by the common ties of African-American and European-American musical ancestry. - Jump Blues (1940): Jump blues is a musical subgenre that emerged from the blues in the late 1930s in the United States. In those years the sound of the blues was urbanized into a mixture of classic blues with humorous lyrics and rhythms inherited from boogie-woogie. Louis Jordan and his band are considered the pioneers of this style, which was widely accepted and came to be called jump blues. - Pop (1950): The origins of pop music can be found in a variety of different musical styles, including the jazz piano tunes of ragtime, a musical trend associated with the late 19th and early 20th centuries. Its roots can also be found in the improvised rhythms of the jazz era of the 1920s and 30s and in the orchestras of the big band era, which reigned in the 1940s. - Rock and Roll (1950): A musical genre with a marked rhythm, derived from a mixture of various genres of American folk music (doo-wop, rhythm and blues, hillbilly, blues, and country and western being the most prominent) and popularized since the 1950s. Its most popular singer was Elvis Presley; its most influential guitarist, Chuck Berry; its most important pianist, Jerry Lee Lewis; and its most prominent predecessors included Eddie Cochran, Gene Vincent, Little Richard, Fats Domino, Buddy Holly, and Bill Haley, among others. - Heavy Metal (1960): Metal music began in the late 1960s and rose to prominence in the early 1970s in Great Britain. The term "metal" is believed to come from the hippie movement, when "heavy" meant deep or serious.
Metal music revolves around a few key components: heavily distorted guitar riffs and chords, powerful drums, extra-low-range bass notes, and aggressive or throaty vocals. Occasionally there is an element of speed at play as well, be it the tempo of the song or a fast guitar solo showing technical prowess. - Alternative Rock (1960): An offshoot of the rock genre that became very popular in the 1990s; the term was used generously to describe the bands involved in the phenomenon of the early 1990s. These genres are unified by their collective debt to the punk of the 1970s. - Hip Hop (1970): Born and raised to age 10 in Kingston, Jamaica, DJ Kool Herc began playing records at parties, and between sets that his father's band played, while he was a teenager in the Bronx in the early 1970s. - Punk Rock (1974): The punk counterculture includes a diverse and widely known variety of ideologies, fashion and other forms of expression, visual arts, dance, literature, and film. It is largely characterized by anti-establishment views, the promotion of individual freedom, the DIY ethic, and a focus on a loud and aggressive rock genre called punk rock. - Rap Metal (1980): Rap metal is a musical genre born in the mid-1980s, based on rap rock artists (a fusion of rap and rock) and consolidated in the early 1990s in the United States. It mainly fuses elements of hip hop with heavy metal, though it is generally influenced by other styles as well, such as hard rock, various subgenres of rap, and alternative metal, among several others. - Grunge (1980): Grunge emerged at the end of the eighties, with groups coming mainly from the North American state of Washington, in particular from the Seattle area. The first company to promote and publicize the genre was the Sub Pop record label, which supported bands that would be fundamental to the development of the nascent genre, such as Nirvana, Green River, Pearl Jam, Soundgarden, and Alice in Chains.
https://www.timetoast.com/timelines/evolution-of-music-955d0012-a982-43c1-9cb5-cc9acdf9add2
Directionality and Amount of Broken Glass: Visualizes the direction and quantity of glass particles as the glass is impacted by a force. In this learning object, the student will learn that regardless of the surface onto which a blood droplet falls, the angle or velocity at which it does so, or the volume of the droplet, there are four distinct phases in the reaction of a moving droplet upon impact with a surface. This learning object shows how the shape of a stain defines the angle of impact. In general terms, the more circular the stain, the more perpendicular the angle at which it struck the surface; the more elliptical the shape of the stain, the more acute the angle. With practice and experience, the analyst can recognize the general angle of impact based solely on the shape of the stain. Creative Commons Attribution-NonCommercial 4.0 International License.
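The relationship between stain shape and impact angle described above is commonly quantified in bloodstain pattern analysis as the arcsine of the stain's width-to-length ratio. A minimal sketch; the function name and sample dimensions are illustrative, not taken from the learning object:

```python
import math

def impact_angle_deg(width, length):
    """Estimate the angle of impact in degrees from the dimensions
    of an elliptical bloodstain (standard arcsin(width/length) rule)."""
    if not 0 < width <= length:
        raise ValueError("width must be positive and no larger than length")
    return math.degrees(math.asin(width / length))

print(impact_angle_deg(10, 10))  # circular stain: perpendicular impact
print(impact_angle_deg(5, 10))   # elongated stain: acute impact angle
```

A perfectly circular stain (width equal to length) yields 90 degrees, a perpendicular impact, while a progressively more elliptical stain yields a progressively more acute angle, matching the rule of thumb above.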
https://dev.wisc-online.com/learn/career-clusters/law-public-safety-corrections-and-security/crj1611/directionality-and-amount-of-broken-glass