5 Concerning Cybersecurity Trends of 2022

Trends in cybersecurity in 2022 show that home and business networks continue to expand, and cybercriminals keep developing new ways to breach those networks. Domestically and abroad, organizations are working to assemble cybersecurity teams and implement more technology in order to stay secure. But will their cybersecurity posture be proactive enough? If you’d like to learn about other business trends coming next year, watch our expert-led 2023 Business Trends webinar now!

2022 Cybersecurity Trends

Let’s go into each of the 2022 cybersecurity trends in depth.

1. More Phishing Lures to Watch Out For

Phishing emails—in which a sender pretends to be a trusted figure or company in order to get the user to click a link or download a file—have unfortunately become a common occurrence in our inboxes. In fact, at 37%, phishing was the most common type of cyberattack experienced by organizations in 2022. In another study, 255 million phishing attacks appeared in emails and other channels analyzed for suspicious attachments and links, a 61% increase in phishing attempts compared to 2021. While the cybersecurity community works to use technology to deter malicious actors from hacking into networks, the bad actors themselves develop new strategies to achieve breaches. In a developing trend, hackers are using artificial intelligence to deploy automated phishing attacks that mimic human verbal patterns to fool unsuspecting users. With this technology, the number of attempted attacks could increase exponentially. The statistics show that phishing works. It is therefore important to implement cybersecurity training in your organization to raise awareness of how to identify and prevent it.

Related Blog: Spear Phishing: Why You Should Be Protecting Your Email

2. Ransomware is Here to Stay

Ransomware attacks currently appear in the news cycle regularly. One of the most recent, a reported ransomware attack on the newspaper The Guardian, affected the media organization’s IT network and systems. Malicious hackers also successfully deployed a ransomware attack on Knox College in Illinois, and went as far as contacting the students to let them know their data had been exfiltrated. Hackers usually deal with organizational leadership, but in this case they messaged the affected students directly. A typical ransomware attack involves encrypting the victim organization’s or user’s data in order to demand a ransom for the decryption key. Bad actors make use of large networks to share knowledge and tools and to sell the exfiltrated data. A number of ransomware gangs also have members work together to coordinate attacks on larger enterprises and even governments. 2022 cybersecurity trends show that ransomware is here to stay. A study surveying security leaders found that 79% of reporting organizations encountered ransomware attacks. Of those, 35% lost access to their own data and systems. To combat this trend, organizations and individuals should implement thorough cybersecurity training initiatives. For small and midsized businesses, we recommend creating a cybersecurity program backed by specialists.

3. Our Attack Surface is Larger

The Internet of Things (IoT)—an interconnected network of computing and digital devices—has created many benefits for businesses and individual users alike. However, since we are using more devices than ever before, malicious actors also have a larger attack surface.
In 2022, the number of connected IoT devices reached 11.57 billion. This number is expected to more than double by 2030, surpassing 25 billion devices. While businesses take advantage of IoT to increase customer engagement, automate worker tasks, increase efficiency, and reduce costs, they are also under pressure to protect so many connected devices. This 2022 cybersecurity trend is a consequence of the pandemic that began in 2020. Since many workers migrated to work-from-home environments (the number tripled from 2019 to 2021), organizations are struggling to implement security standards to protect their networks. 32% of cybersecurity professionals report having more difficulty protecting attack surfaces, compared to 28% in 2021. In addition, 60% of knowledge workers are remote, and a sizeable portion of them will continue to work remotely. Cybersecurity experts therefore urge organizations to act now to prevent breaches instead of retroactively fixing the damage once a breach has happened. Remote work may be here to stay, but this does not mean your devices need to stay unprotected. Endpoint protection and network security monitoring solutions are especially useful for keeping your organization’s systems and data safe.

Related Blog: Why You Need Layered Security

4. Healthcare and Education Sectors Affected by Cybercrime

Bad actors’ main goal when directing cyberattacks is financial gain. It is no surprise, then, that the industry most affected by data breaches is finance, according to the 2022 Data Breach Investigations Report. The professional services and healthcare industries follow close behind, with public administration, IT, education, and manufacturing also making the top of the list. Educational organizations were particularly affected by the pandemic. Due to online learning and remote work, the attack surface for these institutions has also grown. While many educational facilities have started to prioritize cybersecurity initiatives, the sector has come last in studies rating cybersecurity standing. Attacks on the healthcare sector have also seen an uptick in 2022. Since patients’ health information can be sold online, data breaches have been a continuing 2022 cybersecurity trend for healthcare facilities. If breached, organizations that handle patients’ protected health information (PHI) may have to pay fines, face legal repercussions, and lose their clients’ trust. On average, the cost of a data breach in the healthcare sector was $7.13 million.

Related Blog: What Is HIPAA Compliance and Why Is It Important?

5. The Cybersecurity Talent Shortage Continues

Organizations may try their best to prevent data breaches and implement the latest cybersecurity solutions. However, the skills gap cybersecurity teams face continues to grow. The number of cybersecurity workers reached an all-time high in 2022. Despite this, the talent gap goes unfilled, and the percentage of needed professionals grows every year. In the US alone, more than 700,000 cybersecurity positions still need to be filled. This is partly due to recruiters requiring a long list of credentials to fill the positions. Another factor affecting the skills gap is the speed at which cyber threats evolve. Since cybercriminals continuously develop new tools and techniques to attempt data breaches, the cybersecurity learning curve is always shifting.
Companies that wish to protect their networks can benefit from partnering with a cybersecurity provider, which can supply the technology and expertise needed to safeguard its clients’ environments.

Bottom Line

These cybersecurity trends of 2022 show that a proactive approach to safeguarding business networks is the best response to growing threats. Ransomware, phishing, a larger attack surface, and the shortage of skilled cybersecurity workers affect not only individuals, but also many businesses and even government organizations. Staying up to date on developing cybersecurity trends, and speaking with your IT team leaders or a cybersecurity consultant, should be a priority for all organizations. Cybersecurity has become a mainstay in business, and the trends of 2022 show that it will remain one. Learn about the upcoming business trends of 2023 so you know what to expect for your organization in the year ahead. Watch our 2023 Business Trends webinar led by industry experts.
https://www.impactmybiz.com/blog/concerning-trends-in-cybersecurity-2022/
My main research area is food culture and subsistence, and how these change over long spans of time. My research is conducted mainly within a field known as biomolecular archaeology. In combination with established archaeological evidence and written source material (from the periods for which it is available), I use both molecular analysis and isotope analysis of food remains found on and inside prehistoric ceramic vessels and in anthropogenic soils. Chronologically, my research spans from the late Palaeolithic to the late Modern period. After defending my doctoral thesis in June 2000, I was a visiting researcher at the Fossil Fuel and Environmental Geochemistry Newcastle Research Group, University of Newcastle upon Tyne, United Kingdom, from November 2000 to March 2001. Since then, I have been employed as an externally funded researcher at the Archaeological Research Laboratory and at the Centre for Cultural Evolution, both at the Department of Archaeology and Classical Studies, Stockholm University. During this time I have been project leader for three external research projects and co-applicant on four (see below). Since August 2016, I have held a permanent position as senior lecturer in Archaeological Science at the Archaeological Research Laboratory, Stockholm University, and since October 2016 I have been engaged as a supervisor within the Marie Skłodowska-Curie European Joint Doctoral Training Site ArchSci 2020.

Projects

As project leader:
2007-2010: A Spartan way of life? On the culture of food and subsistence in Bronze Age Sweden. Funded by the Research Council (Forskningsrådet).
2002-2007: Research Fellowship (grant for a recruitment position as research fellow in archaeology, plus a supplementary grant to the research fellow position (archaeology)). Funded by the Research Council (Forskningsrådet).
2001-2005: By House and Hearth - The chemistry of culture layers as a document of the subsistence of prehistoric man. Co-applicant: Björn Hjulström. Funded by the Research Council (Forskningsrådet).
1 June 2001 - 31 August 2001: Tracing ancient vegetable foods. Funded by the Royal Swedish Academy of Sciences (Kungliga Vetenskapsakademien).

As co-applicant:
2013-2014: Whey to go - detecting prehistoric dairying practices in Scandinavia. Principal Investigator: Prof. K. Lidén, Stockholm University. Co-applicants: Dr. Sven Isaksson, Dr. Gunilla Eriksson. Funded by the Berit Wallenberg Foundation (Berit Wallenbergs stiftelse).
2011-2014: Ceramics before Farming: Prehistoric Pottery Dispersals in Northeast Asia. Principal Investigator: Dr P. Jordan, University of Aberdeen, UK. Co-applicants: Dr B. Fitzhugh, University of Washington (USA), Dr I. S. Zhushchikhovskaya, Russ.Acad.Sci. (Russia), Prof. H. Kato (Project Associate), University of Sapporo (Hokkaido), Dr S. Isaksson (Project Associate), Stockholm University (Sweden), Dr P. S. Quinn, University of Sheffield (UK). Funded by the UK Leverhulme Trust.
2010-2013: Uniquely Human. Principal Investigator: Prof. M. Enquist, Stockholm University. Co-applicants: Prof. Stefano Ghirlanda, Dr Sven Isaksson, Dr Johan Lind. Funded by the Swedish Research Council (Vetenskapsrådet).
2007-2009: Cultaptation - "Dynamics and adaptation in human cumulative culture". Coordinator: Prof. Kimmo Eriksson. Other Principal Investigators: Prof. Magnus Enquist, Prof. Stefano Ghirlanda, Prof. Kevin Laland, Prof. Kerstin Lidén, Prof. Pierluigi Contucci, Prof. Arne Jarrick. Co-applicants: Hanna Aronsson, Micael Ehn, Lewis Dean, Dr. Gunilla Eriksson, Dr Sven Isaksson, Fredrik Jansson, Dr. Jeremy Kendal, Elin Fornander, Dr. Jonas Sjöstrand, Dr. Luke Rendell, Pontus Strimling, Dr. Niklas Janz, Dr. Johan Lind and Christina Schierman. Funded by the EU Sixth Framework Programme.

Academic prizes

2008: From Societas Archaeologica Upsaliensis, "for his successful, multidisciplinary endeavour to unite the natural sciences and the humanities by skilfully and imaginatively weaving together his own biomolecular and archaeological analyses and interpretations."
2001: From the Royal Swedish Academy of Letters, History and Antiquities (Kungliga Vitterhets Historie och Antikvitets Akademien), for "meritorious scholarly work (Food and Rank in Early Medieval Time)".

Publications

A selection from the Stockholm University publication database:

- Article: The impact of environmental change on the use of early pottery by East Asian hunter-gatherers. 2018. Alexandre Lucquin et al. Proceedings of the National Academy of Sciences of the United States of America 115 (31), 7931-7936.
The invention of pottery was a fundamental technological advancement with far-reaching economic and cultural consequences. Pottery containers first emerged in East Asia during the Late Pleistocene in a wide range of environmental settings, but became particularly prominent and much more widely dispersed after climatic warming at the start of the Holocene. Some archaeologists argue that this increasing usage was driven by environmental factors, as warmer climates would have generated a wider range of terrestrial plant and animal resources that required processing in pottery. However, this hypothesis has never been directly tested. Here, in one of the largest studies of its kind, we conducted organic residue analysis of >800 pottery vessels selected from 46 Late Pleistocene and Early Holocene sites located across the Japanese archipelago to identify their contents. Our results demonstrate that pottery had a strong association with the processing of aquatic resources, irrespective of the ecological setting. Contrary to expectations, this association remained stable even after the onset of Holocene warming, including in more southerly areas, where expanding forests provided new opportunities for hunting and gathering. Nevertheless, the results indicate that a broader array of aquatic resources was processed in pottery after the start of the Holocene. We suggest this marks a significant change in the role of pottery of hunter-gatherers, corresponding to an increased volume of production, greater variation in forms and sizes, the rise of intensified fishing, the onset of shellfish exploitation, and reduced residential mobility.
- Article. 2017. Ester Oras et al. Journal of Mass Spectrometry 52 (10), 689-700.
Soft-ionization methods are currently at the forefront of developing novel methods for analysing degraded archaeological organic residues. Here, we present the little-used soft-ionization method of matrix-assisted laser desorption/ionization Fourier transform ion cyclotron resonance mass spectrometry (MALDI-FT-ICR-MS) for the identification of archaeological lipid residues. It is a high-resolution and sensitive method with low limits of detection, capable of identifying lipid compounds in small concentrations, and thus a highly promising new technique for the analysis of degraded lipid components. A thorough methodology development for analysing cooked and degraded food remains from ceramic vessels was carried out, and the most efficient sample preparation protocol is described.
The identified components, also verified by independent parallel analysis by gas chromatography-mass spectrometry (GC-MS) and gas chromatography-combustion-isotope ratio mass spectrometry (GC-C-IRMS), demonstrate its capability of identifying very different food residues, including dairy and adipose fats as well as lipids of aquatic origin. The results obtained from experimentally cooked and original archaeological samples prove the suitability of MALDI-FT-ICR-MS for analysing archaeological organic residues. The sample preparation protocol and identification of compounds provide a future reference for analysing various aged and degraded lipid residues in different organic and mineral matrices.
- Article: Ancient lipids document continuity in the use of early hunter–gatherer pottery through 9,000 years of Japanese prehistory. 2016. Alexandre Lucquin et al. Proceedings of the National Academy of Sciences of the United States of America 113 (15), 3991-3996.
The earliest pots in the world are from East Asia and date to the Late Pleistocene. However, ceramic vessels were only produced in large numbers during the warmer and more stable climatic conditions of the Holocene. It has long been assumed that the expansion of pottery was linked with increased sedentism and exploitation of new resources that became available with the ameliorated climate, but this hypothesis has never been tested. Through chemical analysis of their contents, we herein investigate the use of pottery across an exceptionally long 9,000-y sequence from the Jōmon site of Torihama in western Japan, intermittently occupied from the Late Pleistocene to the mid-Holocene. Molecular and isotopic analysis of lipids from 143 vessels provides clear evidence that pottery across this sequence was predominantly used for cooking marine and freshwater resources, with evidence for diversification in the range of aquatic products processed during the Holocene. Conversely, there is little indication that ruminant animals or plants were processed in pottery, although it is evident from the faunal and macrobotanical remains that these foods were heavily exploited. Supported by other residue analysis data from Japan, our results show that the link between pottery and fishing was established in the Late Paleolithic and lasted well into the Holocene, despite environmental and socio-economic change. Cooking aquatic products in pottery represents an enduring social aspect of East Asian hunter–gatherers, a tradition based on a dependable technology for exploiting a sustainable resource in an uncertain and changing world.
- Article: Okhotsk - arktiskt vildmarksliv [Okhotsk - Arctic wilderness living]. 2016. Sven Isaksson. Överleva 77 (1), 34-43.
- Chapter: Pots in Context. 2016. Ludvig Papmehl-Dufay et al. In dialogue, 55-66.
- Article: A Novel Method to Analyze Social Transmission in Chronologically Sequenced Assemblages, Implemented on Cultural Inheritance of the Art of Cooking. 2015. Sven Isaksson et al. PLoS ONE 10 (5).
Here we present an analytical technique for the measurement and evaluation of changes in chronologically sequenced assemblages. To illustrate the method, we studied the cultural evolution of European cooking as revealed in seven cook books dispersed over the past 800 years. We investigated whether changes in the set of commonly used ingredients were mainly gradual or subject to fashion fluctuations.
Applying our method to the data from the cook books revealed that, overall, there is a clear continuity in cooking over the ages: cooking is knowledge that is passed down through generations, not something (re-)invented by each generation on its own. Looking separately at three main categories of ingredients (spices, animal products and vegetables), however, disclosed that not all ingredients change according to the same pattern. While the choice of animal products was very conservative, changing strictly sequentially, changes in the choice of spices, and also of vegetables, were less constrained. We hypothesize that this may be due to a combination of fashion fluctuations and changes in availability following contact with the Americas during the period studied. The presented method is also applicable to other assemblage-type data, and can thus be of use for analyzing sequential archaeological data from the same area or other similarly organized material.
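To make the idea of measuring change across a chronologically sequenced assemblage concrete, here is a minimal Python sketch. It is not the authors' published method; it merely illustrates one simple way to test for continuity: compare the similarity of chronologically adjacent ingredient sets against a shuffled baseline. The function names (jaccard, permutation_test) and the toy cookbook data are invented for this example.

```python
# Illustrative sketch (not the paper's exact method): test whether adjacent
# assemblages in a chronological sequence are more similar than chance.
import random

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two ingredient sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def mean_adjacent_similarity(assemblages: list[set]) -> float:
    """Average similarity between each assemblage and its successor."""
    sims = [jaccard(a, b) for a, b in zip(assemblages, assemblages[1:])]
    return sum(sims) / len(sims)

def permutation_test(assemblages: list[set], n_shuffles: int = 10_000) -> float:
    """Fraction of random orderings whose adjacent similarity matches or
    exceeds the observed one; a small value suggests real chronological
    continuity rather than chance."""
    observed = mean_adjacent_similarity(assemblages)
    hits = 0
    for _ in range(n_shuffles):
        shuffled = assemblages[:]
        random.shuffle(shuffled)
        if mean_adjacent_similarity(shuffled) >= observed:
            hits += 1
    return hits / n_shuffles

# Hypothetical toy data: common ingredients in four cookbooks over time.
books = [
    {"pepper", "saffron", "almond", "pork"},
    {"pepper", "saffron", "pork", "onion"},
    {"pepper", "pork", "onion", "butter"},
    {"pepper", "onion", "butter", "parsley"},
]
print(f"p ≈ {permutation_test(books):.3f}")
```

A gradual, inherited tradition should make adjacent cookbooks look more alike than randomly reordered ones, whereas pure fashion fluctuation would not; the actual study uses a more elaborate analysis along these lines.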
https://www.su.se/profiles/isak-1.184680
Agatha Christie
Born: 1890
Country: United Kingdom

Dame Agatha Mary Clarissa Christie, Lady Mallowan, DBE (/ˈæɡəθə/; née Miller; 15 September 1890 – 12 January 1976) was an English writer. She is known for her 66 detective novels and 14 short story collections, particularly those revolving around her fictional detectives Hercule Poirot and Miss Marple. Christie also wrote the world's longest-running play, a murder mystery, The Mousetrap, and six romances under the name Mary Westmacott. In 1971 she was appointed a Dame Commander of the Order of the British Empire (DBE) for her contribution to literature.

Christie was born into a wealthy upper-middle-class family in Torquay, Devon. She served in a Devon hospital during the First World War, tending to troops coming back from the trenches, before marrying and starting a family in London. She was initially an unsuccessful writer, with six rejections, but this changed when The Mysterious Affair at Styles, featuring Hercule Poirot, was published in 1920. During the Second World War she worked as a pharmacy assistant at University College Hospital, London, during the Blitz and acquired a good knowledge of poisons, which featured in many of her subsequent novels.
http://anybooksfree.com/authors/books/agatha-christie
This module builds on Investigating psychology 2 and takes a critical and creative approach to methodology in psychology, with a substantive empirical project. Experimentation, survey methodology and text-based qualitative analyses (discourse analysis and phenomenological analysis) are explored through the topics of memory, language, creativity, personality, child development, emotions, and relationships. These topics are also used to present research in the core domains of biological, cognitive, developmental, individual differences and social psychology. In addition, quantitative and qualitative methods are taught. Students can express a preference for the method to be used in their independent project: text-based analysis, experimentation, or survey. Investigating psychology 3 gives you the opportunity to carry out an independent research project with specialist supervision. To facilitate this, students are strongly encouraged to engage with an online activity that outlines the broad options available for the independent project. This takes place before the module begins and is designed to help you decide on your preferences. At the start of the module you can record these preferences, which will then be used to allocate you an appropriate tutor. During the first half of the module the interactive online study guide leads you, week by week, through an exploration of the key methods used in psychological research, investigating how the diversity of methods originated and how psychology relates to both the social and natural sciences. In Block 1, you'll examine how experimentation, survey and text-based methods are used and consider the kind of psychological knowledge that each method generates. The use of experimentation in memory research, and how it relates to biological methods such as brain imaging, will also be reviewed. Our discussion then turns to the use of surveys and explores how attitudes and beliefs about the way children learn and develop relate to our practices in child rearing and education. We review the use of surveys as a method in personality research and in assessing creativity. Experiments and surveys produce data that can be analysed using statistics, and this module builds on the statistical techniques introduced in Investigating psychology 1 and 2. These methods also lend themselves to the use of software, and you will be introduced to professional-grade packages that allow you to produce experimental procedures and questionnaires as well as to collect data in a straightforward and accurate manner. Block 2 considers text-based, qualitative research in psychology. You'll begin with phenomenological analysis, the way we explore our experiences of the world and ourselves. The topics covered include jealousy, close relationships and our experience of emotion. You’ll then turn to discourse analysis, which explores how we use language to create our world. We explore how this method helps us to understand the social construction of health-related issues such as ADHD, and also how we talk about our life story. This returns us to memory research, but using a different methodology. Throughout this first part of the module you will be encouraged to think critically about the methods of data collection and analysis and how they are used. The second part of the module is your opportunity to carry out your own psychological investigation. Under the close supervision of your tutor you'll design and build a study, considering procedural and ethical issues.
You'll collect your data, carry out the appropriate analysis and report your findings as a research report. You’ll also participate in your fellow students’ projects, which will deepen your appreciation of how psychological data are generated. Throughout this process you'll be very well supported, but we stress that this is your project and you'll be expected to take responsibility for it. In our experience, many students find the independent project the most satisfying part of the whole degree. This is one of the core modules in our British Psychological Society (BPS) accredited degrees in psychology. This module is not available for standalone study; it can only be studied as part of a qualification. Normally, you should have successfully completed Investigating psychology 2 (DE200) before you study this module. If you have any doubt about the suitability of the module, please speak to an adviser. You'll be provided with two textbooks and statistical analysis software (SPSS), and you'll have access to the module website. Access to specialist software (Gorilla, Qualtrics and NVivo) to aid experimental, survey and qualitative projects will also be made available through the module website. A computing device with a browser and broadband internet access is required for this module. Any modern browser will be suitable for most computer activities. Mobile devices, and computing devices that do not meet the required specifications (including Chromebook laptops and tablets running the Linux-based Chrome OS as their operating system), will not be able to install or run the SPSS statistics software and thus are not suitable for parts of this module. Inability to use SPSS will prevent you from passing the module. Additional software will be provided, including the SPSS statistics program; you will therefore need to be able to install and run this software on a desktop or laptop computer. The screen of the device must have a resolution of at least 1024 pixels horizontally and 768 pixels vertically. To join the spoken conversation in our online rooms we recommend a headset (headphones or earphones with an integrated microphone). Our module websites comply with web standards, and any modern browser is suitable for most activities. Our OU Study mobile app will operate on all current, supported versions of Android and iOS. It's not available on Kindle. You’ll be assigned a tutor who will give you advice and guidance throughout the module. They will help you with the study material, mark and comment on your written work, and primarily support you in designing, carrying out and producing your project. We offer specialist teaching forums that provide tuition and support on the core areas of psychology; you are encouraged, but not obliged, to take part in them. The tutors staffing these forums will help you with the study materials, as well as marking and commenting on your written assignments. We also offer online tutorials that you are encouraged to participate in. Contact us if you want to know more about study with The Open University before you register. The assessment details for this module can be found in the facts box above. The OU strives to make all aspects of study accessible to everyone, and this Accessibility Statement outlines what studying DE300 involves. You should use this information to inform your study preparations and any discussions with us about how we can meet your needs. Investigating psychology 3 starts once a year – in October.
This page describes the module that will start in October 2022. We expect it to start for the last time in October 2025.
https://www.open.ac.uk/courses/qualifications/details/de300?orig=q07
Supply of Ammonium Bicarbonate and Ammonium Chloride: Basic Introduction

Ammonium bicarbonate is the bicarbonate salt of ammonia, with the molecular formula NH4HCO3. It can also be obtained by passing carbon dioxide into an ammonium carbonate solution. It is a white powder with a strong ammonia smell and is easily soluble in water. An ammonium bicarbonate solution releases carbon dioxide when left in the air or heated, and the solution becomes alkaline. Ammonium bicarbonate is not chemically very stable: it is easily decomposed by heat, generating ammonia (NH3), water (H2O) and carbon dioxide (CO2). The chemical equation is:

NH4HCO3 --(heating)--> NH3↑ + H2O + CO2↑

The ammonia gas has a characteristic pungent smell, which is why places where ammonium bicarbonate fertilizer has been piled for a long time smell sharply of ammonia.

Used as a nitrogen fertilizer, ammonium bicarbonate is suitable for various soils and can supply both the ammonium nitrogen and the carbon dioxide required for crop growth, although its nitrogen content is low and it cakes easily. It is also used as an analytical reagent, in the synthesis of ammonium salts, and for fabric degreasing. As a fertilizer it promotes crop growth and photosynthesis and encourages seedlings to put out leaves; it can be applied as a top dressing or directly as a base fertilizer. In rural China, ammonium bicarbonate is an important nitrogen fertilizer used by farmers. The nitrogen content of pure ammonium bicarbonate is about 17.72%. During storage and transport, the nitrogen it contains volatilizes and is easily lost, so to judge its quality and determine field application rates it is necessary to measure its nitrogen content.

Industrial ammonium chloride (Ammonium Chloride Tech Grade) is a colorless crystal or white crystalline powder. It is odorless; its taste is salty, cool and slightly bitter. It is easily soluble in water and slightly soluble in ethanol. It is a strong electrolyte that dissolves in water, ionizing into ammonium ions and chloride ions. When ammonia gas and hydrogen chloride combine to form ammonium chloride, white smoke is produced. Its hygroscopicity is low, but it can still absorb moisture and cake in humid, rainy weather. Industrial ammonium chloride is mainly used in dry batteries, storage batteries, ammonium salts, tanning, electroplating, precision casting, medicine, photography, electrodes, adhesives, as well as in ore beneficiation, metallurgy and other industries. As the electronics and non-ferrous metal smelting industries have become ever more important to the national economy, the use of industrial ammonium chloride has become more extensive.

Storage and transport: Ammonium bicarbonate and ammonium chloride should be stored in a cool, ventilated and dry warehouse and protected from moisture. Avoid storing or transporting them together with acids or alkalis. Protect them from rain and hot sun during transport, and handle packages carefully during loading and unloading to prevent damage. In case of fire, water, sand or carbon dioxide fire extinguishers can be used.
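The quoted 17.72% nitrogen figure follows directly from the molar masses, and the short Python sketch below verifies it using standard atomic weights. The helper name mass_fraction is ours, introduced only for this illustration; it also computes the corresponding figure for ammonium chloride.

```python
# Quick check: nitrogen mass fraction of NH4HCO3 and NH4Cl from atomic weights.
ATOMIC_MASS = {"N": 14.007, "H": 1.008, "C": 12.011, "O": 15.999, "Cl": 35.45}

def mass_fraction(formula: dict[str, int], element: str) -> float:
    """Mass fraction of one element in a compound given as {symbol: count}."""
    total = sum(ATOMIC_MASS[el] * n for el, n in formula.items())
    return ATOMIC_MASS[element] * formula.get(element, 0) / total

nh4hco3 = {"N": 1, "H": 5, "C": 1, "O": 3}   # NH4HCO3
nh4cl   = {"N": 1, "H": 4, "Cl": 1}          # NH4Cl

print(f"N in NH4HCO3: {mass_fraction(nh4hco3, 'N'):.2%}")  # ≈ 17.72%, matching the text
print(f"N in NH4Cl:   {mass_fraction(nh4cl, 'N'):.2%}")    # ≈ 26.2%
```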
http://en.chinayuhuagroup.com/tananlvhuaan/gongyingtananlvhuaan.html
Modul: A modular toaster, created to address the issue of e-waste by designing for a circular economy, by Deen Peerthy

The inspiration behind this project is sustainability. Through reading great books such as ‘Design for the Real World’ by Victor Papanek, ‘Cradle to Cradle’ by Michael Braungart and William McDonough, and ‘The Story of Stuff’ by Annie Leonard, I realised that design should be used to help better the world, both socially and environmentally. Many products on the market serve little purpose beyond wasting materials, helping no one. Within sustainability, I chose to tackle the e-waste stream. I decided to redesign the toaster, a product that often has a very short lifespan and gets thrown away and replaced far too often. I created a modular toaster, designed to last indefinitely while keeping up with ever-changing trends in the design world. The toaster is designed with both mechanical and aesthetic benefits in mind: all the main functional electronic internals have been integrated into a single module that is user-serviceable and easy to repair, and all the cosmetic parts are modular so that users can replace them to suit their own preferences and style. The cosmetic parts are available in a wide range of colours and combinations (anodised finishes are advised, to allow for ease of aluminium recycling) to suit different interiors, and can be swapped when circumstances change, such as moving accommodation. More information about my project can be found here:
https://www.creative-conscience.org.uk/finalist-new-designers-creative-conscience-environmental-award-2021/
Minors' rights to refuse medical treatment requested by their parents: remaining issues. Nurse practitioners are regularly faced with ethical and legal dilemmas when providing care to minors. Laws may not provide clear direction; there may even be conflicting precedents regarding the status of minors, particularly with regard to the juvenile justice system. This article reviews the status of minors' rights with regard to refusing or consenting to medical tests or treatments. Three cases from one author's (DPG) practice illustrate the issues involved.
Humans aren’t the only species capable of getting on the same wavelength with each other. Research from Berkeley scientists shows that bats have synchronized brain activity when engaging in social behaviors such as grooming, fighting or sniffing each other. Led by Michael Yartsev, assistant professor of bioengineering and of neurobiology, the study is the first to show correlated neural activity during social interactions in a non-human species. In the study, Yartsev and postdoctoral scholar Wujie Zhang used wireless neural recording devices to measure the brain activity of bats interacting in a chamber. These devices captured signals that included the bats’ higher-frequency brain waves, as well as electrical activity from individual neurons. The researchers found surprisingly strong correlations between the bats’ brains, especially for brain waves in the high frequency band. These correlations were present whenever the bats shared a social environment and increased before and during their social interactions. To better understand these correlations, a team of undergraduate students went through hours of high-speed video of the bats, characterizing behavior in each frame. The lead researchers then analyzed the relationship between bat behavior and inter-brain correlation, allowing them to rule out other possible explanations for the synced-up brain activity, such as the bats’ brains simply reacting to the same environment, or the bats engaging in the same behavior. The researchers hope this work will help future studies on how brains process social interactions.
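For readers curious what an "inter-brain correlation" means operationally, here is a minimal, hypothetical Python sketch. It is not the study's actual analysis pipeline: it simply band-passes two simultaneously recorded signals into a high-frequency band (the 30–150 Hz range here is an illustrative choice, not necessarily the paper's) and correlates their amplitude envelopes.

```python
# Illustrative sketch of inter-brain correlation: band-pass two neural signals
# into a high-frequency band, then correlate their amplitude envelopes.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_envelope(x, fs, lo, hi):
    """Band-pass filter a signal and return its amplitude envelope."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

def interbrain_correlation(sig_a, sig_b, fs, lo=30.0, hi=150.0):
    """Pearson correlation between the high-band envelopes of two signals."""
    env_a = band_envelope(sig_a, fs, lo, hi)
    env_b = band_envelope(sig_b, fs, lo, hi)
    return np.corrcoef(env_a, env_b)[0, 1]

# Toy demo: two noisy recordings sharing a common amplitude-modulated 80 Hz component.
fs = 1000
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 80 * t) * (1 + np.sin(2 * np.pi * 0.2 * t))
bat1 = shared + np.random.randn(t.size)
bat2 = shared + np.random.randn(t.size)
print(f"r = {interbrain_correlation(bat1, bat2, fs):.2f}")  # high r indicates synchrony
```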
https://engineering.berkeley.edu/news/2020/04/in-sync/
The University of Evansville offers a variety of outstanding instrumental and vocal ensemble experiences. Our ensembles have performed at venues like Carnegie Hall and been recognized through invited performances by the American Choral Directors Association, College Band Directors National Association, Indiana Music Education Association, and the Elmhurst Jazz Festival. All ensembles perform concerts both on and off campus throughout the year. UE ensemble opportunities include:
- University Choir, Choral Society, Women’s Chorus, Kantorei
- UE’s Schmidt Opera Series
- University Symphony Orchestra, String Ensemble
- Wind Ensemble, University Band, Athletic Bands
- Jazz Ensembles I and II
- Wind, String, and Percussion Chamber Ensembles
Music is an integral part of the University's commitment to a liberal arts education. We welcome and encourage students from all other disciplines to participate along with our music majors in any of these ensembles.

Audition Placement Information for Current Students

Jazz Ensemble auditions will take place beginning at 7:00 p.m. The signup sheet for audition times will be posted at Fine Arts Room 144 (Dr. Zifer's office) in the Krannert Hall of Art and Music. The audition will consist of:
- Blues scales (you may use the scale sheet)
- Two excerpts (see posted PDFs)
- Sight reading
For further information, contact Dr. Timothy Zifer at [email protected].

Fall 2021 University Symphony Orchestra Seating Audition

To audition for the University Symphony Orchestra, the musician must play:
- One two- or three-octave scale of your choice.
- A brief solo (approximately two minutes long).
- Audition excerpts (provided below; all excerpts are marked with RED markers.*)
Note: Violinists wishing to be placed in the first violin section must play the Violin I excerpt and a three-octave scale.
Audition Schedule
- Monday, August 23, 2021
- Tuesday, August 24, 2021
- By appointment
Audition Signup Deadline: Friday, August 20, 2021
For further information, contact Dr. Chun-Ming Chen at [email protected].
* If you have trouble seeing the red marks, please download the file and view it on a desktop or laptop.

The UE Wind Ensemble audition will consist of prepared excerpts, two major scales (one in a flat key, one in a sharp key), a chromatic scale (practical range of the instrument), and sight reading.
Audition Schedule
- Tuesday, August 24, 2021, 2:00 - 4:00 p.m.
- Wednesday, August 25, 2021, 7:00 - 9:00 p.m.
- Thursday, August 26, 2021, 3:00 - 5:30 p.m.
Auditions are held in Krannert Hall of Fine Arts, Room 103. Please sign up for an audition on the UE Bands bulletin board, across from FA 103. For further information, please contact Dr. Kenneth Steinsultz ([email protected]).
Please note: Wind Ensemble will not meet on Thursday, August 26, 2021, due to auditions. The first rehearsal will be Tuesday, August 31, 2021.
Audition Excerpts

For information regarding fall 2021 ensemble placement auditions, please contact:
Jazz Band: Dr. Timothy Zifer, [email protected]
Orchestra: Dr. Chun-Ming Chen, [email protected]
Wind Ensemble: Dr. Kenneth Steinsultz, [email protected]
Office Phone: 812-488-2754
Office Email: [email protected]
Office Location:
https://www.evansville.edu/majors/music/ensIntro.cfm
Birds spend much of their time caring for their feathers: they use their bill to sort through the feathers, cleaning off parasites and dust, and they smooth and align the small interlocking barbules that act like tiny zippers to hold the feathers together neatly. This personal grooming keeps the feathers well maintained and helps keep water from reaching the skin. You may also see birds reaching around to their rump with their bill, often with the tail fanned as they stretch around. Most bird species have a special preening oil gland at the base of the tail (the uropygial gland). Birds wipe a waxy oil from this gland onto their bill and crown and apply it to the rest of the feathers. The preening oil makes the plumage shiny; well-cared-for feathers repel water through their fine structure - more so than through any property of the preening oil itself. So during a light rain shower birds generally stay out, finding food and living their lives. But if the rain is too harsh, or accompanied by winds, the birds need to seek shelter.

What can we do to help birds during periods of bad weather? Most wild birds are not especially strong fliers, and in strong winds they are in danger of being blown into objects such as power lines or tree branches. They can also be hit by twigs or leaves carried by the wind. During storms birds can hide in bushes and dense trees, and they may find calmer spots on the sheltered side of a wood, protected from some of the wind.

Why do we not see birds flying in the rain? Certain birds, such as geese, ducks, swans and gulls, can fly in heavy rain. During storms, though, they use far more energy to fly, and it becomes harder to find food and refuel. Flying in stormy weather is therefore rarely worthwhile, so they generally find a place to sit out the storm. Nor do they migrate in heavy rain, unless they are caught by surprise. They just keep doing what they do best.
https://www.haiths.com/where-do-wild-birds-go-during-bad-weather/
Health literacy is a multi-dimensional concept comprising a range of cognitive, affective, social, and personal skills and attributes. This paper describes the research and development protocol for a large communities-based collaborative project in Victoria, Australia that aims to identify and respond to health literacy issues for people with chronic conditions. The project, called Ophelia (OPtimising HEalth LIterAcy) Victoria, is a partnership between two universities, eight service organisations and the Victorian Government. Based on the identified issues, it will develop and pilot health literacy interventions across eight disparate health services to inform the creation of a health literacy response framework to improve health outcomes and reduce health inequalities.

Methods/Design

The protocol draws on many inputs including the experience of the partners in previous co-creation and roll-out of large-scale health-promotion initiatives. Three key conceptual models/discourses inform the protocol: intervention mapping, quality improvement collaboratives, and realist synthesis. The protocol is outcomes-oriented and focuses on two key questions: ‘What are the health literacy strengths and weaknesses of clients of participating sites?’, and ‘How do sites interpret and respond to these in order to achieve positive health and equity outcomes for their clients?’. The process has six steps in three main phases. The first phase is a needs assessment that uses the Health Literacy Questionnaire (HLQ), a multi-dimensional measure of health literacy, to identify common health literacy needs among clients. The second phase involves front-line staff and management within each service organisation in co-creating intervention plans to strategically respond to the identified local needs. The third phase will trial the interventions within each site to determine if the site can improve identified limitations to service access and/or health outcomes.

Discussion

There have been few attempts to assist agencies to identify, and respond in a planned way to, the varied health literacy needs of their clients. This project will assess the potential for targeted, locally-developed health literacy interventions to improve access, equity and outcomes.
https://bmcpublichealth.biomedcentral.com/articles/10.1186/1471-2458-14-694
Sipping coffee at a sidewalk cafe in upscale Houston Heights, Evelyne Marcks shakes her head at the Texas governor's decision to scrap a mask-wearing mandate before the Covid-19 pandemic is under control. "I don't know who he is trying to please," she said, sitting at a table at the Central City Co-Op, "but it's certainly not people like us from the big cities." "He probably wants to please the right-wing people who live in places where, to be honest, there's no need for a mask," she added, referring to the roughly four million Texans who live in rural areas. But some restaurant owners and clients in the state's largest city, Houston, were perplexed by or even against Governor Greg Abbott's recent decision to drop the mask mandate "and open Texas 100%" beginning Wednesday. "We will continue to ask our customers to mask up," said Jessica Navas, an owner of the Central City Co-Op, which also sells fresh vegetables from area farms. A fervent defender of eating locally and responsibly, Navas added that the Co-Op's mask requirement "will continue so long as CDC guidelines recommend it." The US Centers for Disease Control and Prevention website currently recommends that "people wear masks in public settings, at events and gatherings, and anywhere they will be around other people." Not far from the Co-Op, at the Taco Stand and Burger Joint on Shepherd Drive, Houston Heights' central avenue, owner Matthew Pak has taken a similar stance. - 'No-win situation for restaurants' - "We are not going to change anything that we are doing," he said. "We are going to require all our staff and customers to wear masks, continue sanitizing, keeping everything extra, extra clean, social distancing as much as we can enforce." Those precautions will probably not end soon, he said. "There's only a low percentage that have the vaccine" so far, Pak noted. "I mean, none of my staff has vaccine." So far, some 4.1 million Texans -- 14.2 percent of the population -- have received at least one dose of a Covid-19 vaccine. That figure is about two percentage points below the national average, owing partly to the severe disruptions of the recent historic cold wave in the state. Farther down the avenue, before an enormous Texas flag painted on the corrugated metal wall of Piper's BBQ & Beer, co-owner Richard Orozco ponders the position the governor has put him in. "It's really a no-win situation for the restaurants," he said. "If we choose to enforce the mask policy, there's going to be vocal critics about that. If we say no mask, there'll probably be even more vocal critics," he noted. "It really puts us in a tough spot." He and his partners finally decided to let customers decide whether to wear a mask. At Angela's Oven, in a quieter, residential part of the neighborhood, owners reached the same conclusion. The bakery caters to an affluent and international clientele, part of the gentrification transforming parts of northern Houston. Alex Harsema-Mensonides, a Dutch national who works in the natural gas industry, sips on a hot espresso. Angela's bread and croissants "remind me of my vacations in France," he says. Before the pandemic struck, the bakery had indoor seating for a few customers. Owner Angela, who would not provide her last name, foresees no early return to that practice. "I think our employees will probably (continue to) wear a mask," she said. As for her customers, "I think we'll give them their choice -- but we still social distance."
https://www.ndtv.com/world-news/end-to-mask-rule-puts-texas-restaurants-stuck-in-tough-spot-2385323
This hotel is situated in Via Laura, where two historical figures of the Renaissance left their mark: Lorenzo de’Medici and Sister Domenica of Paradise. Originally it was a country road crossing vegetable gardens, and thus aptly called Via Verzura, later modified into Via Ventura. When Lorenzo de’Medici decided to build a house for courtesans there, the name was changed to Via Laurenziana, then abbreviated to Via Laura. Sister Domenica was the daughter of a farmer from Pian di Ripoli, south of Florence, who worked lands belonging to the convent of St. Brigida al Paradiso. Having entered this same convent and taken the name Sister Domenica del Paradiso, she developed a reputation for sanctity. This did not stop her from giving her nuns a useful and practical occupation: she introduced the art of weaving gold and silver cloths, with great economic success. Even though she was a Dominican, she did not agree with Fra’ Girolamo Savonarola, whom she never quoted in her writings. This is why she earned the friendship of Savonarola’s great antagonists, the Medici, who allowed her to buy a large piece of land on one side of Via Laura (where the present building stands) for a mere 190 florins. In 1511 she began building a new convent, spending some 20,000 gold florins. It was no accident that a Dominican convent loyal to the Medici found it easy to build only one block away from Savonarola’s church, S. Marco. Later on, Pope Clement VII, Lorenzo’s nephew (his father, Giuliano de’Medici, was killed in the famous Pazzi conspiracy), was very generous to Sister Domenica of Paradise. She kept her old name in honour of her former convent, though the new one was called the convent of the Crocetta, after the small red cross that the nuns wore sewn on their habit. Even the street was called Via della Crocetta for a long period of time. Along this same street, in 1502, canon Marco Strozzi founded another convent for six devout ladies: S. Maria degli Angeli, afterwards called S. Maria degli Angiolini, near the Palazzo of the Crocetta that later became the Archeological Museum. On the side of the street where the Hotel Morandi alla Crocetta now stands, the convent of the Crocetta had its gardens and cloisters. On this site, Sister Domenica of Paradise had a vision of Jesus, commemorated by a XVI century tabernacle built to the rear, on Via Giusti. In the Hotel Morandi alla Crocetta, one can admire XVII century frescoes depicting scenes from the life of the Blessed Domenica of Paradise. The convent was later enlarged by the devout princess Maria Maddalena de’Medici, daughter of Grand Duke Ferdinand I. She lived in her Palazzo of the Crocetta, built in 1619, and, in order to visit the nuns more conveniently, she had an overpass built across the street, which can still be seen in Via Laura. In 1757 the architect Luigi Orlandi was engaged to redecorate and modernize the church, which contained the remains of the by then Blessed Domenica of Paradise. At the turn of the century, during the suppression of the monasteries, the convent of the Crocetta was requisitioned and, after various wanderings, the nuns settled in Via Aretina, transferring the remains of the Blessed Domenica of Paradise. The church was partly demolished and incorporated into the building where the school of law of the University of Florence now stands, having formerly housed the general archives of finances when Florence was the capital of Italy.
During the same period, when Florence was the capital of Italy, the cloisters and convent were walled up to create new lodgings to satisfy the immediate need for housing for state employees. The convent of S. Maria degli Angeli was transformed into a conservatory by the Lorraine Grand Dukes.
http://airbnb-host.info/hotels-in-florence
The chairman of FIFA’s referees committee, Pierluigi Collina, says assistant World Cup referees have been advised to avoid flagging close offside calls during potential goal-scoring opportunities and to leave them for the video assistant referees (VAR) to decide. “If you see some assistant referee not raising the flag, it’s not because he’s making mistakes,” Collina told The Guardian. “It’s because he’s respected the instruction to keep the flag down. They were told to keep the flag down when there is a tight offside incident and there could be a very promising attack or a goal-scoring opportunity because, if the assistant referee raises the flag, then everything is finished.” There will reportedly be 13 referees officiating exclusively from the video operations room, watching the control screens. On plays that require the use of VAR, fans inside the stadiums will be shown clips of the play under review, but only once the referee has made his decision and play has restarted. Collina added that VAR referees will be outfitted in uniforms identical to those their colleagues on the pitch will be wearing. “It’s because they sweat like they do on the pitch,” he said last week. “It’s not like watching a game on the couch while drinking coffee. It’s very stressful so they can’t be dressed like a clerk.” The opening match of the 2018 World Cup between host nation Russia and Saudi Arabia will kick off on Thursday at 11:00 a.m. ET.
https://sportwily.com/world-cup-referees-told-not-to-call-tight-offside-penalties/
First, heat the oil in a large pan. Reserve some of the green portion of the spring onion for garnish and add the rest to the pan. Throw in the garlic as well and saute until the garlic just starts to change color. Add the cauliflower florets and give them a quick mix. Now add all the powders (turmeric, red chilli, coriander, cumin) and salt. Add about 2 cups of water and stir well. Cook covered over a medium-low flame until the cauliflower florets are fully cooked, about 30 - 40 mins. Turn off the flame and let the cauliflower cool to room temperature. Once the cauliflower is cool enough, transfer it to a blender/mixie and blend until you get a smooth puree. Transfer the cauliflower puree back to the pan along with the coconut milk. Give it a quick taste and add salt if needed. Heat the pan over a medium flame until the mixture is warm, then turn off the flame immediately. Serve hot, optionally with a garnish of spring onion. Notes | FAQ - Adjust the quantity of spices and salt based on the size of your cauliflower. - Since this is a soup, the flavor of the spices should be subtle, not overpowering. Adjust based on your taste preference. - I used freshly made coconut milk that was neither too thick nor too thin. If using canned coconut milk, adjust the quantity based on its thickness. - For added flavor, you can use vegetable broth instead of water. - Adjust the quantity of liquid based on your desired thickness for the soup.
https://revisfoodography.com/wprm_print/recipe/15213
Today’s business content is replete with words, and reaching your audiences can seem like a jigsaw puzzle. While your content’s structure matters (like optimizing for mobile), one of the best ways to improve readability is to edit out words that lead to ambiguity.

Why care about ambiguous messages? As we read, we rely on language to have clarity and transparency, so that we can understand intent. And as Reid highlighted, when you allow ambiguity to create vague, confusing messages, you hurt your readers’ ability to learn and grow from what they read. As you refine your own messaging, let’s look at two ways you can give readers better specificity, so that you improve your communications’ clarity and successfully connect.

1. Be careful with words that have multiple meanings.

Ambiguous phrasing often emerges in business communications when you use a word with multiple definitions (a polysemous word) in unclear context. Reading ambiguous phrases creates uncertainty and anxiety in our brains instead of trust. And if you’re trying to encourage your readers to act, that eroded trust can also erode their relationship with your brand. You can avoid confusion by fixing polysemous wording. As you do, be sure to establish a clear relationship between each word and the words around it, so that readers won’t miss your meaning. By doing so, you strengthen your content’s intent.

Consider the sentence “I saw a tree on a hill with a telescope.” It could mean any of the following:
- I have seen, through my telescope, a tree on a hill.
- I can use a telescope to cut a tree that is on a hill.
- I’m on a hill with a telescope, and I am sawing a tree.

While an obvious meaning may emerge, you never want to leave room for assumptions in your writing. In business, you typically need to share clear, solid facts. Below are 10 words in business communications that all have multiple definitions. Depending on your context, you may need to choose clearer wording. For further reference, you can check out this comprehensive list of words with multiple meanings.

2. Be specific with words that have opposites.

You can also clarify your communications by thoughtfully using words with opposite meanings, called contranyms. Like a word with different definitions, a contranym can confuse readers by meaning its own opposite. Context is even more critical when using contranyms, so that readers receive the word’s intended meaning from the surrounding verbiage. Take this sentence: “Because of the agency’s oversight, the company’s behavior was sanctioned.” There are two contranyms here: 1) oversight and 2) sanctioned. Oversight can mean either to supervise well or to miss completely. Sanction can mean either to allow or to prohibit. So, did the agency supervise well or miss the mark? Did the company have to cease its action or continue operating? The key lies in your context and your ability to support clarity. If you’re peppering your communications with commonly used contranyms, make sure the relationships you create between words in your sentences are purposeful and help clarify each word’s meaning. Or replace any contranym with a word that doesn’t have an opposite meaning.

As you edit, ask yourself: Are any of my words polysemous or contranyms? Am I providing sufficient context to make my message clear? Taking time to replace ambiguity with clear wording and complete context will make your content easier to grasp. And making life easier for the employees, customers, and clients you work with is always work worth doing.
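As a playful illustration of that self-check, here is a small Python sketch that flags known contranyms and polysemous words in a draft so a writer knows where to double-check the context. The word lists are tiny invented samples (a real checker would need a much fuller resource), and the function name flag_ambiguous is ours.

```python
# Toy editing aid (illustrative only): flag potentially ambiguous words in a
# draft. The word lists below are small hypothetical samples, not exhaustive.
import re

CONTRANYMS = {"oversight", "sanction", "sanctioned", "cleave", "dust", "screen"}
POLYSEMOUS = {"run", "charge", "table", "file", "check"}

def flag_ambiguous(text: str) -> dict[str, list[str]]:
    """Return ambiguous words found in the text, grouped by category."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return {
        "contranyms": sorted(words & CONTRANYMS),
        "polysemous": sorted(words & POLYSEMOUS),
    }

draft = "Because of the agency's oversight, the company's behavior was sanctioned."
print(flag_ambiguous(draft))
# {'contranyms': ['oversight', 'sanctioned'], 'polysemous': []}
```

A flag is only a prompt to re-read the sentence; whether the word is actually ambiguous depends, as the article argues, on the context you build around it.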
https://zuulaconsulting.com/create-engaging-content-by-clearing-out-ambiguity/
Congressional Democrats are probing the possible use of high-tech surveillance tools by federal law enforcement agencies to monitor protesters at nationwide marches against police brutality. House Democrats have in recent days sent letters to the FBI, Drug Enforcement Administration, the National Guard Bureau, Customs and Border Protection and the Defense Department seeking to understand whether authorities have deployed powerful tools like facial recognition and cell phone data-tracking against protesters. And Sen. Edward J. Markey, D-Mass., on Monday sent a list of questions to Clearview AI, the controversial facial recognition technology company that partners with law enforcement agencies and private companies, to ensure it “will not force Americans to choose between sacrificing their rights to privacy or remaining silent in the face of injustice.” In a statement to CQ Roll Call, Hoan Ton-That, Clearview AI’s chief executive, said he would respond directly to Markey’s letter. “Clearview AI’s technology is intended only for after-the-crime investigations, and not as a surveillance tool relating to protests or under any other circumstances,” Ton-That said. The inquiries come amid weeks of mostly peaceful protests across the United States following the deaths of George Floyd, Breonna Taylor and other black Americans at the hands of police. As the protests have continued, concerns about surveillance have grown among civil liberties advocates who say the surveillance could have a chilling effect on those marching. “It's not good for the First Amendment and it's not the right response to the protests,” Neema Singh Guliani, senior legislative counsel at the American Civil Liberties Union, told CQ Roll Call. “The right response is to seriously consider the issues that people are raising, not to create an environment that instills further fear in people and makes them more afraid to speak out.” [Democrats push bigger role for courts to curb police misconduct] In a letter to federal agencies on Tuesday, Reps. Anna G. Eshoo, D-Calif., and Bobby L. Rush, D-Ill., questioned the use of FBI and National Guard surveillance aircraft over protests in Las Vegas and Washington D.C., and CBP drones over Minneapolis, San Antonio and Detroit. They also asked about a BuzzFeed report that said the DEA had been given authority by the Justice Department to “conduct covert surveillance” of protesters and a VICE report that an FBI plane may have deployed technology that imitates a cell phone tower to collect personal data. Eshoo and Rush noted recent articles on tactics protesters can use to protect themselves against invasive surveillance tactics, especially those that can be used against smartphones. “Americans should not have to take proactive measures to protect themselves from government surveillance before engaging in peaceful demonstration,” they wrote. “The fact that the agencies you lead have created an environment in which such headlines are common is, in and of itself, an indication of the chilling effect of government surveillance on law-abiding Americans.” Eshoo and Rush, backed by 33 additional House Democrats, said the agencies should cease any surveillance practices currently in place. Drones over Minneapolis In a separate letter, Democrats on the House Oversight and Reform Committee asked the Homeland Security Department, which encompasses CBP, to explain why one of its drones was flying over protests in Minneapolis. 
Writing to Chad Wolf, the acting secretary of Homeland Security, committee Democrats led by Chairwoman Carolyn B. Maloney, D-N.Y., asked whether the drone, which took off from Grand Forks Air Force Base and flew over protesters on May 29, recorded any video footage of the protest and, if so, how DHS plans to use it. They also asked whether DHS or any other law enforcement officers privy to the drone's video feeds are users of facial recognition technology. The letter also questioned whether use of the drone was legal, given the limits on CBP's jurisdiction beyond 100 miles from the U.S. border.

“This administration has undermined the First Amendment freedoms of Americans of all races who are rightfully protesting George Floyd's killing,” they wrote. “The deployment of drones and officers to surveil protests is a gross abuse of authority and is particularly chilling when used against Americans who are protesting law enforcement brutality.”

The letter was co-signed by Democratic Reps. Jamie Raskin of Maryland, Stephen F. Lynch and Ayanna S. Pressley of Massachusetts, and Alexandria Ocasio-Cortez of New York. The members requested answers from DHS to a detailed list of questions by Friday.

Other Democrats, including House Intelligence Chairman Adam B. Schiff, D-Calif., are probing how the Defense Department may be involved in surveillance efforts. In a letter Monday to Joseph Kernan, the Pentagon’s undersecretary for intelligence, Schiff said he is concerned military personnel may be asked to undertake “unlawful or unethical activities that could violate civil liberties and erode even further the legitimacy of, and trust in, the military and law enforcement.”

“We know that you share our reverence for the rights enshrined in the Constitution and are committed to your duty to protect Americans’ civil liberties and constitutional rights,” Schiff wrote to Kernan. “It is therefore imperative that [you] refrain from any activity that could infringe upon those rights, or even be perceived as doing so.”

Guliani, of the ACLU, said the efforts in Congress are key to understanding whether protesters are being surveilled by federal authorities. But she said Congress should also curb investments in surveillance technologies that raise privacy concerns, especially in the hands of police.
https://rollcall.com/2020/06/09/democrats-seek-answers-on-high-tech-surveillance-of-protesters-by-u-s-agencies/
Da Vinci’s Mona Lisa Comes To Manchester

HERE’S a picture of Coronation Street’s neon-tanned Michelle Keegan recreated as Leonardo Da Vinci’s Mona Lisa. If you read The Da Vinci Code closely, it says that “a woman on the street is the direct descendant of Emily Bishop”, who was secretly married to Ken of Baldwin. Also, the original Mona Lisa (32DD) was already painted a pinky white, but had to be darkened when she returned from a session at the ochre-ologist.

Keegan’s face is being used to add a dash of glamour to the travelling exhibition about the Italian artist. Da Vinci – The Genius opens next month at the Museum of Science & Industry (MOSI) in the soap’s home town of Manchester.
https://www.anorak.co.uk/227875/celebrities/da-vincis-mona-lisa-comes-to-manchester.html
The uneven impact of Covid-19

The pandemic has sharpened the pre-existing economic disparity between men and women. Women are more likely to have lost work and income. They are more likely to work in low-paid, insecure frontline roles. In many of the sectors that have suffered most - retail, hospitality, tourism - women are over-represented. During the pandemic women have continued to do more unpaid domestic and care work than men. During school closures, for instance, 70% of mothers reported being completely or mostly responsible for homeschooling, and mothers were 50% more likely to be interrupted during paid work hours. Covid disproportionately affected women’s mental health. Covid lockdowns also sharply increased the incidence of domestic violence. Low income and migrant status both significantly increase women’s vulnerability to domestic abuse, underlining the need for policymakers to understand how gender intersects with other axes of inequality.

The Fawcett Society has collated evidence on the social and economic impacts of Covid-19 on women and how these have intersected with other axes of inequality. A 2020 survey of 19,950 mothers by campaign group Pregnant Then Screwed found significant employer discrimination against mothers: 15% of mothers had been or were expecting to be made redundant during the pandemic, nearly half of whom said that lack of childcare provision played a role in their redundancy. The Women’s Budget Group has published an analysis of the gender differences in access to coronavirus government support schemes, finding that women were more likely to be furloughed than men and that young women aged 18-25 were the largest furloughed group by age and gender.

Better policymaking

Under the Public Sector Equality Duty, public bodies are required to have 'due regard' to gender and other types of equality. Many organisations concerned with equalities argue that this requires public bodies to undertake equality impact assessments (EIAs) to ensure that policy does not discriminate against women, ethnic minorities and other groups protected under the 2010 Equality Act. Many economists have argued that assessments of policy from government, as well as the media, should be based on a broader account of economic and social progress. This means targeting the reduction of inequalities as well as focussing on GDP growth. 'Gender budgeting', analysing government spending and tax decisions in terms of their impact on women, is one such approach.

The Women's Budget Group sets out how equality impact assessments can ensure that policymakers take account of the different impacts of policy on women. Meaningful equality impact assessments should consider cumulative impact, intersectional impact (for example on women of colour and disabled women), the impact on individuals as well as households, impact over a lifetime and the impact on unpaid care. The Women’s Budget Group has also curated a set of resources on gender budgeting, the analysis of tax and spending decisions from a gender perspective. It argues that this approach can also be applied to other types of inequality.

The Fawcett Society has proposed a new Equal Pay Bill which would modernise UK law on equal pay. The Bill would give women who suspect they are not getting equal pay the ‘Right to Know’ what a male colleague doing the same work is paid, thereby enabling women to resolve equal pay issues without having to go to court.
The care sector

It is generally acknowledged that social care services in the UK have suffered from a long period of political neglect, and entered the Covid-19 pandemic in a fragmented, under-funded and under-staffed condition. There is widespread consensus on the need for reform to make the care system more resilient, expanding access to services and increasing their quality. Investing in public care services would make a significant contribution to tackling gender inequality. Greater public care provision could relieve the burden on unpaid carers, the majority of whom are women. As 80% of the adult social care workforce are also women, action to tackle recruitment and retention challenges in the sector would do much to improve pay and conditions. There is evidence of majority public support for extending the principles underlying the NHS to social care, making it free at the point of need and largely taxpayer-funded.

Over recent decades, as most of the UK's social care provision was outsourced from the public sector, private equity companies have taken over a significant proportion of care homes. It is widely argued that the 'financialisation' of care provision has undermined the quality of service. Modelling by the New Economics Foundation for the NHS has analysed the economic and health cost to society of unpaid care work in England. NEF estimates these costs to be £37bn per year, including lost tax revenue and mental health treatment. It argues that this underlines the economic case for greater public investment in care provision.

The final report of the Commission on a Gender-Equal Economy outlines eight steps required to create a 'caring economy'. These include the creation of a Universal Care Service. The Commission argues that a care-led approach to economic policy could form the basis of economic renewal, akin to the creation of the welfare state in 1945. IPPR have laid out proposals for a social care system free at the point of need, supported by research on public opinion and on the effects of financialisation in the social care system. The Women’s Budget Group has brought together evidence on the need for reform of social care, highlighting the problems of deregulation and privatisation in the care sector and the effects on gender inequality, for example through increasing strain on unpaid carers.

Social infrastructure

Social infrastructure is the term now commonly given to those sectors of the economy - health, education, adult social care and childcare - which are critical for its effective functioning but which are often neglected in both economic theory and policy. Spending on social systems is rarely classed as ‘investment’, despite the investment-like returns in these areas. It can be argued that this reflects a gender bias in economic policymaking. The majority of jobs in social infrastructure sectors are held by women, and many of them by people of colour. Pay is often very low. Investment in these sectors could therefore help to reduce both gender and racial inequalities. Social infrastructure sectors are also 'green', using less energy and material resources than many other sectors, particularly physical infrastructure. Improved access to affordable childcare is a critical part of social infrastructure provision, giving parents, particularly women, the ability to take up and stay in paid work.
Analysis by the Women’s Budget Group estimates that investing in care as part of an economic stimulus package would provide almost three times as many jobs as the equivalent investment in construction. It would narrow gender inequality and also have a positive environmental impact. The Greater Manchester Independent Prosperity Review argues that investment in physical infrastructure alone will not narrow the UK’s unusually pronounced regional inequalities, emphasising the need for social infrastructure spending.

Coram Family and Childcare argues that four key goals should inform childcare policy: making sure every parent is better off working after childcare costs; making sure there is enough high quality childcare for all children, including those of school age; making sure children with special educational needs or disabilities can access high quality childcare; and recognising the value of childcare professionals through pay, professional development and representation.

The gender pay gap

Women in full-time employment in the UK are paid 7% less on average per hour than their male counterparts. Among employees as a whole, women earn on average 15% less than men per hour. This is largely because women are over-represented in part-time employment, which is less well paid. One factor behind the gender pay gap is illegal pay discrimination - unequal pay for equal work. Another is the uneven burden of unpaid care work. A key issue is the 'maternity penalty', the economic cost to mothers of taking on more unpaid child-rearing than men, which slows their career progression and leads many women to take on more flexible, less senior and less well-paid roles. Encouraging men to take on more childcare, for example by increasing paternity leave, could help redress this imbalance. The introduction of mandatory gender pay gap reporting in large employers has generally been recognised as incentivising pay equality. But the pay gap is proportionately much greater among higher-paid jobs than lower-paid, a consequence of the fact that in many sectors senior positions are still dominated by men.

A report by the Women's Budget Group, the University of Nottingham and the University of Warwick found that the largest economic burden of the pandemic has been experienced by working-class women, and called for sick pay to match the National Living Wage. In its report The State of Pay, IPPR provides an overview and explanation of the drivers of the gender pay gap in the UK, and how these might be redressed. Examining the history and causes of the gender pay gap, Linda Scott of the Saïd Business School at Oxford argues that it is the result of entrenched biases in institutions run by men. The final report of the Commission on a Gender-Equal Economy shows how unpaid and undervalued care work contribute to the gender pay gap, and proposes a series of measures to invest in social care and childcare which would enhance women's pay both directly and, by enabling more women to continue in paid work, indirectly. The Fawcett Society has conducted a comparative study of the gender pay gap in different countries. It finds that the UK approach is more light-touch than elsewhere, resulting in fewer incentives on organisations to improve women's pay.
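To make the headline figures concrete, here is a minimal sketch of the standard calculation, expressing each gap as a percentage of men's median hourly pay. The median figures used are hypothetical illustrations, not data from any of the sources cited above.

```python
# Gender pay gap as a percentage of men's median hourly pay.
# The medians below are hypothetical, chosen only to reproduce the
# headline rates quoted above (7% full-time, 15% all employees).

def pay_gap_percent(median_men: float, median_women: float) -> float:
    """Gap as a share of men's median hourly pay, in percent."""
    return (median_men - median_women) / median_men * 100

full_time = pay_gap_percent(median_men=16.00, median_women=14.88)
all_employees = pay_gap_percent(median_men=16.00, median_women=13.60)

print(f"Full-time gap: {full_time:.1f}%")        # Full-time gap: 7.0%
print(f"All-employee gap: {all_employees:.1f}%") # All-employee gap: 15.0%
```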
https://www.neweconomybrief.net/in-depths/gender-inequality
Press Statement by Michelle Bachelet in Jakarta, Indonesia. [Check against delivery]

Assalamualaikum warahmatullahi wabarakatuh. Good afternoon,

Thank you very much for being here this afternoon. This is my first visit to Indonesia as the UN Women Executive Director. It is a visit I have looked forward to very much. My sincere appreciation goes to the Government and the people of Indonesia for your hospitality and your warm welcome. I am inspired by your diversity, determination and democratic reform. We have much to learn from you and look forward to continuing and expanding our collaboration.

While here in Jakarta, I met with government officials, Islamic scholars, and members of society. Earlier today I participated in the Organization of the Islamic Conference meeting - its fourth ministerial conference on women and development. I stressed the importance of women's empowerment, dignity and equality. Advancing women's equality and empowerment offers real hope for our shared future. When women enjoy equal opportunity and participation, societies and economies grow healthier and stronger.

This year I have three priorities for UN Women: advancing women's political participation and leadership, expanding women's economic opportunities, and ending violence against women and girls.

Last year the General Assembly adopted a resolution on Women's Political Participation. The resolution calls on all countries to increase the number of women at all levels of political decision-making. Nations agree that this is essential to achieve equality, sustainable development, peace and democracy. Today women constitute half the world's population, yet they remain under-represented in positions of leadership. Women constitute just 20 percent of parliamentarians globally and 18 percent of the parliamentarians here in Indonesia. UN Women is a strong proponent of temporary special measures, such as quotas, to achieve at least 30 percent of women in parliament, in line with international agreements. We need more women leaders working alongside men to make societies economically, environmentally and socially sustainable.

Women also need equal economic opportunities. When women can participate fully in the economy, economic growth is higher, more inclusive and more sustainable. It is time to remove the barriers that hold women and economies back. Today women's wages represent between 70 and 90 per cent of men's wages in most countries, and women continue to face multiple burdens and discrimination at home and on the job. In Indonesia, female participation in the labour market is 51 percent, compared to 84 percent for men. While women's participation in formal employment is increasing, women are more likely to enter vulnerable employment with poor working conditions and a lack of welfare or social security benefits. By promoting equal pay for equal work, equal opportunity, and policies that reconcile family and work responsibilities, such as childcare, we can enable women to play their full role in the economy.

Increasing female labour-force participation could translate into huge economic potential for Indonesia. The United Nations estimates that limits on women's participation in the workforce across the Asia-Pacific region cost the economy an estimated US$89 billion every year. In fact, data from 135 countries in all regions show that empowering women and reducing gender inequality enhances productivity and economic growth.
Urgent action is also needed to end violence against women and girls, a problem that affects all countries around the world. UN Women is working with countries worldwide to prevent and end violence against women. Indonesia enacted a Domestic Violence Law in 2004. Stronger commitment is needed to provide services to victims and eliminate the culture of impunity and silence to ensure the Law is enforced. I thank the people and Government of Indonesia for your commitment to peace, democracy, justice and equality. Thank you. And I look forward to answering your questions.
http://www.unwomen.org/en/news/stories/2012/12/press-statement-by-michelle-bachelet-in-jakarta-indonesia
Training Notes: August-November 2019

The last Notes was back around the time of the Boone Gran Fondo. This update includes everything from that point until the end of November. I'm going to post training notes much less frequently from now on, just covering any broader developments and insights from each training cycle, and see how that goes.

The Fondo went well, but my fitness progression pretty much stalled for several weeks afterwards. My lactate threshold power failed to improve from August until the end of September, and my performance basically stagnated. In fact, the whole season didn't go as I had hoped. I managed to go through the entire Summer without being able to complete the full ride with the fast group in Savannah; I just didn't have the ability to handle the repeated surges. Given that last year was a totally different story (I only got dropped once, on my first attempt, just ten months after starting riding; by the end of that Summer I was actually one of the stronger riders in the unlimited group), I was struggling to determine what was going on.

At first I suspected a relative lack of anaerobic training was responsible; last year I was obsessed with short (30-60 second) Strava segments, and so did a very large amount of work in the anaerobic range. This year I was essentially rebuilding my aerobic base over the Summer, with intervals an afterthought [1]. But it wasn't lack of any specific work that was holding me back; it was my stubborn refusal to take enough time out for recovery. In fact, I wasn't getting close to the amount I needed. It's actually great that I have the time and motivation to do big training blocks, and this shouldn't (and isn't going to) change, but what did need a huge rethink was the length, frequency and intensity of my recovery periods.

One of the most basic exercise principles is that of build-overload-recovery. Within a training cycle you progressively increase volume and intensity until your body is pushed harder than it can currently tolerate, goes into overload, and performance declines due to an acute build-up of fatigue. At that point you (should!) stop and rest until you've properly recovered, during which period your body grows stronger than it was before (this is called supercompensation). Only then should you begin the next cycle. To keep progressing, you must have all these elements; waiting too long to begin the next cycle negates the effects of supercompensation, whereas beginning the next cycle too soon, during the recovery period (or having a recovery period that's not easy enough, or not having a recovery period at all), leads to excessive fatigue, overreaching and eventually even overtraining syndrome. Furthermore, sticking to the same kind of volume and intensity all the time rapidly leads to a training plateau and stagnation due to lack of overload and training monotony — another reason to have these build-peak-overload-recovery cycles.

That's the fundamental training conundrum. In my case, I have no problem either building up to a sufficiently tough level during a training block, or going out and training day after day. My big problem has been training too much. A lot of people need a coach to motivate them to work out harder and more often, whereas my coach (me) is constantly trying to stop me training. I know this perfectly well sitting at home, but then I get out on the bike and do everything I shouldn't. I think this time my need for rest may finally have sunk in, but then again I've been here before.
Maybe I need someone to lock up my bikes from time to time.

The good news is that for the last few weeks I really have reined it in, a lot. I had three relatively light weeks leading up to the final event of the year, Pedal Hilton Head, before starting the first of two planned winter endurance blocks. This block was tough: 16 days, 1613 km (1002 miles), including five consecutive 100-mile rides. Due to the easy time that preceded the block I was nice and fresh going in, and due to the nature of the block I was overloaded coming out. Afterwards, I immediately transitioned into a big recovery period: a rest week (very low volume and low intensity, which is ongoing as I type; I'm on my fourth consecutive day of total rest, and the difference already — to my mood, energy, sleep, appetite and sense of well-being — is amazing), to be followed by a low volume, moderate intensity 'Free' week, after which I should be fresh again, ready for a second endurance block. This is the kind of thing I want to be doing: big, hard training blocks followed by easy recovery periods that go until I'm fully recharged. And my training plan, which was in constant flux for months as I vainly tried various tweaks to the individual workouts, has now settled into a very different shape from what it was, reflecting this need for recovery and recuperation.

This year (before I hit the brakes) I was headed for 900 hours of cycling. Since I backed off I'll actually be a little short of this, but should still end up having covered over 15,000 miles. That's a lot of riding. With my totally revamped plan, next year I'm looking at just under 750 hours. The 150-hour reduction comes entirely from the increased frequency of the far, far easier recovery periods; my build blocks will be just as tough as they have been. I know I can tolerate this kind of training load because it took at least six or seven consecutive big weeks in the middle of the Summer before things started to go backwards. And as you're about to see, I no longer go anywhere near that long without a recovery week.

Let's do a brief run-through of each macrocycle, starting with the Post-Season. This comes off the In-Season, usually around the first week in October. I immediately begin with two unstructured Free weeks followed by four Rest weeks, to shake off the mental and physical fatigue built up over the year. This is new to me, but crucial. Its most important function is to restore hormonal balance — basically to give my adrenal glands a break. The remainder of this cycle is a transitional period, consisting of the first endurance block of the winter. This is mostly low intensity, with maybe a couple of efforts on Saturdays and some strength work on the midweek Endurance+ rides. This should prepare me for the real endurance block that's coming up next.

The Off-Season follows a very light Recovery week at the end of the Post-Season, and runs from December to early January. The four-week build block increases the intensity a little more with the addition of under/over efforts to the Wednesday endurance rides and super-long rides to end each week. Finishing with a hefty 26-hour week should certainly ensure I achieve overload! By this point I'll have fully restored my aerobic base, ready to pound out some intervals in the Pre-Season.

The extra-light Recovery at the end of the Off-Season will allow me to start the Pre-Season extra fresh, which is vital as the intensity now ramps up much more, without much reduction in volume.
There are three short mesocycles; the first focuses on twice-weekly 8-minute interval sessions, the second on VO2 Capacity intervals at still greater intensity, and the final one is a whole lot of anaerobic intervals. Strength and power training also get increased emphasis. The first two cycles have a 2-week build. In the first I go easier on the Saturday group rides (it's only January to early February, so that's not a problem); Tuesdays and Thursdays are where the intensity is concentrated. This changes in the second cycle as we get closer to March: tougher Saturday rides in addition to the midweek intervals. The third cycle is just one week long, but has five straight days of anaerobic intervals. I've never tried this so-called block periodization before, but I think it's worth experimenting with as a one-off, as there is some published research showing it can work well. We shall see. As you've guessed by now, an easy Recovery week is added at the end of each of the three mesocycles. The mesocycles are also shorter, since the increased intensity should lead to a much faster overload.

All the above should see me arrive at the main season in great shape, but still ready to go. It is disastrous to get to this point already fatigued, even to the point of injury (as happened to me in the Spring earlier this year). It would be much better to err on the side of being a little undertrained, as there's plenty of opportunity to catch up during the late Spring and early Summer.

The modular In-Season plan covers late March until the end of September. The mesocycles vary in length, since they're built around specific Events; Build weeks are moved around as necessary to fit the schedule. Here I do my highest-intensity intervals workout during the week, and also have the toughest Saturday group rides. All my other rides are at low intensity; I'll no longer be attempting high-intensity work any more frequently than this during the main season.

I'll generally be targeting 2-3 key events during the season. For these I'll do a full taper, so I'll be at my freshest on the day. If there are any other events I want to enter, I'll just work them into the regular plan in place of weekend rides. For example, there is a local criterium series with about 6 races over the course of the Summer. I'll probably enter a couple of these, but they're not really very important to me. So in these cases, I could add one to the weekend of a Recovery week, following the 3-week Build period, before starting the next mesocycle with Build 1. For more important events, however, I'll do a proper 2-week taper. So after Build 3 I'll do the Taper and Event weeks before the Recovery week. I've got a few ideas for good events to do, but so far the only one on my calendar is at the beginning of May, when I'll be heading back to the UK for a two-week holiday. A big weekend should ensue. I may also pay a visit (or two) to Mount Mitchell later in the Summer, which would definitely also be worth taking seriously.

I'm expecting these changes (a reduced number of high-intensity sessions during build phases of the In-Season, reduced volume during more frequent recovery periods, and greater flexibility concerning both these factors, based on the overarching principle of build-overload-recovery) to lead to big performance improvements next year and beyond. I'm sure there'll be further tweaking and refining of the details, but I think I now have a good overall approach.

What made me see the light?
Eventually, the decline in my performance this year became so large and so consistent that it was undeniable even to me. I had the occasional glimpse of what I can do if I get my training balance right, but overall I was shockingly bad. And this despite an improvement of over 50 Watts in my lactate threshold just between June and November (which is at least some good that I can take out of this; the year wasn't entirely wasted, and I've now got a big base on which to build)! 2020 could be very good indeed, if I can stay balanced in my training by getting off the bike occasionally. I may even have the time and energy to write about it.

Time: 310 hours (17.7 hours/week)
Distance: 8,990 km (513 km/week)
Start FTP: 258 Watts (1st August)
End FTP: 302 Watts (30th November)

As ever, if you want more detail you can follow me on Strava, and you can also see my full Training Plan.

[1] This did change starting in mid-September; I'd been experimenting with various intervals protocols and eventually found one that worked well. I call it Sprint Repeats, and it's simply 4 sets of 6 repetitions of 15 seconds flat out followed by 30 seconds recovery. One of the problems I'd been having with longer intervals, e.g. 4-minute ones, was incorrect pacing. Specifically, I've tended to go out much too hard and fade badly later on. With the Sprint Repeats this isn't a concern; I'm either going as hard as possible or recovering. The nature of them also means all energy systems get a good workout – ATP-PCr during each acceleration, glycolysis in the later reps, and the aerobic system between sets. Of course, I need to fix the problems with going off too hard on longer efforts, and in fact I've already been working on this. My most important realization was that perceived exertion should retain primacy, even with power data available. Perceived exertion tells the experienced athlete what they need to know. Power and heart rate are useful additions, but both vary from day to day, so they must remain subservient to RPE. Another reason I wasn't often able to get through my longer steady-state intervals was my fatigue, discussed in the rest of the article.
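To make the session structure concrete, here's a minimal sketch of Sprint Repeats as described above; the five-minute easy spin between sets is an assumed figure, since the write-up doesn't specify set-to-set recovery.

```python
# Sprint Repeats: 4 sets of 6 reps of 15 s flat out / 30 s easy, as
# described in the footnote. The 5-minute easy spin between sets is an
# assumed figure for illustration; the session description doesn't
# specify it.

SETS, REPS = 4, 6
WORK_S, EASY_S = 15, 30
BETWEEN_SETS_S = 5 * 60  # assumption

def sprint_repeats():
    """Return the session as a list of (phase, seconds) steps."""
    steps = []
    for s in range(SETS):
        for _ in range(REPS):
            steps += [("sprint", WORK_S), ("easy", EASY_S)]
        if s < SETS - 1:  # no extra spin after the last set
            steps.append(("between-sets spin", BETWEEN_SETS_S))
    return steps

total_s = sum(sec for _, sec in sprint_repeats())
work_s = sum(sec for phase, sec in sprint_repeats() if phase == "sprint")
print(f"{total_s / 60:.0f} min total, {work_s / 60:.0f} min of sprinting")
# -> 33 min total, 6 min of sprinting
```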
https://trainingnotes.ianbgibson.com/training-notes-august-november-2019
TECHNICAL FIELD

The present invention relates, in general, to a colorless and transparent antibiotic material including silver and a method of preparing the same. More particularly, the present invention relates to an antibiotic material composed mainly of silver, which is harmless to the human body, exhibits antibiotic and disinfecting effects, is colorless and transparent, and is stable to light, including ultraviolet (UV) light, thus not becoming colored or discolored upon the preparation of antibiotic goods using the same, and to a method of preparing such an antibiotic material.

BACKGROUND ART

Generally, silver (Ag) is a metal component that exhibits antibiotic and disinfecting activities, and in particular, silver nanoparticles manifest superior antibiotic activity against pathogenic substances, such as bacteria or viruses, and cause no side effects in the human body. Further, silver nanoparticles have a small particle size, and thus prevent the generation of cracks upon the preparation of antibiotic goods. Hence, silver is widely used to manufacture fiber goods, such as clothes, bedclothes, shoe materials, etc., industrial goods, such as packaging materials, nonwoven fabrics, filters, adhesives, etc., and living goods, such as antibiotic sprays, functional cosmetics, etc.

Conventionally, an antibiotic material including silver, which is used for antibiotic goods, such as fibers or nonwoven fabrics, is based on a calcium phosphate support, a zirconium phosphate support, or a zeolite support, each of which has a metal ion, that is, a silver ion, substituted therein. Of these antibiotic materials, an antibiotic material based on the zeolite support has been widely developed. In regard to the antibiotic material including silver, Korean Patent Application No. 1998-0012320 discloses a method of preparing antibiotic zeolite having high resin transparency and low water adsorption. In addition, Korean Patent Application No. 2002-0054807 discloses a method of preparing a silver nanoparticle/organic polymeric composite using radioactive rays and a silver nanoparticle/organic polymeric composite prepared using the method, and Korean Patent Application No. 2002-0055186 discloses a synthetic resin and silicone containing silver nanoparticles.

However, since a conventional antibiotic material including silver further comprises a surfactant, which functions as a protective colloid to inhibit the agglomeration of silver particles, and a reducing agent, which is necessary for the reduction of a metal salt, upon the preparation of the silver nanoparticles, the final antibiotic product including silver shows colors, for example, blue, yellow, brown, etc., owing to the use of such additives. Further, when the silver nanoparticles are prepared using a microemulsion process or a polyol process, various additives should be used to easily disperse the silver particles, and thus, antibiotic goods have colors. In addition, such a conventional antibiotic material including silver nanoparticles is blackened, as represented by the following reaction, when exposed to light, including UV light.
2Ag⁺ —(light)→ 2Ag —(O₂)→ Ag₂O (black)

Therefore, in order to solve the above problem, in the case where fibers or nonwoven fabrics are manufactured to have antibiotic activities, a colorless and transparent antibiotic material including silver, which does not change color and does not become discolored when exposed to light, such as UV light, is urgently required.

DISCLOSURE

Technical Problem

Accordingly, the present invention has been made keeping in mind the above problems occurring in the related art, and an object of the present invention is to provide a method of preparing an antibiotic material including silver (Ag), which is harmless to the human body, exhibits antibiotic and disinfecting activities, and is colorless and transparent, and an antibiotic material including silver thus prepared. Another object of the present invention is to provide a method of preparing an antibiotic material including silver (Ag), which resists discoloration due to the reaction between a silver ion and light when exposed to light, including UV light, and an antibiotic material including silver thus prepared.

Technical Solution

In order to achieve the above objects, the present invention provides a method of preparing a colorless and transparent antibiotic material including silver, comprising: a) reacting a salt including a silver ion (Ag⁺) with a salt including a sulfate anion, to prepare a silver (Ag)-sulfate complex; and b) diluting the silver (Ag)-sulfate complex prepared in a) with water. In addition, the present invention provides a colorless and transparent antibiotic material including silver (Ag), prepared using the above method.

ADVANTAGEOUS EFFECTS

The present invention provides a method of preparing an antibiotic material including silver (Ag), and an antibiotic material including silver thus prepared. According to the present invention, since the antibiotic material is composed mainly of silver, it is harmless to the human body and exhibits disinfecting and antibiotic activities. As well, unlike conventional silver-based antibiotic materials, the antibiotic material of the present invention is colorless and transparent, and thus, antibiotic goods manufactured using the antibiotic material of the present invention have no color problems. In addition, even if the antibiotic material of the present invention is exposed to light, including UV light, it is stable and does not undergo deterioration, for example, discoloration resulting from easy formation of black oxide, such as Ag₂O, upon exposure to such light.

BEST MODE

Hereinafter, a detailed description will be given of the present invention.

Leading to the present invention, intensive and thorough research into antibiotic materials having excellent antibiotic activities, being colorless and transparent, and being stable to light, including UV light, aiming to avoid the problems encountered in the related art, resulted in the finding that a salt including a silver ion (Ag⁺) may be reacted with a salt including a sulfate anion to prepare a predetermined complex, which can be confirmed to exhibit sufficient antibiotic activity, be colorless and transparent, and be stable to light, including UV light.

In the present invention, a method of preparing a colorless and transparent antibiotic material including silver is provided, which comprises a) reacting a salt including a silver ion (Ag⁺) with a salt including a sulfate anion, to prepare a silver (Ag)-sulfate complex; and b) diluting the silver (Ag)-sulfate complex prepared in a) with water.
The salt including a silver ion (Ag⁺) used in a) is not particularly limited in the present invention, as long as it may provide a silver ion (Ag⁺) in a reaction solvent. In particular, such a salt is preferably silver nitrate (AgNO₃) or silver acetate (AgCH₃COO). Likewise, the salt including a sulfate anion is not particularly limited in the present invention, as long as it may provide a sulfate anion in a reaction solvent. Such a salt is preferably selected from the group consisting of sodium sulfate, sodium thiosulfate, sodium pyrosulfate, sodium sulfite, sodium pyrosulfite, potassium sulfate, potassium thiosulfate, potassium pyrosulfate, potassium sulfite, potassium pyrosulfite, ammonium sulfate, ammonium thiosulfate, ammonium sulfite, and mixtures thereof.

In addition, the silver (Ag)-sulfate complex obtained in a) may be prepared by reacting the salt including a silver ion (Ag⁺) with the salt including a sulfate anion in a reaction solvent. As such, the silver (Ag)-sulfate complex is preferably silver thiosulfate. Further, the reaction solvent is water or an organic solvent, and is preferably water. Specifically, the preparation of the silver (Ag)-sulfate complex in a) is preferably conducted by mixing the aqueous solution of the salt including a silver ion (Ag⁺) with the aqueous solution of the salt including a sulfate anion, and then aging the mixture at 60~80°C for a time ranging from 30 min to 2 hr. When the aging process is carried out in the above range, the resultant complex is highly stable to light, including UV light, and to the ultrapure water used for dilution in b).

Preferably, the aqueous solution of the salt including a silver ion (Ag⁺) has a concentration of 0.1~1.0 mol/L, and the aqueous solution of the salt including a sulfate anion has a concentration of 1.0~5 mol/L. The silver (Ag)-sulfate complex may have an average particle size of 10 nm or less. More preferably, the aqueous solution of the salt including a silver ion (Ag⁺) has a concentration of 0.3~0.5 mol/L, and the aqueous solution of the salt including a sulfate anion has a concentration of 3~5 mol/L.

Subsequently, the silver (Ag)-sulfate complex prepared in a) is diluted with water to a desired concentration. Thereby, a colorless and transparent antibiotic material including silver may be obtained at a desired concentration. At this time, ultrapure water, which is passed through an ion exchange resin and then undergoes tertiary distillation using a distillation apparatus, is preferably used to decrease the influence of impurities.

The antibiotic material including silver has excellent antibiotic activity and light stability when the content of the silver (Ag)-sulfate complex formed in a) is 1~10,000 ppm. If the above content is less than 1 ppm, antibiotic and disinfecting activities become insignificant. On the other hand, if the content exceeds 10,000 ppm, a blackening phenomenon may occur in a short time upon exposure to light, including UV light.

In the method of the present invention, when the silver (Ag)-sulfate complex is prepared in a), a salt including a metal having antibiotic activity may be additionally used along with the salt including a silver ion (Ag⁺). The salt including a metal having antibiotic activity, which may be additionally used, is not particularly limited in the present invention, as long as it is able to provide a cation salt of copper, nickel, platinum, palladium, or ruthenium, each of which has antibiotic or deodorizing activities.
Preferably, the above salt is exemplified by a water-soluble metal salt, including copper nitrate (Cu(NO₃)₂), copper acetate (Cu(CH₃COO)₂), copper chloride (CuCl₂), nickel nitrate (Ni(NO₃)₂), nickel acetate (Ni(CH₃COO)₂), nickel sulfate (NiSO₄), chloroplatinic acid (H₂PtCl₆), hydrogen tetrachloroaurate (HAuCl₄) or palladium chloride (PdCl₂); an organic solvent-soluble metal salt, including acetylacetonates, such as nickel acetylacetonate or copper acetylacetonate; or a hydrolyzable metal salt, including alkoxides, such as nickel ethoxide or copper ethoxide, depending on the type of the reaction solvent. Preferably, the maximum amount of the additionally used metal salt is 20 mol %, based on the total amount of cations including a silver ion and a metal ion, in consideration of disinfecting power and average particle size.

In addition, the present invention provides a colorless and transparent antibiotic material including silver, prepared using the above method. The antibiotic material including silver contains 10~10,000 ppm of the silver (Ag)-sulfate complex, and preferably 100~1,000 ppm. Further, in the antibiotic material including silver of the present invention, the silver (Ag)-sulfate complex preferably has an average particle size not exceeding 10 nm. If the average particle size exceeds 10 nm, light stability is worsened, and thus, the antibiotic material may become discolored. Thus, the average particle size is preferably 2~5 nm, and more preferably 3.5~4.5 nm, with a standard deviation of 5 Å or less.

MODE FOR INVENTION

Hereinafter, the present invention is specifically explained using the following examples, which are set forth to illustrate, but are not to be construed to limit, the present invention.

EXAMPLE 1

A 0.4 mol/L aqueous silver nitrate (AgNO₃) solution was mixed with a 1.5 mol/L aqueous sodium sulfite solution and then with 3 mol/L sodium thiosulfate, and thereafter was allowed to react at 60~80°C for 1 hr, to prepare a silver-sulfate complex. The silver-sulfate complex was diluted with ultrapure water, which had been passed through an ion exchange resin and then undergone tertiary distillation using a distillation apparatus, so that the solid content of the silver-sulfate complex was 1 wt %, to prepare a final colorless and transparent antibiotic solution including silver. According to a particle size analysis, the antibiotic solution including silver thus prepared had an average particle size of 3.9 nm and a standard deviation of 4 Å.

The antibiotic solution thus prepared was assayed for bacterial reduction according to a pressurization close adhesion method. The bacterial reduction was measured in such a way that test strains (Escherichia coli ATCC 25922, Staphylococcus aureus ATCC 6538, and Salmonella typhimurium KCTC 1925) were static-cultured on the antibiotic solution including silver, having a surface area of 60 cm², at 25°C for 18 hr, followed by counting the number of cells. The results are given in Table 1 below.

TABLE 1

| Strain | Sample | Immediately After Contact | After Culture for 18 hr | Bacterial Reduction (%) |
|---|---|---|---|---|
| E. coli ATCC 25922 | Blank | 1.5 × 10° | 6.9 × 10° | — |
| E. coli ATCC 25922 | Coating in Ex. 1 | 1.5 × 10° | <10 | 99.9 |
| S. aureus ATCC 6538 | Blank | 1.4 × 10° | 6.7 × 10° | — |
| S. aureus ATCC 6538 | Coating in Ex. 1 | 1.4 × 10° | <10 | 99.9 |
| S. typhimurium KCTC 1925 | Blank | 1.2 × 10° | 5.9 × 10° | — |
| S. typhimurium KCTC 1925 | Coating in Ex. 1 | 1.2 × 10° | <10 | 99.9 |

* Blank: No antibiotic solution including silver.
As is apparent from Table 1, the antibiotic solution including silver of Example 1 was quite different in bacterial reduction from the blank. From this result, the antibiotic solution including silver of Example 1 was confirmed to exhibit excellent antibiotic activity.

EXAMPLE 2

The present example was conducted under the same test conditions as in Example 1, with the exception that an antibiotic solution in which the solid content of the silver-sulfate complex was 0.01 wt % (100 ppm) was prepared. The results are given in Table 2 below.

TABLE 2

| Strain | Sample | Immediately After Contact | After Culture for 18 hr | Bacterial Reduction (%) |
|---|---|---|---|---|
| E. coli ATCC 25922 | Blank | 1.4 × 10° | 5.9 × 10° | — |
| E. coli ATCC 25922 | Coating in Ex. 2 | 1.4 × 10° | <10 | 99.9 |
| S. aureus ATCC 6538 | Blank | 1.5 × 10° | 6.5 × 10° | — |
| S. aureus ATCC 6538 | Coating in Ex. 2 | 1.5 × 10° | <10 | 99.9 |
| S. typhimurium KCTC 1925 | Blank | 1.6 × 10° | 7.0 × 10° | — |
| S. typhimurium KCTC 1925 | Coating in Ex. 2 | 1.6 × 10° | <10 | 99.9 |

* Blank: No antibiotic solution including silver.

As is apparent from Table 2, the antibiotic solution including silver of Example 2 was quite different in bacterial reduction from the blank. From this result, the antibiotic solution including silver of Example 2 was confirmed to exhibit excellent antibiotic activity.

In addition, the antibiotic solution of Example 1, in which the solid content of the silver-sulfate complex was 1 wt % (10,000 ppm), and the antibiotic solution of Example 2, in which the solid content of the silver-sulfate complex was 0.01 wt % (100 ppm), were subjected to a discoloration test when exposed to solar light and also when placed in an indoor room over time. The results are given in Table 3 below.

TABLE 3

| Condition | Solution | Immediately After Synthesis | After 10 Days | After 20 Days | After 30 Days | After 50 Days |
|---|---|---|---|---|---|---|
| Blank (Darkroom) | Ex. 1 | No Change | — | — | — | — |
| Blank (Darkroom) | Ex. 2 | No Change | — | — | — | — |
| Indoor Means (Fluorescent Lamp) | Ex. 1 | No Change | — | — | — | — |
| Indoor Means (Fluorescent Lamp) | Ex. 2 | No Change | — | — | — | — |
| Outdoor Means (Solar Light) | Ex. 1 | No Change | — | Light Black | Dark Black | Complete Black |
| Outdoor Means (Solar Light) | Ex. 2 | No Change | — | — | — | — |

* Blank: Solution was loaded into a brown bottle and stored in a dark cold space.

As is apparent from Table 3, the antibiotic solutions of Examples 1 and 2 did not discolor for up to 30 days in the room. After 50 days, it was confirmed that only the solution of Example 1 (solid content: 10,000 ppm) became discolored to light black, while the solution of Example 2 (solid content: 100 ppm) remained colorless and transparent without a change of color. Further, when the antibiotic solutions of Examples 1 and 2 were exposed to solar light, they did not discolor for up to 10 days. After 20 days, the solution of Example 1 became gradually discolored, and it was completely blackened after 50 days. However, the solution of Example 1 was still considered remarkably stable to light, compared to conventional antibiotic solutions including silver, which discolored immediately after exposure to solar light. Moreover, it was noted that the color of the solution of Example 2 did not change even when its exposure to solar light exceeded 50 days.

INDUSTRIAL APPLICABILITY

As described hereinbefore, the present invention provides a method of preparing an antibiotic material including silver (Ag), and an antibiotic material including silver (Ag) prepared using the method.
According to the present invention, since the above antibiotic material is composed mainly of silver, it is harmless to the human body and exhibits disinfecting and antibiotic activities. As well, unlike conventional silver-based antibiotic materials, the antibiotic material of the present invention is colorless and transparent, and thus, antibiotic goods manufactured using the antibiotic material of the present invention have no color problems. In addition, even if the antibiotic material of the present invention is exposed to light, including UV light, it is stable and does not cause side effects, for example, discoloration resulting from easy formation of black oxide, such as Ag₂O, upon exposure to such light.

Although the preferred embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the invention as disclosed in the accompanying claims.
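The dilution step in b) is simple mass arithmetic: 1 wt % of complex corresponds to 10,000 ppm (Example 1) and 0.01 wt % to 100 ppm (Example 2). The sketch below illustrates that arithmetic only; it is not part of the disclosed preparation method.

```python
# Illustrative dilution arithmetic for step b); not part of the patented
# method itself. 1 wt % solids = 10,000 ppm, so Example 1 is 10,000 ppm
# and Example 2 is 100 ppm.

def water_to_add_g(stock_g: float, stock_ppm: float, target_ppm: float) -> float:
    """Grams of ultrapure water to add to reach target_ppm (simple mass dilution)."""
    if not 0 < target_ppm <= stock_ppm:
        raise ValueError("target must be positive and below the stock concentration")
    return stock_g * (stock_ppm / target_ppm - 1)

# Taking 100 g of the 10,000 ppm solution of Example 1 down to the
# 100 ppm level of Example 2 requires ~9,900 g of water:
print(water_to_add_g(100, 10_000, 100))  # 9900.0
```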
The overall objective of this proposal is to develop a tomographic system which will allow accurate measurements of radiotracer biodistributions. We plan to concentrate on regional cerebral blood flow measurements using N-isopropyl I-123 p-iodoamphetamine and must, therefore, overcome the problems inherent in using I-123 labeled compounds. This will require alterations in our Harvard scanning multidetector emission computed tomography system, in our opinion the most promising instrument for single photon emission computed tomography. Three of these systems are currently in our laboratories. Our specific aims include: improving the Harvard scanning multidetector tomographic brain system by increasing angular sampling, improving the collimators and improving attenuation correction; comparing the multidetector system with the rotating gamma camera through performance testing; determining the accuracy with which we can quantitatively measure activity distributions of I-123 in computer simulations of the regional variations in activity anticipated in the brain; comparing the accuracy of regional cerebral blood flow measurements using the percent injected dose of isopropyl I-123 iodoamphetamine with the modified Saperstein method introduced by Kuhl et al.; determining the effect of agonists to the various brain receptors on the uptake of the radiotracer in the brain; and determining the extent of altered perfusion in patients with focal cerebral ischemia and with evolving and completed stroke. The latter aspect of the study will be particularly important because it will determine the effectiveness of medical and surgical managements aimed at salvaging reversibly ischemic cerebral tissue. In addition, we will determine the prognostic importance of cerebral blood flow measurements early in the course of acute cerebral ischemia and infarction.
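As a point of reference for the percent-injected-dose measure mentioned above, the sketch below shows the basic calculation; all numerical inputs are hypothetical, and the decay correction is a standard adjustment assumed for illustration rather than a detail taken from this proposal (I-123 has a physical half-life of about 13.2 hours).

```python
import math

# Minimal sketch of a percent-injected-dose (%ID) calculation for an
# I-123 tracer. All inputs are hypothetical; decay correction back to
# injection time is assumed here for illustration.

I123_HALF_LIFE_H = 13.2  # approximate physical half-life of I-123, hours

def percent_injected_dose(region_mbq: float, injected_mbq: float,
                          hours_post_injection: float) -> float:
    """%ID in a region, decay-corrected to the time of injection."""
    decay = math.exp(-math.log(2) * hours_post_injection / I123_HALF_LIFE_H)
    return 100.0 * (region_mbq / decay) / injected_mbq

# Hypothetical example: 1.8 MBq measured in a brain region 1 h after
# injecting 185 MBq corresponds to roughly 1 %ID.
print(f"{percent_injected_dose(1.8, 185.0, 1.0):.2f} %ID")  # ~1.03 %ID
```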
How To Address 4 Common STEM Workforce Challenges HR is challenging at the best of times, but HR professionals within the science, technology, engineering and math (STEM) industries face unique challenges. In particular, it can be difficult to find skilled, diverse employees, retain them, and then plan for their succession. Luckily, HR professionals in STEM don’t have to do it alone. There are a host of tools and resources to support HR professionals and managers in their efforts. Here’s how. Finding a Skilled Workforce Whether there is a shortage of workers with the appropriate skills and qualifications to work in STEM is hotly debated, but if you’re having trouble filling positions, there are a few things you can do. - Make recruiting easier. Recruitment software and services can simplify the hiring process, making it easier to find better candidates. Our clients are pleased with how much time they save by using a recruitment system — and also with the quality of the candidates they are able to find. - Reevaluate the qualifications required for the job. Do your education requirements truly fit the requirements of the job? A candidate with years of relevant experience may be more qualified than a recent graduate with a degree. Or a two-year degree, rather than a four-year degree, may suffice as a qualification. Think about what the necessary skills are and hire for those, rather than for a particular type of education background. - Offer on-the-job training. Sometimes, if you want something done right, you’ve got to do it yourself. If you’re finding that candidates and new hires consistently lack particular skills, offer on-the-job training to address this skills gap. (This has the added bonus of making your company a more attractive place to work.) Culture and Diversity In 2017, women drew attention to the challenges they face in being accepted in traditionally male industries like tech. Susan Fowler’s public account of the harassment and sexism she experienced as an engineer at Uber led to the firing of 20 Uber employees and the eventual resignation of CEO Travis Kalanick. This was called a “watershed” moment for women in tech. A 2017 BCG survey found that 30% of women believed their workplace culture was a barrier to gender diversity, compared to 18% of men. Cases like Fowler’s demonstrate that many STEM companies have both a pipeline and a culture problem in establishing diverse workplaces. Together, these challenges pose a big problem for companies. Even when HR professionals work hard to hire employees from underrepresented groups, a workplace that does not treat them respectfully will not be able to keep them for long. Indeed, women engineers are twice as likely as men to leave a company. There are numerous HR tools to assist with attracting and hiring diverse employees, but retaining them is just as important. According to BCG, women say the following are the most effective in realizing gender diversity: - Increasing the visibility of women role models and leaders. - Empowering men to support gender diversity. - Supporting women at important moments in their lives, for instance, allowing flexible work arrangements when returning from maternity leave. Such policies benefit all employees, not only women. New fathers can also benefit from flexible work arrangements. And they can be adapted to support employees from minority groups, too. For instance, increasing the visibility of minority role models can facilitate an environment more welcoming of underrepresented groups in general. 
And once you’ve created a welcoming and inclusive company culture, tools like the BirdDogHR Applicant Tracking System can help you present that culture to potential candidates, creating a feedback effect to attract more diverse candidates. Retention Retention can be a real challenge, and not just among women and people of color. It’s such a problem that the Society for Human Resource Management predicts that by 2022, employee retention will be HR’s most significant challenge. Beyond strengthening workplace culture, STEM workplaces can improve employee retention rates by implementing robust onboarding processes. According to a 2015 Equifax study, more than half of people who left a job in the previous year left that job within their first year of employment. Because turnover is particularly high among new employees, onboarding is one of the most important ways to improve employee retention rates. If you have an existing onboarding process, evaluate whether it is doing enough for your company. And consider implementing onboarding software to automate and simplify the onboarding process. Succession Planning Which employees will hold key positions in your company in five years? Ten? If you’re not sure, you’re not alone. Only 12% of companies have a succession plan, and succession planning is a weak spot of many STEM companies. This can cause problems if a key employee is suddenly incapacitated or leaves the company. It can also contribute to a retention problem, as 78% of employees say they would remain with their employers longer if they knew there was potential for advancement within their organization. Create a succession plan to ensure your business-critical positions remain staffed, and ensure your company evaluates employee performance with a view towards advancement. Performance management tools can identify employees ready for a promotion or for an executive-track position. Today is the perfect time to face your workforce challenges. Implement tools and policies to address your challenges in finding a skilled workforce, hiring diverse employees, retaining your employees and planning for their succession.
https://blog.birddoghr.com/2018/1/how-to-address-4-common-stem-workforce-challenges
Cite this article as: Tomokiyo A, Hamano S, Hasegawa D, Sugii H, Yoshida S, Maeda H. Prospects for the Application of Neural Crest Cells for the Periodontal Therapy. J Dent Oral Biol. 2017; 2(15): 1091. Abstract Periodontal tissues are predominantly formed by ecto-mesenchymal cells derived from the neural crest during embryonic development. Neural Crest Cells (NCCs), a transient multipotent stem cell population that plays crucial roles in tissue development, have been regarded as highly promising candidates for periodontal tissue regeneration. Our previous study demonstrated the establishment of a multipotent clonal Periodontal Ligament (PDL) cell line termed cell line 1-17 that showed NCC phenotypes. In addition, our studies also reported the generation of Neural Crest-Like Cells (NCLCs) from human PDL-derived Induced Pluripotent Stem Cells (iPSCs). This article discusses the application of cell line 1-17 and human PDL iPSC-derived NCLCs for the study of clinical periodontal therapy. Introduction Periodontitis, a chronic inflammatory condition of the periodontal tissues caused by bacterial infections, leads to common clinical symptoms including extensive connective tissue destruction and alveolar bone loss. Therefore, severely advanced periodontitis eventually results in tooth loss. The ultimate goal of periodontal therapy is to regenerate the healthy and functional periodontal tissues destroyed by periodontitis. Stem cell populations have been considered essential for tissue development and regeneration because of their special properties: self-renewal and multipotency. Neural crest cells (NCCs), a transient stem cell population that derives from the neural crest, contribute to the formation of diverse cell lineages and structures in periodontal tissues; they differentiate into ecto-mesenchymal cells and give rise to various tissues including alveolar bone, cementum, and periodontal ligament (PDL). Given the principal role NCCs have in periodontal tissue development, they are a highly promising candidate for application to periodontal therapy. NCCs are present in the human embryo and in several adult tissues; however, their number is extremely small. The rarity of human NCCs prevents their application for the study of regenerative medicine. Therefore, we aimed to establish a human cell line that possesses NCC phenotypes and to generate neural crest-like cells (NCLCs) from human PDL-derived induced pluripotent stem cells (iPSCs). Establishment of Human Multipotent Periodontal Ligament Cell Line with Neural Crest Cell Phenotypes Human PDL cells isolated from a healthy third molar of a 20-year-old female were immortalized by using simian virus 40 T-antigen and human telomerase reverse transcriptase transfection. Following limiting dilution, we obtained 20 clonal PDL cell lines and investigated the characteristics of one line termed cell line 1-17. This line showed the potential to differentiate into osteoblasts, chondrocytes, adipocytes, and neurocytes. It also revealed high expression of mesenchymal stem cell-related cell surface markers including CD13, CD29, CD44, CD71, CD90, CD105, and CD166, and the pluripotency genes OCT4 and Nanog. These results suggested the stem cell phenotypes of cell line 1-17. This line also exhibited NCC phenotypes; it highly expressed the neural crest marker genes SLUG, SOX10, NESTIN, p75NTR, and CD45d. In addition, the conditioned medium from cell line 1-17 promoted neural differentiation of neural progenitors.
This result is consistent with the previous study reporting the ability of conditioned medium from NCCs to induce neurite outgrowth of neural cells. Therefore, cell line 1-17 would provide an innovative tool to clarify the behavior of NCCs during healing processes of periodontal ligament tissues. Generation of Neural Crest-Like Cells from Human Induced Pluripotent Stem Cells Induced pluripotent stem cells (iPSCs) are one of the most promising stem cells for regenerative therapy because they are generated from somatic cells and show high multipotency and self-renewal capabilities. However, iPSCs carry the risk of tumor formation because of the insertion of a tumorigenic gene. A previous study reported the successful generation of iPSC-derived NCLCs that did not form any tumors after transplantation. This result suggested that neural crest lineage-committed iPSCs had no tumorigenic potential in vivo. Moreover, we tried to generate NCLCs that closely resembled the phenotypic and functional hallmarks of NCCs. iPSCs derived from human PDL (PDL iPSCs) were used for our study because the PDL originates from the neural crest and epigenetic memory of the somatic tissue of origin persists in iPSCs. We sorted the HNK-1 positive population from PDL iPSC-derived NCLCs because HNK-1 expression has been identified in premigratory and migrating NCCs. These cells revealed higher expression of NCC marker genes and a greater capacity to differentiate into neural crest lineage cells than the HNK-1 negative population from PDL iPSC-derived NCLCs, as well as NCLCs generated from non-neural crest tissue-derived iPSCs. This result suggested that the HNK-1 positive population from PDL iPSC-derived NCLCs was enriched for cells with the characteristics of NCCs and could help to establish a new periodontal therapy based on NCC transplantation. Conclusion Cell line 1-17 and/or the HNK-1 positive population from PDL iPSC-derived NCLCs would overcome the rarity of human NCCs and may be used as a valuable and unlimited cell source for the study of regenerative medicine. Further analyses based on a molecular biological approach are required to establish a new NCC-based periodontal therapy. References - Fujii S, Maeda H, Wada N, Kano Y, Akamine A. Establishing and characterizing human periodontal ligament fibroblasts immortalized by SV40T-antigen and hTERT gene transfer. Cell Tissue Res. 2006;324(1):117-25. - Tomokiyo A, Maeda H, Fujii S, Wada N, Shima K, Akamine A. Development of a multipotent clonal human periodontal ligament cell line. Differentiation. 2008;76(4):337-47. - Tomokiyo A, Maeda H, Fujii S, Monnouchi S, Wada N, Kono K, et al. A multipotent clonal human periodontal ligament cell line with neural crest cell phenotypes promotes neurocytic differentiation, migration, and survival. J Cell Physiol. 2012;227(5):2040-50. - Li M, Liu JY, Wang S, Xu H, Cui L, Lv S, et al. Multipotent neural crest stem cell-like cells from rat vibrissa dermal papilla induce neuronal differentiation of PC12 cells. Biomed Res Int. 2014;2014:186239. - Takahashi K, Yamanaka S. Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors. Cell. 2006;126(4):663-76. - Otsu K, Kishigami R, Oikawa-Sasaki A, Fukumoto S, Yamada A, Fujiwara N, et al. Differentiation of induced pluripotent stem cells into dental mesenchymal cells. Stem Cells Dev. 2012;21(7):1156-64. - Betters E, Liu Y, Kjaeldgaard A, Sundstrom E, Garcia-Castro MI. Analysis of early human neural crest development. Dev Biol. 2010;344(2):578-92. - Tomokiyo A, Hynes K, Ng J, Menicanin D, Camp E, Arthur A, et al. Generation of neural crest-like cells from human periodontal ligament cell-derived induced pluripotent stem cells. J Cell Physiol. 2017;232(2):402-16.
http://www.remedypublications.com/dentistry-and-oral-biology/full-text/jdob-v2-id1091.php
1. What is a Semicolon? In writing, a semicolon (;) is a type of punctuation used to combine full sentences and share complicated lists. Semicolons let us clearly share two or more related ideas in one sentence, which keeps us from writing a bunch of short, awkward sentences about the same topic or thing. A good way to think about a semicolon's job is that it creates a stronger pause than a comma, but doesn't demand a complete stop like a period! 2. Examples Here are some examples of how we use semicolons to combine sentences and write out detailed lists: - I love ice cream; it is my favorite food. - I like cake; however, ice cream is my favorite dessert. - I know great ice cream shops in Burlington, Vermont; Wickford, Rhode Island; Wakefield, Rhode Island; and Chester, New Jersey. 3. Ways to use Semicolons Semicolons have two main functions: to combine full sentences, and to share complicated lists clearly. The correct ways to use them are pretty specific, which leads many writers to use them the wrong way or not at all. These sections will show you how to use them properly! a. To Combine Full Sentences A semicolon's first job is to combine two or more independent clauses, putting together several full sentences about related things. You cannot use semicolons to combine an independent clause with a dependent clause, in other words, a complete sentence with an incomplete sentence (see How to Avoid Mistakes). There are two ways we use semicolons to combine independent clauses. Combining two independent clauses The first way to use a semicolon is right between two independent clauses (which each have a subject and a predicate), with no other connecting words. You should use a semicolon in this way when you want to share related things that are different but equally important, in one sentence instead of two (or more), like this: - Dessert is the best meal of the day; it's definitely my favorite! - There is one thing I know; ice cream is the best dessert. Each of the sentences above shares two independent thoughts related to dessert, and neither is particularly more important than the other. Semicolons are the best option here—a period would make them too choppy, and a comma would make a comma splice (see How to Avoid Mistakes). Also, notice that the word after the semicolon is not capitalized; unlike a period, there is no need to capitalize the first word after a semicolon. Combining two independent clauses with a transition You can also use a semicolon to combine two independent clauses that are separated by a transition word (a conjunctive adverb), like however, thus, moreover, therefore, and so on, like this: - Sometimes I have frozen yogurt; however, it's not as good as ice cream. - They were out of Rocky Road; thus, I was forced to choose another flavor. These examples are similar to the way you combine clauses with only a semicolon, but sometimes adding a transition word strengthens the meaning of the sentences. For example, using a semicolon and "thus" in the second sentence makes the speaker's situation seem more dramatic, emphasizing that he really wanted Rocky Road. Really, when you use a true coordinating conjunction like but, it's okay to use a comma instead of a semicolon. In those cases, the pause that the punctuation creates is up to the writer, and a semicolon is a bit more formal. b. To List Things A semicolon's second job is to help make detailed lists. Sometimes we need to share a lot of detailed information in one sentence, and that can be confusing for readers if it isn't punctuated the right way.
Semicolons are very helpful for that situation! Here are some examples: - John has lived in Atlanta, Georgia; Seattle, Washington; and Miami, Florida. - Rocky Road has chocolate, peanuts, and marshmallows; Cookies and Cream has chocolate sandwich cookies; Neapolitan has chocolate, vanilla, and strawberry in one. In the first sentence, the semicolons let you see the three detailed places where John has lived. The second clearly describes what's in each of the different types of ice cream. The semicolons make these very descriptive sentences easy to understand—without them they would be pretty messy. 4. How to Avoid Mistakes Semicolon mistakes are pretty common. But, these few things can help you avoid those mistakes in your writing: a. Semicolons and colons are different! A colon is different from a semicolon. The two have different jobs, and shouldn't be used interchangeably. A colon lets the reader know that something else is coming after the first thought of a sentence. For instance, when a list is about to come, you need a colon, not a semicolon, like this: - Jane likes three flavors: chocolate, vanilla, and strawberry. Correct! - Jane likes three flavors; chocolate, vanilla, and strawberry. INCORRECT To be clear, as in this example, colons can combine independent and dependent clauses, which semicolons cannot do (see below). b. Semicolons and commas are different, too! Today, a lot of people use commas instead of semicolons—sometimes that's ok. But truly, commas have different jobs than semicolons, and there are some cases where you should use one and not the other. For instance, when combining two independent clauses, if you aren't using a transition, then you have to use a semicolon. Otherwise, you get a comma splice, like this: - I love ice cream, I eat it every day. Comma splice, INCORRECT - I love ice cream; I eat it every day. Correct! Next, for simple lists, you only need to use commas, not semicolons; however, for more complicated lists, you should use semicolons, not commas. These sentences show why: - My three favorite flavors are strawberry, chocolate, and vanilla. - Jane likes her ice cream four ways: with hot fudge, cherries, and whipped cream; with caramel sauce, whipped cream, and bananas; with hot fudge and peanuts; and with just sprinkles. As you can see, it's definitely unnecessary to use semicolons in the first sentence because it only lists three simple things. But, the second sentence would be very confusing if it only used commas. Finally, to combine independent and dependent clauses, you need a comma, not a semicolon. This leads us to the last rule: c. A semicolon can't combine an independent clause with a dependent clause. As mentioned, when combining sentences you can only use semicolons to put independent clauses together, NOT to combine an independent clause with a dependent clause: - Yesterday after work; I ate three bowls of ice cream. INCORRECT - Yesterday after work, I ate three bowls of ice cream. Correct! You can only use semicolons between independent clauses—complete sentences!
https://englishsentences.com/semicolon/
Hepatitis A, B, C, D, and E: Diagnosis, Management, and Treatment Viral hepatitis is "an infection caused by viral microorganisms and attacks the human liver directly" (Moore, 2006, p. 23). Some chronic cases of viral hepatitis have the potential to cause cirrhosis (also known as scarring), cancer, and liver failure (Moore, 2006). These cases of hepatitis have been observed to be life-threatening. Scientists and medical experts have identified five major types of these disease-causing viruses. These viruses are identified using the letters A, B, C, D, and E. Hepatitis A Virus: This virus is common in human stools and can be transmitted through contaminated food and water. Sexual intercourse is also known to transmit this virus (Moore, 2006). Patients with hepatitis A show mild symptoms and recover within a short time. Hepatitis B Virus: Infected blood, body fluids, and semen can transmit this microorganism (Moore, 2006). Mothers can transmit the virus to their newborn babies. Blood transfusion and needle-sharing have also been observed to transmit the virus (Moore, 2006). Hepatitis C Virus: This virus is widely transmitted through contaminated human blood. Transfusions and sharing of needles can also transmit the hepatitis C virus. Sex can also "transmit the virus but chances are minimal" (Ghany, Strader, Thomas, & Seeff, 2009, p. 1343). Hepatitis D Virus: This virus usually affects individuals with hepatitis B. This kind of dual infection affects the health outcomes of the affected patients (Moore, 2006). Hepatitis E Virus: Medical experts have indicated that contaminated food materials, fruits, and water can transmit this virus (Moore, 2006). The virus is also common in different developing and underdeveloped regions. Symptoms Hepatitis is caused by several different viruses. However, the notable fact is that the five hepatitis types show similar signs and symptoms. To begin with, these types of hepatitis are characterized by liver inflammation (Ghany et al., 2009). The viruses can cause short-term, acute, and chronic hepatitis. Chronic hepatitis is known to "cause various health problems such as cancer, cirrhosis, and failure" (Moore, 2006, p. 48). These conditions produce various symptoms that must be examined carefully by caregivers. Patients with hepatitis will have yellowish eyes and skin. This condition is given the name jaundice (Ghany et al., 2009). Fatigue, headache, and abdominal pain are common symptoms associated with hepatitis. The affected persons might also have nausea. Loss of appetite is another common issue associated with this condition (Ghany et al., 2009). The affected individuals might complain of diarrhea and vomiting. Moore (2006) indicates that "low grade fever is common in patients with different variants of hepatitis" (p. 76). The other important thing to consider is that the condition might produce no symptoms in some individuals. Prevention Hepatitis is a common condition that affects the health outcomes of many people. However, human beings can use various strategies to prevent and control this disease. To begin with, some forms of hepatitis can be prevented using effective vaccines. For example, hepatitis A, B, and E can be prevented using various vaccines (the hepatitis E vaccine is not yet widely available); because hepatitis D occurs only alongside hepatitis B, the hepatitis B vaccine protects against it as well. Moore (2006) argues that "family members, sex partners, and friends of individuals with chronic hepatitis should also be vaccinated" (p. 64). Blood for transfusion should be carefully screened in order to minimize cases of this condition. People should avoid sharing needles and other piercing objects.
On top of that, people can undertake a wide range of hygienic practices to help prevent this condition. For example, people should engage in protected or safe sex. Hand-washing is a positive approach towards minimizing chances of infection. Cuts should be carefully cleaned and disinfected. Ghany et al. (2009) indicate that "blood spills should also be cleaned up thoroughly" (p. 1362). Individuals who use illegal drugs such as cocaine and heroin should avoid sharing needles. Body organs for donation should also be screened in order to minimize chances of transmitting hepatitis to unsuspecting recipients (Moore, 2006). Body fluids and blood from other people should be avoided. These measures have been observed to prevent the condition. Treatments For many years, hepatitis remained a major health challenge due to the lack of adequate treatment methods. Individuals with chronic hepatitis can now benefit from different treatment options. However, the first important thing to consider is that not all infected persons should use medicines (Ghany et al., 2009). That being the case, patients should consult their healthcare practitioners in order to use the best treatment method. The first treatment option includes the use of antiviral medications. Some of "the widely used medicines include adefovir (Hepsera), Epivir (lamivudine), Baraclude (entecavir), and Tyzeka" (Moore, 2006, p. 104). These medicines have the potential to control the virus and minimize the level of liver damage. Intron A is another synthetic drug used by individuals who want to avoid the different antiviral medications. However, this drug has several side-effects such as chest tightness and depression (Ghany et al., 2009). A patient with a severely damaged liver can get a transplant. The liver for transplant can come from a deceased person. Combination therapy is also used to support the health needs of patients with other complications such as HIV (Moore, 2006). Physicians should therefore be able to identify the most appropriate treatment options for their patients in order to produce positive results. Reference List Ghany, M., Strader, D., Thomas, D., & Seeff, L. (2009). Diagnosis, Management, and Treatment of Hepatitis C: An Update. Hepatology, 49(4), 1335-1374. Moore, E. (2006). Hepatitis: Causes, Treatments and Resources. Jefferson, NC: McFarland and Company.
https://studykraken.com/hepatitis-a-b-c-d-and-e-diagnosis-management-and-treatment/
The Gender and Development Office, led by Ms. Zarah Annudin, conducted a Gender Sensitivity Orientation in Barangay Bisocol on September 6, 2022. Ms. Annudin's group, together with Dr. Jocelyn De Vera, communicated this activity to Punong Barangay Bernardino Humilde and the members of the KALIPI. This program seeks to foster openness in understanding gender and development among men and women in the chosen barangay, promote equal opportunities and active involvement of men and women in the community, and raise awareness of crucial GAD-related legislation and ideas. The program started with the preliminaries at 8:30 in the morning, followed by a talk from Dr. Ellen Grace Ugalde. She covered several topics, including the distinctions between sex and gender. She emphasized that sex refers to the physical aspects of women's and men's bodies, while gender is determined and expressed based on a person's surroundings. During the open conversation about gender roles, masculinity, femininity, patriarchy, gender-based violence, and sexual harassment, the level of discourse increased; the attendees engaged enthusiastically and contributed their experiences and expertise on these themes. Dr. Ugalde stated that understanding gender is crucial, particularly for comprehending the people around us. It examines how social norms and power structures influence the lives and possibilities of various groups of men and women, and how we can promote equality and mutual respect in our community. Gender and social construction was the second topic, discussed by Mr. Brandy Celino, the guidance counselor of PSU-ACC. He mentioned that gender as a social construct includes the norms, behaviors, and roles associated with being a woman, man, girl, or boy, as well as relationships with each other. As a social construct, gender varies from society to society and can change over time. The topic is also related to Dr. Ugalde's discussion. Moreover, in the discussion, Mr. Celino emphasized the different ways to create a gender bias-free home. He mentioned the following: • Check your own biases; • Have open discussions at home about the way chores are divided up; • Ask children for their feedback about these family practices; • Provide children of both genders with books and movies that feature nontraditional gender roles; and • Encourage kids to try all extracurricular activities and discuss why they might feel more comfortable in some pastimes than others. Through this activity, the participants were informed about the importance of gender sensitivity in society. The next part of the program was the awarding of certificates to the speakers as well as to the adopted barangay. Barangay Captain Humilde also expressed his gratitude to PSU-ACC for always supporting them and considering them in every project. Indeed, this activity reflects Pangasinan State University's aim to develop gender-sensitive, highly principled, morally upright, innovative, and globally competent women and men capable of meeting the needs of industry, public service, and civil society through gender-responsive instruction, research, extension, and production. This activity is simply the beginning of fostering an atmosphere that is gender-responsive and compassionate.
https://alaminos.psu.edu.ph/%F0%9D%90%93%F0%9D%90%8E%F0%9D%90%96%F0%9D%90%80%F0%9D%90%91%F0%9D%90%83%F0%9D%90%92-%F0%9D%90%84%F0%9D%90%90%F0%9D%90%94%F0%9D%90%88%F0%9D%90%93%F0%9D%90%98-%F0%9D%90%8F%F0%9D%90%92%F0%9D%90%94/
J.D. Salinger's books unlikely to become films, TV series
The famously reclusive author J.D. Salinger has died, but chances remain slim to none that any adaptation of his classic literary works will reach the screen or stage. With more than 65 million copies of "The Catcher in the Rye" in print, many have sought to turn Salinger's stories into movies, Broadway shows or book sequels over the past 63 years, but the author always adamantly refused. That isn't about to change -- all because Salinger was unhappy about the one time he allowed an adaptation. Salinger, who died Wednesday at age 91 in Cornish, N.H., agreed to have one of his short stories, "Uncle Wiggily in Connecticut," made into a movie, which was released in 1949 as "My Foolish Heart." The film was a critical and commercial failure and apparently an affront to the author, who vowed never again to make the mistake of allowing others to interpret his vision. Ever since, numerous producers, filmmakers, authors and stage directors have sought rights to his 1951 novel, "The Catcher in the Rye," as well as to his 1961 book "Franny and Zooey" and other stories. In 2008, the rights to his works were placed in the J.D. Salinger Literary Trust, of which the author was sole trustee. Phyllis Westberg, who was Salinger's agent at Harold Ober Associates in New York, declined Thursday to say who the trustees are now that the author is dead -- but she was clear that nothing has changed in terms of licensing movie, TV or stage rights. "Everybody knows that he did not want it to happen, and the trust will follow that," Westberg told THR. In its most recent legal action, the trust last year sued to successfully stop publication of the novel "60 Years Later: Coming Through the Rye" by Fredrik Colting of Sweden. It was described as a sequel that picked up the story of "Rye" protagonist Holden Caulfield 60 years later in a rest home, where he reflected on his life. The U.S. District Court in New York rejected Colting's claim of fair use, ruling the novel borrowed too much from the original to be considered a parody. Thus, it violated the copyright, which Salinger had renewed in 1979. Among those who have sought unsuccessfully to win rights to "Catcher" over the years were producer Samuel Goldwyn, director Billy Wilder and actors Marlon Brando, Jack Nicholson, Tobey Maguire, Leonardo DiCaprio, John Cusack and Jerry Lewis. The last reportedly tried many, many times. Goldwyn exchanged letters with Salinger in the early '50s in which the author discussed mounting a play in which he would play Caulfield opposite Margaret O'Brien, and said that if he couldn't play the part himself, Goldwyn should "forget about it." Writer Joyce Maynard, who wrote about her affair with Salinger, confirmed 50 years after the book was published that the only person he ever would consider to play the part of Caulfield would have been the author himself.
https://www.hollywoodreporter.com/news/still-no-screen-tests-holden-20079
September 8, 2020 Royal Philips today announced the results of its latest piece of Future Health Index (FHI) 2020 research. The Future Health Index Insights: COVID-19 and Younger Healthcare Professionals survey supplements the main FHI 2020 report, capturing feedback from 500 doctors under the age of 40 in five countries: the United States of America, China, Singapore, France and Germany. Royal Philips reports the findings reveal how the COVID-19 pandemic has affected the attitudes and experiences of younger doctors, and how they believe the healthcare industry should change in response. "Healthcare professionals, including the younger generation, have experienced unprecedented levels of stress and were often faced with limited resources in recent months. We must acknowledge the heroic sacrifices that frontline healthcare professionals have endured in the fight against COVID-19. We owe it to them to listen to their voices as we consider the future of the healthcare industry," said Jan Kimpen, Chief Medical Officer, Royal Philips. "Our FHI Insights survey reveals that despite the challenges they've faced, younger doctors are as committed as ever to their vocation. The research spotlights how young doctors perceive change, and is relevant to leaders focused on reshaping how healthcare is being organized and delivered." Telehealth overtakes AI in the eyes of younger doctors The COVID-19 pandemic has prompted younger doctors to change their attitudes to the relative benefits of different health technologies. It has led to a shift in priorities, with younger doctors recognizing the immediate value of telehealth. Before the pandemic, 60% of younger healthcare professionals ranked AI as the top digital health technology that would most improve their work satisfaction, with 39% identifying telehealth as the top technology. Now, 61% of younger doctors rank telehealth as the digital health technology that would have most improved their experiences at this time, with AI falling to 53%. Younger doctors surveyed believed that there is room for improvement in how these technologies are used in everyday practice. When asked what would have helped them leverage the health data available to them during the height of the pandemic, nearly half (47%) of younger doctors pointed to better integration of healthcare data between hospitals/practices and between different IT systems or electronic medical records. Younger doctors want more digital technology For many younger doctors, working through COVID-19 has shown what a more technologically forward-thinking workplace could look like, with 44% reporting the pandemic exposed them to new ways of using digital health technologies. As the healthcare sector prepares for the future, many younger doctors hope these advancements will become permanent fixtures of their post-COVID-19 workplace environments. When asked what changes in healthcare they most hoped would outlast the pandemic, younger doctors ranked exposure to new types of digital health technologies (29%), new ways to use digital health technologies (29%), greater appreciation from patients (29%), and accelerated availability of digital health technologies (28%) as their top responses. Many younger doctors are more committed than ever to their careers The pandemic is presenting healthcare professionals with even greater workplace hardships and moral dilemmas, which are very likely to exacerbate existing levels of burnout and related mental health problems.
However, according to the FHI 2020 Insights survey, many younger doctors (38%) say they are more likely to stay in medicine as a result of their experiences working during COVID-19. Most (53%) said COVID-19 had no effect on them wanting to stay in or leave the profession, and only 9% said they were more likely to leave the profession. Many younger doctors also reported changes in their day-to-day work during the pandemic, which could lead to increased career and personal satisfaction. 47% reported greater appreciation from patients, while 44% experienced greater collaboration with colleagues across different skill sets. Younger doctors in China stood out by reporting a deeper feeling of purpose at work (70%) since the onset of COVID-19. Since 2016, Philips has conducted original research to help determine the readiness of countries to address global health challenges and build efficient and effective healthcare systems. For details on the Future Health Index methodology and to access the 2020 report in its entirety, including the FHI Insights: COVID-19 and Young Healthcare Professionals research, visit the Future Health Index site.
https://infomeddnews.com/royal-philips-new-insights-younger-doctors-commitment-improving-healthcare-during-covid-19/
Time as a Dimension of the Digital Divide: Profiles over Time of Students Taking Online, Face-to-Face, or Mixed Delivery Classes at a Large Virtual University. AIR 2001 Annual Forum Paper. Wisan, Gail; Roy, Pallabi Guha; Pscherer, Charles P., Jr. A large virtual university, a participant in a major distance study, is tracking students' enrollment in online or both online and face-to-face classes (i.e., mixed). Although an online student profile provides data for examining the digital divide, one-time snapshots are inadequate. Time must be included as a dimension of any analysis of demographic groups' participation in online education. Two aspects of time were analyzed: calendar time (3 years of trend data) and time in relationship to degree. The paper provides data on the ethnic, gender, age, and geographic distribution of online and "mixed" students. In all, data were available for 16,092 students in 1999, 18,311 in 2000, and 20,920 in 2001. Trend data on how ethnic groups and other demographic groups are self-selecting classes with different delivery formats speak more directly to understanding the digital divide. The paper provides 3 fiscal years of percentages (FY 1999 to FY 2001) of different demographic groups' (ethnic, gender, age, and geographic) enrollment in online, mixed, and face-to-face education at a large, substantially virtual university during a period of rapid expansion in online education. The paper discusses the implications for the digital divide of this enrollment trend data. (Contains 3 figures, 9 tables, and 14 references.) (SLD) Publication Type: Reports - Research; Speeches/Meeting Papers Education Level: N/A Audience: N/A Language: English Sponsor: N/A Authoring Institution: N/A Note: Paper presented at the Annual Meeting of the Association for Institutional Research (41st, Long Beach, CA, June 3-6, 2001).
https://eric.ed.gov/?id=ED457743
The New York Appellate Court ruled on 11 December 1977 in favor of Steven and Hetty Park and against Herbert Chessin for the wrongful life of the Parks' child. In a wrongful life case, a disabled or sometimes deceased child brings suit against a physician for failing to inform its parents of possible genetic defects, thereby causing harm to the child when born. Park v. Chessin was the first case to rule that medical personnel could be legally responsible for wrongful life. Further cases such as the 1979 case Berman v. Allan and the 1982 case Turpin v. Sortini followed. Format: Articles Subject: Legal, Reproduction Litowitz v. Litowitz [Brief] (2002) Pursuant to an express provision of the embryo disposition contract they both signed, a husband and wife had to petition the court for instructions because they could not reach an agreement about what to do with frozen embryos when they divorced. The trial court awarded the pre-embryos to the husband and the Court of Appeals affirmed this decision. However, the Washington Supreme Court ruled that the pre-embryos should be thawed out and allowed to expire because the dispute had not been resolved within a five-year time frame prescribed by the Cryopreservation Agreement. Format: Articles Subject: Legal, Reproduction "Ethical Issues in Human Stem Cell Research: Executive Summary" (1999), by the US National Bioethics Advisory Commission Ethical Issues in Human Stem Cell Research: Executive Summary was published in September 1999 by the US National Bioethics Advisory Commission in response to a national debate about whether or not the US federal government should fund embryonic stem cell research. Ethical Issues in Human Stem Cell Research recommended policy to US President William Clinton's administration, advocating federal funding for research on stem cells that came from embryos left over from in vitro fertilization (IVF) fertility treatments. Format: Articles Turpin v. Sortini (1982) The Supreme Court of California reversed the Superior Court of Fresno County's decision to dismiss the Turpins' claims in the case Turpin v. Sortini on 3 May 1982. The case was based upon a wrongful life claim, in which a disabled child sues physicians for neglecting to inform its parents of potential genetic defects, resulting in harm to the child when it is born. The Turpin case determined that a physician could be liable for failing to inform parents of potential birth defects in the fetus. Format: Articles Subject: Legal, Reproduction Doolan v. IVF America [Brief] (2000) The implication of the court's decision was that Thomas Doolan's identity or personhood existed at the embryo stage in vitro, thus the fact that he was born with cystic fibrosis was not attributable to the decision of the in vitro fertilization providers to implant one embryo instead of another. The other unused embryo may not have carried the cystic fibrosis genes, but that other embryo was not Thomas Doolan. The decision in Doolan has not been publicly tested in other jurisdictions. Format: Articles Subject: Legal, Reproduction A.Z. v. B.Z. [Brief] (2000) The Massachusetts Supreme Court in a case of first impression decided that a prior written agreement between a husband and wife regarding the disposition of frozen embryos in the event of a divorce was unenforceable. This was the first case to reject the presumption that written agreements to conduct in vitro fertilization practices were binding.
The court would not force the husband to become a parent merely because he signed a consent form that would have awarded the frozen embryos to his wife in the event of marital separation. Format: Articles Subject: Legal, Reproduction ABO Blood Type Identification and Forensic Science (1900-1960) The use of blood in forensic analysis is a method for identifying individuals suspected of committing some kinds of crimes. Paul Uhlenhuth and Karl Landsteiner, two scientists working separately in Germany and Austria in the early twentieth century, showed that there are differences in blood between individuals. Uhlenhuth developed a technique to identify the existence of antibodies, and Landsteiner and his students showed that humans had distinctly different blood types called A, B, AB, and O. Format: Articles Subject: Theories, Legal, Technologies China's One-Child Policy In September 1979, China's Fifth National People's Congress passed a policy that encouraged one-child families. Following this decision from the Chinese Communist Party (CCP), campaigns were initiated to implement the One-Child Policy nationwide. This initiative constituted the most massive governmental attempt to control human fertility and reproduction in human history. These campaigns prioritized reproductive technologies for contraception, abortion, and sterilization in gynecological and obstetric medicine, while downplaying technologies related to fertility treatment. Format: Articles Subject: Ethics, Legal, Reproduction President George W. Bush's Announcement on Stem Cells, 9 August 2001 On 9 August 2001, US President George W. Bush gave an eleven-minute speech from his ranch in Crawford, Texas, on the ethics and fate of federal funding for stem cell research. Bush also announced the creation of a special council to oversee stem cell research. In the speech President Bush acknowledged the importance of issues surrounding stem cell research to many Americans, presented different arguments in favor of and opposing embryonic stem cell research, and explained his decision to limit but not completely eliminate potential federal funding for embryonic stem cell (ESC) research. Format: Articles Subject: Legal South Korea's Bioethics and Biosafety Act (2005) The South Korean government passed the Bioethics and Biosafety Act, known henceforth as the Bioethics Act, in 2003 and it took effect in 2005. South Korea's Ministry of Health and Welfare proposed the law to the South Korean National Assembly to allow the progress of biotechnology and life sciences research in South Korea while protecting human research subjects with practices such as informed consent. The Bioethics Act establishes a National Bioethics Committee in Seoul, South Korea. Format: Articles Assisted Human Reproduction Act (2004) The Assisted Human Reproduction Act (AHR Act) is a piece of federal legislation passed by the Parliament of Canada. The Act came into force on 29 March 2004. Many sections of the Act were struck down following a 2010 Supreme Court of Canada ruling on its constitutionality. The AHR Act sets a legislative and regulatory framework for the use of reproductive technologies such as in vitro fertilization and related services including surrogacy and gamete donation. The Act also regulates research in Canada involving in vitro embryos. Format: Articles Subject: Legal, Reproduction, Ethics Golden Rice Golden Rice was engineered from normal rice by Ingo Potrykus and Peter Beyer in the 1990s to help improve human health.
Golden Rice has an engineered multi-gene biochemical pathway in its genome. This pathway produces beta-carotene, a molecule that becomes vitamin A when metabolized by humans. Ingo Potrykus worked at the Swiss Federal Institute of Technology in Zurich, Switzerland, and Peter Beyer worked at University of Freiburg, in Freiburg, Germany. The US Rockefeller Foundation supported their collaboration. Format: Articles The Singapore Bioethics Advisory Committee Established in tandem with Singapore's national Biomedical Sciences Initiatives, the Bioethics Advisory Committee (BAC) was established by the Singapore Cabinet in December 2000 to examine the potential ethical, legal, and social issues arising from Singapore's biomedical research sector, and to recommend policy to Singapore's government. Format: Articles Subject: Organizations, Ethics, Legal Dickey-Wicker Amendment, 1996 The Dickey-Wicker Amendment is an amendment attached to the appropriations bills for the Departments of Health and Human Services, Labor, and Education each year since 1996 restricting the use of federal funds for creating, destroying, or knowingly injuring human embryos. The Dickey-Wicker Amendment began as a rider (another name for an amendment) attached to House Resolution (H.R.) 2880. H.R.
https://embryo.asu.edu/search?text=Virginia%20State%20Colony%20for%20Epileptics%20and%20Feeble%20Minded&f%5B0%5D=dc_description_type%3A35&f%5B1%5D=dc_subject_embryo%3A210&page=4
Q: Why does this rational function have a false slant/oblique asymptote? Let's examine the following rational function: $f(x) = \frac{3x^3+2}{x^2-x-7}$. Considering that the degree of the polynomial in the numerator is 1 greater than that of the denominator, it can be assumed that the function possesses no horizontal asymptote, but possesses a slant, or oblique, asymptote. As a result of long division, the slant asymptote appears to be $y = 3x + 3$. However, on a graph, the function $y = 3x + 3$ intersects with the original function, $f(x) = \frac{3x^3+2}{x^2-x-7}$, at the coordinate $(-.9583333..., .125)$. Why is this the case? What condition prevents $y = 3x + 3$ from being a true asymptote of the function $f(x) = \frac{3x^3+2}{x^2-x-7}$ if long division produces a result declaring otherwise? A: Notice that the intersection of the function with the asymptote does not prevent the line from being an asymptote. Broadly speaking, an "asymptote" as $x \to \infty$, for example, means that the farther we go with $x$, the closer the function gets to the line, but it may intersect the line an infinite number of times. For example, consider $\frac{x^2+\sin(x)}{x}$ and see what happens.
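For concreteness, here is the arithmetic behind both observations, worked out as a supplement to the answer above. Polynomial long division rewrites $f$ as the candidate asymptote plus a remainder term:

$$\frac{3x^3+2}{x^2-x-7} = 3x + 3 + \frac{24x+23}{x^2-x-7}.$$

The vertical gap between the curve and the line is exactly the remainder term $\frac{24x+23}{x^2-x-7}$, which tends to $0$ as $x \to \pm\infty$; that vanishing gap is the defining property of an oblique asymptote, so $y = 3x+3$ is a true asymptote. The curve meets the line precisely where the remainder vanishes, i.e. where $24x + 23 = 0$, giving

$$x = -\frac{23}{24} \approx -0.9583, \qquad y = 3\left(-\frac{23}{24}\right) + 3 = \frac{1}{8} = 0.125,$$

which is exactly the crossing point reported in the question. Since the linear numerator $24x+23$ has only one root, this particular curve crosses its asymptote exactly once and then approaches it from one side.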
Written by Sharon McElhone The looming question since the Great Recession, the invention of the Kindle, and the hijacking of content by corporate giants like Amazon and Google has always been, can the publishing industry survive the onslaught? For about a decade, a dark cloud has hovered over newspapers, writers, agents, editors, and publishers alike as they found it increasingly difficult to make money in an industry that was already difficult to survive in. Times have been bleak for writers and all their affiliates, but lately it feels like the purpose of the writer is being re-established. On March 31st, writers, agents, publishers, and editors found less darkness and instead a renewed sense of optimism. The environment was cheery as people congregated inside the iconic Women's Building on 18th Street in San Francisco. Pitch-O-Rama 2018, which ran from 8 a.m. to 12:30 p.m., didn't disappoint. The venue sold out. The intimate space filled up with both new and long-time professionals, both women and men in the writing industry. It felt like a dawn of sorts, as if all the chaos and confusion caused by the past upheavals had finally settled and professionals in the industry had a sense of how to move forward again. A great feeling of community emanated all morning. The morning started off with coffee and a pre-pitch coaching session led by WNBA members Betsy Graziani Fasbinder, Mary E. Knippel, and Amanda McTigue. The pre-pitch coaching session allowed writers to practice their pitches before meeting with agents, editors, and publishers. Small group break-outs in an intimate setting helped ease jitters before the actual pitch sessions began. When the half-hour of coaching finished, writers spent the next three hours delivering pitches to the agents, editors and publishers of their choice in 6-minute time slots. It was like speed dating for writers. A pitch for a book was made, connections happened, and cards got exchanged. The morning ended with a panel discussion on marketing and craft led by WNBA president Brenda Knight. The WNBA sponsors this annual event for a morning full of expert advice and networking, with the potential of finding an agent, publisher, or editor for a particular body of work. Breakfast is also served. This year, in attendance were agents Lisa Abellara and Dorian Maffei of Kimberley Cameron and Associates, Michael Larsen of Larsen-Pomada, Laurie McLean of Fuse Literary, Kristen Moeller of Waterside Productions, Andy Ross, and Jennifer March Soloway of Andrea Brown Literary Agency, among others. Publishers included She Writes Press, Smashwords, New World Library, and HeyDay. WNBA board members and volunteers make this event possible each year. The work that it takes to put on these events is no small thing: getting up at 4:30 a.m. the day of the event and the prep months beforehand. As some of us sat behind the breakfast table serving bagels and homemade apple coffee cake, attendees, both women and men, came up to say things like "Glad I came," "A pleasant surprise," "It felt very warm," and "I would like to become a member and help." Those are the kinds of exchanges that mean something good happened that day. The publishing industry and the writer found their place again on the other side of what has been shrouded in uncertainty for far too long. WNBA-SF board member Sharon McElhone is a journalist, columnist, and author of six books.
Her articles have appeared in La Oferta, Orchard Valley Review, The Cupertino Courier, The Sunnyvale Sun, among other publications. Her column is called “Middle America-Our Engine,” and can be viewed online at La Oferta. Her fiction has appeared in The New Short Fiction Series 2012 in Los Angeles, Label Me Latina/o Spring 2015 and in the 2017 anthology Basta! She is half Ecuadorian and half Irish and lives in Silicon Valley with her husband and children. She is working on a memoir related to childcare, a novel, and a fourth collection of poems.
https://wnba-sfchapter.org/pitch-o-rama-2018-highlights/
This policy is for all staff, pupils/students, parents and carers, governors, visitors and partner agencies working within the school and provides guidelines and procedures as to how our school supports and responds to behaviour. Stoneyholme Community Primary School Behaviour Policy Article 29: Education must encourage the child’s respect for human rights, as well as respect for their parents, their own and other cultures, and the environment. This policy must be implemented in conjunction with: Vision/Ethos At Stoneyholme Primary School we promote positive pupil behaviour through ‘Every Child Matters and Everyday Counts’. This simply means that everyone is important, and every minute of every day learning takes place. Expectations In order to help children to feel safe and learn, their educational environment needs to be high in both nurture and structure. Children need predictable routines, expectations and responses to behaviour. We are proud to be a Trauma Informed School (TiS). For us this means that we aim to have TiS approaches at the core of our whole school ethos and across our whole setting. Trauma Informed Schools (TiS) TiS is a dynamic, developmental approach to working with children that supports their emotional and social wellbeing. It is based on the latest research in neuroscience, attachment theory and child development, drawing on research into the role of creativity and play in developing emotional resilience. Knowledge of social and emotional learning supports the school in planning experiences, activities and opportunities and reinforces our understanding that learning happens across the whole day, especially during break times where less structured interactions enable pupils to develop their social and emotional learning and apply skills that are vital for healthy development. We recognise that it is important for adults to understand where a child is in terms of their mental and emotional health and this approach supports staff with how to differentiate their relationship with children in order to support their development. It also gives basic guidance so that some change can be made through understanding where the child is functioning from and practical activities, which facilitate the development of this relationship. As part of this, the school also has access to a comprehensive and flexible reporting tool for tracking change over time, for both individuals and groups of pupils. Learning to be skilful in relationships and ready for challenges requires experiencing, descriptive feedback, reflection, modelling and teaching from adults and peers. Addressing early emotional developmental needs builds resilience, decreases the risk of mental illness, prepares children to take their place within a community and equips them to be ready and willing to learn. Life events can introduce episodes, which become interruptions to some children’s development. The TiS programme supports adults in creating a differentiated provision in response to need with reparative strategies as part of systematic actions. With a programme of continuous development, our vision is for all our staff to receive regular training and to use this insight to build healthy development, encourage pupils to increasingly self-regulate and embed strategies in social and emotional learning and positive behaviour choices, therefore underpinning academic progress. 
Introduction The Department for Education guidance for headteachers and school staff of maintained schools, which outlines the statutory duty of schools in relation to developing a behaviour policy, is largely based on a behaviourist approach. "Headteachers, proprietors and governing bodies must ensure they have a strong behaviour policy to support staff in managing behaviour, including the use of rewards and sanctions" (DfE, Behaviour and discipline in schools: Advice for headteachers and school staff, published July 2013; last updated January 2016). Although behaviourist approaches can work for the majority of children, they are not successful with all. This is especially true for those who have experienced Adverse Childhood Experiences (ACEs) – traumatic life experiences that occur before the age of 18. For children who have experienced trauma and loss, including vulnerable groups such as children in care, children at the edge of the care system, and children previously in care, behaviourist approaches often serve to re-traumatise them and do not teach them how to express their emotions in a more appropriate manner. As a school we believe in a nurturing approach where every child feels listened to. The commitment of staff to the emotional well-being of the pupils is a particular strength of our school. Each class' learning charter underpins this and promotes a positive approach to the education and pastoral management of each individual pupil. We reward and celebrate achievement, which has an impact on the pupil's self-esteem, confidence and happiness. All pupils know that they are safe and secure – and that their contributions and achievements are respected and valued. Aims It is acknowledged that members of the school community may have very different parenting experiences and views on behaviour. However, the aim of our Behaviour Policy is to bring us all together to adhere to some basic key principles and practices that reflect our school ethos and reflect our mission statement 'Every Child Matters and Everyday Counts.' To ensure that children, staff, governors and parents are fully aware of the expected behaviour of children both in lessons and around the school. Classroom Environment "Classroom management is not about having the right rules, it's about having the right relationships." 'Feel The Difference: Learning in an Emotionally Literate School' Lynne Gerlach / Julia Bird (2006). Only when children feel a sense of being heard, understood and cared about, can they begin to express their emotions in a more acceptable way, which will benefit everyone. Each class has developed their own class charter to enable all children to get the most out of all learning opportunities. It is visible throughout all classrooms within the school and must be endorsed by all members of staff. It states the rights and responsibilities of both teachers and pupils in order to create the most effective teaching and learning environment. RIGHTS Every child has the right to an education (Article 28) and to learn in a productive, stimulating environment, where everyone has the right to feel safe and be treated with dignity and respect (Article 2 non-discrimination). RESPECT To ensure everyone has access to their rights, showing and demonstrating respect is essential. Where behaviours negatively affect the rights of others, teachers have the duty of care to respond and highlight the consequence to the child. The organisation of the classroom is fundamentally important in managing behaviour.
Teaching and learning should be interesting and varied and offer pupils a degree of choice. Account should be taken of pupils' preferred learning styles. Pupils should feel involved in the learning and teaching process. Well organised, purposeful cooperative learning activities can improve behaviour. Expectations should be regularly enforced and should be realistic but challenging. Teaching should encourage an accurate match between aspirations and ability. The teachers' every word and action should be based on the assumption that all pupils can achieve whatever is to be learned. Simple non-verbal encouragement (smile, thumbs up, etc.) is effective. Teachers should model good behaviour patterns and be aware of their own stress control techniques, where adults are in control but not 'controlling.' When pupils arrive in the classroom, initial contacts should be positive. Accusations should be avoided. The certainty of consequences is more important than their severity. Rewards We aim to recognise, acknowledge and celebrate good behaviour along with a child's effort and achievement regardless of ability (Article 2 non-discrimination). Children must expect their efforts to be recognised and we aim to maintain a culture where children want to succeed and are proud of their talents and success (Article 29: Goals of education). It is vital that there is an emphasis on praise rather than sanctions. The ultimate reward for good behaviour, effort and attendance will come from the opportunities that the child's success will bring in the future. However, we recognise that children need recognition for their achievement in the shorter term. Parents (duty bearers) will be informed of achievements and there will be opportunities to celebrate successes in the whole school achievement assemblies (star of the week). Some of the positive consequences for the good choices and good behaviour that children show include praise, star of the week awards, and the sharing of successes with parents and in achievement assemblies. Consequences Although we insist on a strong emphasis on acknowledging and rewarding positive behaviours, there will on occasion be some students who may struggle to follow agreed expectations. When a child is displaying inappropriate behaviours, we recognise that each situation will be absolutely unique to the child and therefore the response needed will be unique also. The situation and the factors involved will be considered carefully and responses will be made usually following a professional discussion between some/all of the following people: Headteacher, Assistant Heads, SENDCo, Learning Mentor, Class Teacher, Teaching Assistant. At every stage we will also maintain close communication with parents and carers. Children are given opportunities to reflect on their behaviour and suggest what should have happened or what we expect to see in the future using a restorative approach and questioning. It is essential that children are allowed to start each day with "a clean slate." This will restore the working relationship between staff and the child and place the emphasis back onto rewarding positive behaviour. Any negative behaviour from the previous day should have been dealt with at that time and should not be allowed to affect the following day. However, this does not mean that any strategy put in place to improve behaviour can be ignored, e.g. if a child has been given an ongoing sanction due to their behaviour, or has been asked to sit in a particular seat, then that arrangement remains in place for as long as is required.
Whole school strategy

We strongly believe that responding to the SEMH needs of children is not the responsibility of a few staff in school; it is everyone's responsibility. All members of staff are responsible for supporting the behaviour of children across the school: building relationships is everybody's business! Smiling at and greeting a child or young person on their way into school can really add to their sense of belonging and of feeling liked, respected and valued. Our positive approaches to behaviour involve us 'noticing' good choices, being explicit in descriptive praise and providing reward as reinforcement.

The Role of the Adults

We take a non-judgmental, curious and empathic attitude towards behaviour. We encourage all adults in school to respond in a way that focuses on the feelings and emotions that might drive certain behaviour, rather than on the behaviour itself. Children with behavioural difficulties need to be regarded as vulnerable rather than troublesome, and we all have a duty to explore this vulnerability and provide appropriate support. Staff focus on the central principles of empathy, connection, attunement, trust and co-regulation. This includes careful consideration and awareness-raising of both verbal and non-verbal communication. We believe our approach to behaviour supports staff to feel empowered to respond in a way that is empathetic but has boundaries: firm but kind.

Governors

The governing body has the responsibility of setting down these general guidelines on standards of discipline and behaviour, and of reviewing their effectiveness. The governors support the head teacher in carrying out these guidelines. The head teacher has the day-to-day authority to implement the school behaviour and discipline policy and procedures, but governors may give advice to the head teacher about particular disciplinary issues. The head teacher must take this into account when making decisions about matters of behaviour.

Headteacher

The head teacher and senior leadership team lead the whole school ethos and promote a consistent Behaviour Policy that is embedded across the school through policy development, displays, choice of language, non-verbal behaviours, and communication with parents/carers as well as those outside the school community.

Parent/Carer

"The parent-child connection is the most powerful mental health intervention known to mankind" (Bessel van der Kolk)

Stoneyholme Community Primary School recognises the importance of the parent/child relationship. We work collaboratively with parents/carers so children receive consistent messages about how to behave, and we aim to build a supportive dialogue between the home and the school. We inform parents/carers immediately if we have concerns about their child's welfare or behaviour, including where there is a pattern of regularly receiving warnings. If parents/carers have any concern about the way that their child has been treated, they should initially contact the class teacher. If the concern remains, they should contact the unit leader/head teacher, and if still unresolved, the school governors. If these discussions cannot resolve the problem, a formal grievance or appeal process can be implemented.

Physical restraint

All members of staff are aware of the regulations regarding the use of force by teachers, as set out in DfEE Circular 10/98, relating to section 550A of the Education Act 1996: The Use of Force to Control or Restrain Pupils.
Staff would only need to intervene physically to restrain children in order to prevent injury to a child, or if a child is in danger of hurting him/herself. The actions that we take are in line with government guidelines on the restraint of children.

Exclusions

Only the head teacher or deputy head teacher has the power to exclude a pupil from school. The head teacher may exclude a pupil for one or more fixed periods, for up to 45 days in any one school year, and may also exclude a pupil permanently. It is also possible for the head teacher to convert a fixed-term exclusion into a permanent exclusion, if the circumstances warrant this. The head teacher informs the local authority and the governing body about any permanent exclusion, and about any fixed-term exclusions beyond five days in any one term.

If the head teacher excludes a pupil, s/he informs the parents immediately, giving reasons for the exclusion. At the same time, the head teacher makes it clear to the parents that they can, if they wish, appeal against the decision to the governing body, and the school informs the parents how to make any such appeal. A committee, made up of between three and five governors, considers any exclusion appeals on behalf of the governing body. When an appeals panel meets to consider an exclusion, it considers the circumstances in which the pupil was excluded, considers any representation by parents and the local authority, and considers whether the pupil should be reinstated. If the governors' appeals panel decides that a pupil should be reinstated, the head teacher must comply with this ruling. The governing body itself cannot either exclude a pupil or extend the exclusion period set by the head teacher.

A less extreme form of exclusion may also be considered: this may, for example, involve lunchtime exclusion, or learning exclusion, where a pupil learns away from the class. School staff would consult with parents but do not need to report this.

Monitoring/recording

As outlined in the SEN Code of Practice and our local SEND Guide, we promote a differentiated approach following different levels of intervention using the Assess/Plan/Do/Review cycle. Appropriate target-setting and information-sharing is extremely important to ensure that bespoke provision and strategies are recorded using a range of suitable tools, such as IEPs and Provision Maps. These are jointly developed, agreed and reviewed, involving key adults.

There is a wide range of highly effective provision for managing the behaviour of pupils, including observation, unit meetings, CPOMS recording, communication with parents and support from SLT; these systems are in place to ensure that any issues are dealt with quickly. The excellent use of LSAs to support individual pupils is very effective in managing behaviour, and we use many small group interventions for targeted pupils to support their learning. Children are taught to take responsibility for their own behaviours, including making choices and accepting consequences.

We use various interventions, including assessment and monitoring tools/toolkits such as:
- The Boxall Profile
- The Strengths and Difficulties Questionnaire (SDQ)
- Various emotional literacy and social skills resources, as well as strategically planned social and emotional lunchtime activities.

We take a holistic approach to supporting children presenting with SEMH needs, for example through Early Help and TAF processes.
Review

"This policy functions as a practice guide and is therefore reviewed whenever issues arise which generate new ways to communicate our approach, and otherwise annually."
https://www.stoneyholme.lancsngfl.ac.uk/behaviour-policy/
Introduction

Work in progress (this may take a week or so to complete due to work responsibilities; I hope to have it complete by May 1, 2007). Note: this is a bit longer than I anticipated when I first started.

Once in a while, you look at a route thinking, "What does it take to climb that?" Or maybe you think that the climber who does that 5.X route (or that X(y) route) must be immortal (fill in whatever number or letter for X or Y you wish, depending on what grading system you use). Or you look at that perfect corner or crack that transects a face in awe, and maybe in envy of those who can do it. Or is it that picture of a frozen waterfall; pick any of Dow's photos. It doesn't take someone immortal to do them, just a little hard work and some faith.

You may also think: I'm into long, moderately easy climbs, so why should I worry about it? Think about a route such as the NE Ridge of Bugaboo Spire (10-11 pitches up to 5.8). In the process of schlepping all your gear up and over the route, the second will be carrying a pack. The stronger you are, a) the less fatigued you will become, and hence the more fun you'll have, and b) the faster you'll be able to climb, so you're less likely to be benighted on the route and can enjoy a warm sleeping bag instead of a cold bivi.

Schlepping a Load up and Over

Also, with a bit of training, injuries can be prevented by creating a balance between the agonist and antagonist muscles. Without this balance, injuries become probable. And how many of our friends brag about how much they enjoy their latest injury? Think about how many of our friends complain about rotator cuff injuries, which could easily be avoided by strengthening the antagonist muscle groups.

The most important attributes are a) the mental game, b) technique, and c) strength and flexibility. Also not to be underestimated is diet. Even though diet can destroy a perfectly developed training regime, I will only recommend that you consult with a registered dietitian, as opposed to the "information" found on the web or other unreliable sources.

Training on the rock is important; it is here you will make your greatest gains. Climbing (rock, ice, or mixed) is a complex game involving technique, mental aspects and strength. I've seen climbers who get into training to climb to the point of forgetting about climbing. It's far better to get experience on all types of media.

The mental game and technique

The mental game is probably the most important and least appreciated aspect of climbing. Most of us (myself included) leave far too much on the table and short-change ourselves. While I won't discuss this at large (since I have so much more to learn about it myself), I will touch upon a few topics that I have learned from those I have been fortunate enough to learn from. To begin with, I will recommend a few resources: the first and most important is "The Rock Warrior's Way" by Arno Ilgner, and a nonclimbing-related book, "The Tao of Star Wars" by Dr. John Porter (Chief of Surgery in Tucson, who also holds a black belt).

Starting with the Tao of Star Wars, there are two important concepts imported from Eastern philosophy: the beginner's mind, and letting go. The beginner's mind is the belief that at any stage of the game you still have much to learn, no matter whether you've been at it for 1 year or 40 years. In believing that you are far from a master, you are open to new experiences and learning.
Conversely, if you believe you are a master at something, your mind will be limited in what it will learn. These books also strongly stress living a life where you concentrate on the journey and not the destination. In Ilgner's book, he stresses letting go of the ego and concentrating on the action, using the experience for the love of the action and of learning. In letting go, you are not a slave to expectations and preconceived notions. Too often we build something up in our heads, believing that only our heroes can do it, instead of poking our head around the corner ourselves and giving it a go. For a complete explanation, I strongly recommend the two books.

Also important is the idea that you should be able to do specific routes and that you are not limited by their difficulty. Just believing that you should be able to do a route won't get you up it without ability, determination or work. But believing you should is the first step, because without it you'll never try. And without trying, you'll never understand the possibility.

Technique will also not be discussed here, since it's a subject that has filled countless books. It matters because with good technique you will minimize your energy expenditure and minimize the strength required to do a series of moves.

Training on the rock

First, always make climbing fun. If it's not fun, your desire will diminish. But if you approach it a bit systematically, you will see your abilities improve dramatically. One method is to periodize your training calendar, concentrating in turn on endurance, power endurance, power and rest. Periodization as a training technique, while used by the climbing community for only a decade or so, has been utilized for many decades by other sports, such as competitive distance running. In periodizing your training you decrease your body's probability of injury by allowing it to prepare for each level of stress that you will place upon it.

No one move is hard, but taken together...

Endurance: Endurance is important for two reasons. It matters on those routes where no one specific move is difficult, but where the moves taken together make the route difficult; the routes at Indian Creek are a good example. Endurance training will also let your body adjust to stresses, preventing injuries as you start to work on more difficult problems.

I will take a chapter from the program that Vadim Vinokur used to progress to where he did. For those who do not know him, he is an immigrant to the US from Eastern Europe who is the king of climbing endurance and has become a 5.14d climber. When he started out years ago (at the time I was friends with the guys who helped train him, Les and Misha), he would start his workout by picking a section of the local climbing wall, and he would climb up, then downclimb, then climb up, then downclimb, and back up, ad nauseam (OK, for 30-45 minutes) without resting. He would then go do his climbing for the day, and warm down in a similar manner. When you begin this workout, adjust the difficulty of the route so that you can continue climbing but are quite fatigued after 15-20 minutes. For most of us, this will be between 5.6 and 5.11ish, whereas for Vadim it's 5.13. This type of training is most easily integrated into a program at a climbing gym, but it can be done outside too. There are other techniques to improve endurance, such as bouldering 4x4s, traverses and multiple pitches.
Outside, it is often easiest to simply attempt as many pitches in a day as possible. To start out, try to attempt 15 pitches, with a few testpieces near the beginning, and adjust the level as you become fatigued. If you are an intermediate or advanced climber and do this on lead, you will work your endurance a bit more, since you will spend more time on your arms. As you become fatigued your risk of accidents increases, so adjust your routes accordingly. As it becomes easier, increase the number of pitches you do in a day, with 30-35 pitch days offering a strenuous workout.

If you are without a partner, there are several things you can do bouldering. Long traverses can be utilized to work endurance. Many climbing gyms have bouldering caves, and annoying as they are, they can be used to your benefit. My local gym in Flagstaff often sets three traverse routes: the first around the bouldering wall, usually consisting of 50-60 moves, and the second and third (varying in difficulty) starting where the first one finishes, adding another 30-35 moves. Bouldering the entire length results in 80-90 move problems. Once you finish a lap, rest a few minutes and do a second lap. When this becomes easy, as you finish one lap, reverse direction and continue back.

You don't like climbing gyms? No problem. Do this along the base of your favorite cliff. Even buildings and road embankments will do. Ron Kauk, in "Fifty Favorite Climbs," states that he has a traverse problem at the base of Middle Cathedral (in Yosemite) that he does countless laps on. An old friend of mine used an embankment of cobblestones along (actually below) the West Side Highway in the Harlem section of New York City. The idea is to spend as much time as possible climbing in a single session.

Towards the end of the endurance phase you can also do 4x4s. Find a bouldering area which has several problems that you can do but that are slightly difficult for you, and that are long, not your usual 5-move problems. Boulder the first problem (hopefully in the 10-15 move range), but instead of just jumping off, downclimb an easier route to the base of a second problem. Without resting, start up the next problem. Continue until you have completed 4 boulder problems. Rest and then repeat.

Endurance training has two benefits: the endurance to work out specific moves, and adaptation to stress. Watching Vadim climb when he first started out, you would see that he often made mistakes in technique, but given his tremendous endurance he was able to reverse what he had done and correct his mistakes without becoming fatigued. The second benefit allows your body to adapt to a level of stress.

Way too many moves are getting hard

Power Endurance: Power endurance is important for those routes that have long, extended cruxes.

OK, a few of the moves are hard now

Power: For someone looking to climb long, moderate routes, power doesn't seem that important. But with it, you have the ability to move on the same terrain faster, tiring more slowly and moving more confidently. Someone once said you can never have too much power. There are many ways of developing power, though. Bouldering is the easiest way to concentrate on power, and it has been well covered in the climbing magazines; but some of us would rather sit in a dentist's chair undergoing tooth extractions without Novocaine. If you are like me, you may want to try something else. Pick a route, traditional or sport (it doesn't matter), which you know is more difficult than anything you have climbed before.
If your hardest route to date is 5.10, try a 5.11. Set a top rope on it. As you try the moves, initially they may be too difficult. As you repeat the moves, your body will develop an engram of them. After you are sufficiently frustrated, but before you have trashed yourself to the point where you risk injury, yard yourself up the rope and work a different section. Before you are too tired, allow yourself some success on routes that you have done and enjoy, as a treat. The next time you're climbing, try the same route again. As you have developed engrams for specific moves, they will become easier (you'll have them wired) and you will be able to start linking sections that you had previously failed on. This has many benefits: developing new techniques and developing muscular recruitment (which is discussed later).

Another technique that can be used to develop power is lockoff training, which was developed by a French sport climber in the 1980s-90s. Choose a climb that is difficult for you but still possible (it helps to be on a top rope for this one). Before each time you grab a hold, allow your hand to hover over the hold for 10-20 seconds before you grab it. This requires you to maintain a lockoff position on every move.

Rest: The most underappreciated phase, but as critical as any other. As your body develops engrams (neuromuscular memory which allows the muscles to contract efficiently for specific techniques), your body must also unlearn engrams that are inefficient. Some evidence indicates this occurs most efficiently during the rest phase. My own personal experience confirms this in a nonscientific manner. Recently, while recovering from a non-climbing-related injury (an injury resulting from tripping and landing on my hand), I had to take 3 months off. On my first day back on the rock I warmed up on a 5.11c/d crack that I would often get pumped out on and then fall off, almost as often as I could climb it cleanly. While my fitness was nowhere near what it normally would be, my technique was much cleaner.

Training off the rock

While the greatest gains in training are made on the rock, training off the rock offers two benefits: increased strength and a reduced probability of injury. Injury and overtraining are the two biggest roadblocks in development, and neither is as fun as being healthy. Climbing stresses predominantly the "pulling" muscles and the core; without strengthening the antagonists to these actions (i.e., the muscles responsible for pushing), an imbalance develops and with it the possibility of injury. What we will focus on here is strength and muscular recruitment.

As you develop muscular bulk through hypertrophy training, your strength gain will be at 60% of your mass gain. Because of this, in sports like American football or rugby, where you are moving someone else's mass around, mass is beneficial. In climbing, where you are moving your own mass around, it rapidly becomes a losing proposition. In climbing it is most beneficial to maintain your genetically determined ideal weight: too light, and you quickly lose power and set yourself up for a weakened immune system and a proneness to injury; too much mass, and your strength-to-mass ratio decreases.

Working the antagonists: When we think of strength training, too often we think of the muscles that we use while climbing and ignore those that oppose the motion, the antagonists. The antagonists are important for stability.
Ignore them and you can become injured, the most common result being rotator cuff injuries in the shoulder.

Those responsible for shoulder stability: Two strength training exercises that all climbers should do target the muscles responsible for stabilizing the shoulder. Rock and ice climbing puts extreme demands on the muscles of the back, and these forces are transferred through the shoulder; any instability will be accentuated. The two exercises are the internal and external shoulder rotations.

Internal Shoulder Rotation: Strengthens the subscapularis. To do this exercise, attach a therapy band (a glorified inner tube, which comes in different resistances) to a solid pole (a bed post will also work). With the elbow positioned at your side against the body and the forearm perpendicular to the bicep (see photo), grab the therapy band with the band attached to the support on the outside of the body. Now pull the band across the body while maintaining the perpendicular position. When the range of motion is completed, slowly allow the band to contract. Repeat.

[Attach photo for internal shoulder rotation.]

External Shoulder Rotation: This exercise strengthens the infraspinatus and teres minor (but do you really care?). In this case the band is attached to the pole at the same height as in the internal shoulder rotation, but the band crosses the body before you grab it with your hand, with the forearm perpendicular to the bicep. You will rotate, stretching the band across your body, until the band is expended with the arm located about 2 from the center of the body. Allow the band to slowly contract and repeat (see photo).

[Attach photo for external shoulder rotation.]

Climbing Related Motions: The most basic climbing-related training consists of variations on pullups; Eric Horst's "How to Climb 5.12" is a good reference on these. They can be grouped as pullups, frenchies and typewriters. Their absolute benefit has been much debated, but some of the best climbers continue to do them and continue to improve. It is very possible to climb significantly hard routes without being able to do a single pullup (Lisa, shown on Davidson's Dihedral on the Paradise Forks page, is an example: she leads 5.12a trad but can barely do a single pullup, though she will tell you that when it comes to a powerful route, she gets shut down). Some climbers with infamous pullup routines are the late Alex Lowe and Bill Ramsey (who made the first ascent of Omaha Beach in the Red River Gorge and is climbing 5.14 at 47 years old).

For the pullup, you will aim to do between 5 and 20 in a set. Many fitness centers have a weight-assisted pullup machine where you can stand on a bar and the machine will remove a specific amount of weight to allow you to achieve the desired number of repetitions. What you would like to do is grab the bar so that when your biceps are perpendicular to your chest, the forearm is perpendicular to the bicep [attach photo]. As you probably remember from grade school, what you are aiming to do is bring your nose or chin even with the bar. But unlike grade school, where you would drop like a stone between each pullup, your aim is to lower yourself in a moderately slow and controlled manner. Your elbows and shoulders will thank you, and in doing so you are approximating a "neg" (which will be described later), which will only make you stronger. As each set of repetitions becomes easier, allow the machine to remove less weight until the pullups are unassisted.
Allow your body to adjust for a significant period before you start adding weight, which will place significant stress on both the shoulders and elbows. [Attach photo.] Once this becomes easy, increase the diameter of the bar that you are doing pullups on (a larger bar can be fashioned by placing a 4" diameter pipe over an inner axle).

Large bar for pullups

This works the forearms, back and core (the core is required to stabilize yourself from rotating off the bar).

You can also work endurance: take a stopwatch, and once the timer starts, at every 1-minute interval do 4-10 pullups, where the minute interval runs from the start of one set to the start of the next set. Continue this for as long as you can, aiming at 60 minutes, which would result in between 240 and 600 pullups completed in a routine. (A minimal timer sketch for pacing this workout appears at the end of this article.)

Frenchies: Frenchies are just a modification of the pullup in which lockoff strength is accentuated.

Typewriters

Other training techniques

No one training technique works for everyone. Even though the basic principles of training are universal, everyone's body is different, with different genetic predispositions and different ratios of fast- to slow-twitch muscle. Because of this, no single program works for everyone. Other programs, which I won't describe, that you may want to use to supplement your training are:

Pilates: Properly done, you will work both the agonist and antagonist muscles along with your core. I would recommend finding a qualified instructor to work with you, and remember that not all instructors are equal.

Modern Dance: Now don't laugh. Modern dance will develop your active stretch while developing or improving your understanding of body position. The climber with the best technique I have ever seen, and a damn good climber all around, had studied modern dance as a college student.

Yoga: Obviously improves your flexibility. Find a good qualified instructor for this one also.

Parting Thoughts

A friend of mine who teaches business in San Francisco points out that as businesses grow, they usually have periods of difficulty. They believe that the techniques and strategies that served them well in the past will continue to serve them well in the future. What they forget is that as their environment, size and abilities change, they have to change also. Training is like business in this regard. All too often we find a routine that works well for us, but what is important is to know when to change what you are doing.

Other reading

Arno Ilgner, "The Rock Warrior's Way"
Dale Goddard and Udo Neumann, "Performance Rock Climbing"
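As promised above, here is a minimal Python sketch for pacing the stopwatch pullup workout described in the endurance section. It is purely illustrative: the function name and the default rep count are my own, not the author's.

```python
import time

def pullup_interval_workout(reps_per_set: int = 6, minutes: int = 60) -> None:
    """Call out one set of pullups at the start of every minute,
    where the interval runs from the start of one set to the start
    of the next, as described in the article."""
    total = 0
    start = time.monotonic()
    for minute in range(minutes):
        print(f"Minute {minute + 1}: do {reps_per_set} pullups now")
        total += reps_per_set
        # Sleep until the next whole minute, measured from the session
        # start, so a slow set doesn't drift the schedule.
        next_tick = start + 60 * (minute + 1)
        time.sleep(max(0.0, next_tick - time.monotonic()))
    print(f"Done: {total} pullups in {minutes} minutes")

if __name__ == "__main__":
    # 4-10 reps per minute over 60 minutes gives 240-600 total pullups,
    # matching the range quoted in the article.
    pullup_interval_workout()
```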
https://www.summitpost.org/training-specific-for-climbing/287439
For Famed Rock Climber, A 'Big Break' That Thankfully Wasn't Literal

Alex Honnold has scaled the sheer face of Half Dome in Yosemite, Calif., without the aid of ropes, and entirely alone. So it's crucial that the handholds he uses do not break beneath his grip. Still, he says his big break didn't arrive until a TV producer approached him.

He does credit that one moment, happily, with being his big break. "The first thing I think of is being featured on 60 Minutes. It's probably the one thing that set me on the path to devoting my life to being a professional climber," Honnold says. "I was very much a dirtbag climber living in my van, and this producer from 60 Minutes approached me and said, 'This is going to change your life. This will be the biggest thing you've ever done.' "

"I often joke that I've become a professional schmoozer," he says. "Like nobody really cares how well I can rock-climb anymore; they just care how well I can schmooze."

But before the hype, Honnold was a college dropout, living out of his mother's borrowed minivan, driving from climb to climb. And schmoozing hasn't always come easily for him, either. As a kid growing up in Sacramento, Calif., he was too shy to approach strangers at the climbing gym. One side effect of this shyness was that he got used to rock climbing by himself, without a rope.

"I suppose being a bit of an antisocial weirdo definitely honed my skills as a soloist. It gave me a lot more opportunity to solo lots of easy routes, which in turn broadened my comfort zone quite a bit and has allowed me to climb the harder things without a rope that I've done now."

But there was one climb in particular that tested his limits and brought him to the attention of the climbing community, and eventually the wider world: when he decided to climb Half Dome in California's Yosemite Valley. "It's a route that everybody aspires to and physically looks up to, because anywhere in Yosemite you look up at that wall."

"Basically I was able to climb the wall on autopilot. I'd already made all of the hard decisions," Honnold says. "But then, basically my autopilot started to run out by the time I got to the top, because I was just starting to get tired, and it was hard to maintain that focus."

Near the top of the wall, just 100 feet from safety, he heard the laughter of day hikers who had reached Half Dome's peak on a far easier ascent. The sounds of others snapped Honnold out of his fade. "I just took some deep breaths, and finally said 'This is what I have to do. I'm going to trust this foot,' and then I just stood up on the foothold and that was that."

He completed the climb that day, eventually even walking among those same day hikers he'd heard while climbing.
https://www.npr.org/2016/01/03/459977784/for-famed-rock-climber-a-big-break-that-thankfully-wasnt-literal
Rationale: Although substantial scientific evidence suggests that chronic exposure to ambient air pollution contributes to premature mortality, uncertainties exist in the size and consistency of this association. Uncertainty may arise from inaccurate exposure assessment.

Objectives: To assess the associations of three types of air pollutants (fine particulate matter, ozone [O3], and nitrogen dioxide [NO2]) with the risk of mortality in a large cohort of California adults using individualized exposure assessments.

Methods: For fine particulate matter and NO2, we used land use regression models to derive predicted individualized exposure at the home address. For O3, we estimated exposure with an inverse distance weighting interpolation. Standard and multilevel Cox survival models were used to assess the association between air pollution and mortality.

Measurements and Main Results: Data for 73,711 subjects who resided in California were abstracted from the American Cancer Society Cancer Prevention II Study cohort, with baseline ascertainment of individual characteristics in 1982 and follow-up of vital status through to 2000. Exposure data were derived from government monitors. Exposure to fine particulate matter, O3, and NO2 was positively associated with ischemic heart disease mortality. NO2 (a marker for traffic pollution) and fine particulate matter were also associated with mortality from all causes combined. Only NO2 had a significant positive association with lung cancer mortality.

Conclusions: Using the first individualized exposure assignments in this important cohort, we found positive associations of fine particulate matter, O3, and NO2 with mortality. The positive associations of NO2 suggest that traffic pollution relates to premature death.

Several cohort studies have examined whether long-term exposure to air pollution is associated with premature death. The results of these studies have been mixed, possibly due to errors introduced in the exposure assessment process. To address this potential problem, this study assigned members of the American Cancer Society Cancer Prevention Study II cohort residing in California more precise exposure assignments at their home address using advanced exposure models. The study provides the first evidence that ozone is significantly associated with cardiovascular mortality, particularly from ischemic heart disease; shows a strong association between nitrogen dioxide (NO2) and lung cancer; and demonstrates that fine particulate matter with aerodynamic diameter of 2.5 μm or less (PM2.5) and NO2 are independently associated with premature death from all causes and cardiovascular disease. The findings from this study confirm earlier evidence on PM2.5 associations with mortality and markedly expand the evidence base on associations between ozone or NO2 and premature death.

A substantial body of evidence suggests that long-term exposure to combustion-related air pollution contributes to the development of chronic disease and can lead to premature death (1–6). Exposure to air pollution affects huge populations globally; as a result, the public health impact can be large (7, 8). Using data from the American Cancer Society's (ACS) Cancer Prevention Study II (CPS-II), a nationwide cohort study of nearly 1.2 million adults who have been followed for mortality since 1982, several studies have been published examining associations of metropolitan-level air pollution and mortality (3, 9–11).
In those studies, exposure data were derived at the metropolitan scale, relying on between-city exposure contrasts using central monitor data. In addition, two studies using CPS-II data evaluated within-city (i.e., Los Angeles and New York) exposure contrasts in fine particulate matter with aerodynamic diameter of 2.5 μm or less (PM2.5) (2, 3). Both studies assigned exposure to the ZIP code postal area of residence, but in the study from Los Angeles (2) the PM2.5–mortality dose–response relationship was stronger than that for the full nationwide cohort, and in the study from New York City it was weaker (3). Although the ZIP code areas were more specific than the metropolitan area, they may have introduced error in the exposure assignment that led to the inconsistent results.

Another recent study based on individualized exposures found little association between PM2.5 exposure and mortality in a cohort of male health professionals (12); however, in that study, if home address records were missing, then workplace addresses were used for exposure assignment, possibly leading to measurement error. Conversely, an earlier study based on a large cohort of nurses reported strong and significant associations of PM2.5 with mortality, using essentially the same exposure model but with complete home address information for exposure assignment (13).

Viewed together, these findings suggest that uncertainties in the characterization of the dose–response relationship may be due partly to errors in exposure estimates arising from the lack of specificity of the coordinates used to link addresses to the exposure estimates. A need therefore exists to investigate how individualized estimates of exposure at the home address influence the observed dose–response function.

In the present analysis, individualized exposure estimates were developed and assigned to the home address for more than 73,000 California residents enrolled in CPS-II. These estimates were used to assess the association of three types of air pollutants (PM2.5, ozone [O3], and nitrogen dioxide [NO2]) with risk of mortality. We also sought to understand the joint effects of the pollutants in co- and multipollutant models. Although CPS-II is a nationwide cohort, we limited this analysis to California because the state has a wide range of pollution exposures and a good monitoring network.

The ACS CPS-II cohort was enrolled in 1982 (details are presented in References 3 and 14). For the purposes of this paper, vital status was ascertained through to 2000. Subjects with valid postal addresses had their residential locations geocoded. After limiting to residence in the State of California and making exclusions for missing data on key covariates, there were 73,711 subjects available for analysis.

We assigned exposure for PM2.5, NO2, and O3. Monthly average monitoring data for PM2.5 were available at 112 sites between 1998 and 2002. NO2 and O3 data were available over the period 1988 to 2002 at 138 and 262 sites, respectively. PM2.5 and NO2 exposures were assessed using land use regression (LUR) models that were selected from more than 70 possible land use covariates (15). The PM2.5 model included an advanced remote sensing model coupled with atmospheric modeling (16). LUR models were selected with the deletion/substitution/addition algorithm (17), which aggressively tests nearly all polynomial covariate combinations and uses v-fold cross-validation to evaluate potential models.
In this instance of v-fold cross-validation, the data are first partitioned into 10 roughly equal parts (i.e., folds). The model is then trained on nine folds and cross-validated on the left-out fold. This is repeated 10 times, so that every fold is used as a cross-validation data set. This model selection method avoids the potential problems of overfitting on all the data, or of overfitting on a large training set and then using a single cross-validation subset (details are presented in References 15 and 18).

For O3, we extracted monthly averaged values from 1988 to 2002 and calculated inverse distance weighting (IDW) interpolations using all operational monitoring sites within a 50-km radius in any particular month, with the decay parameter set to the inverse of the square of the distance. Estimates for all pollutants were then assigned to the geocoded baseline residential addresses of the CPS-II subjects, and the monthly values were averaged over the entire time period available.

We used a comprehensive set of individual risk factor variables operationalized through 42 covariates, similar to those used in previous studies of the CPS-II cohort (3, 18). Individual-level variables controlled for lifestyle, dietary, demographic, occupational, and educational factors, and ecological variables extracted from the 1990 US Census for the ZIP code of residence were used to control for potential "contextual" neighborhood confounding (including unemployment, poverty, income inequality, and racial composition).

We assessed the association between air pollution and mortality using standard and multilevel Cox proportional hazards regression models. Control for place of residence was also applied in the five largest conurbations (defined by the four consolidated metropolitan statistical areas of California and the metropolitan statistical area of San Diego), which potentially have lower mortality rates than nonmetropolitan areas. This pattern is consistent with what has been termed the "nonmetropolitan mortality penalty," where nonmetropolitan areas tend to have higher death rates compared with metropolitan areas (19). Because metropolitan areas generally have higher pollution, failure to control for residence in large urban areas has the potential to confound associations between mortality and air pollution.

We evaluated the association between air pollution and several causes of death, including cardiovascular disease (CVD), ischemic heart disease (IHD), stroke, respiratory disease, and lung cancer. We also evaluated "all other" causes of death, excluding the preceding causes, to serve as a negative control. Finally, we evaluated mortality from all causes combined.

Table 1 compares characteristics of the nationwide CPS-II cohort used in previous analyses with the subset selected for this analysis (a detailed description of exclusions and sample selection is provided in Reference 18). Minor differences in alcohol consumption and education are apparent, but overall the California cohort appears to have characteristics similar to the nationwide cohort. Subjects included in this analysis were widely distributed across California, giving comprehensive coverage of much of the State's population (54 of 58 California counties were represented).
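For concreteness, the O3 interpolation described above can be written out explicitly. This is a reconstruction from the verbal description (monthly monitor means, inverse-square-distance weights, 50-km radius); the text itself does not print the formula:

$$\hat{C}(s_0) \;=\; \frac{\sum_{i:\, d(s_0,s_i)\le 50\,\mathrm{km}} d(s_0,s_i)^{-2}\, C(s_i)}{\sum_{i:\, d(s_0,s_i)\le 50\,\mathrm{km}} d(s_0,s_i)^{-2}}$$

where $C(s_i)$ is the monthly mean O3 concentration at operational monitor $i$, $d(s_0,s_i)$ is its distance from the geocoded home address $s_0$, and the monthly estimates $\hat{C}(s_0)$ are then averaged over 1988–2002.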
Table 1. Characteristics of the nationwide and California CPS-II cohorts

| Variable | Nationwide | California |
| --- | --- | --- |
| Participants, n | 485,426 | 73,711 |
| Died from, %: All causes | 26.4 | 26.8 |
| CPD | 13.1 | 13.6 |
| CVD | 10.9 | 10.9 |
| IHD | 6.1 | 6.2 |
| Respiratory | 2.2 | 2.7 |
| Lung cancer | 2.0 | 2.0 |
| All other causes | 11.3 | 11.2 |
| Mean (SD) age, yr | 56.6 (10.5) | 57.4 (10.6) |
| Female, % | 56.6 | 56.2 |
| White, % | 94.2 | 91.6 |
| Education, %: <High school | 12.1 | 8.7 |
| High school | 31.3 | 22.9 |
| >High school | 56.6 | 68.4 |
| Alcohol consumption, %: Beer | 22.9 | 24.1 |
| No beer | 9.5 | 10.9 |
| Missing beer | 67.6 | 65.0 |
| Liquor | 27.6 | 35.1 |
| No liquor | 8.7 | 8.9 |
| Missing liquor | 63.7 | 56.0 |
| Wine | 23.1 | 37.3 |
| No wine | 8.9 | 7.7 |
| Missing wine | 68.0 | 55.0 |
| Current smoker, % | 21.6 | 19.4 |
| Cigarettes per day | 22.1 (12.4) | 21.5 (12.6) |
| Years of smoking | 33.5 (11.0) | 34.1 (11.4) |
| Former smoker, % | 25.9 | 28.9 |
| Cigarettes per day | 21.4 (14.7) | 20.8 (14.7) |
| Years of smoking | 22.2 (12.6) | 22.1 (12.7) |
| Started <18 yr (current smoker), % | 8.9 | 7.7 |
| Started <18 yr (former smoker), % | 10.0 | 10.3 |
| Hours per day exposed to smoking | 3.2 (4.4) | 2.7 (4.1) |

Table 2 shows the mean, variance, and percentiles of each pollutant as estimated by the different models used in this study. All models display considerable variation in the exposures assigned to the home address. Most pollutants show moderate to high positive correlations (Table 3). The exception is the correlation between interpolated ozone and NO2 estimates, which is weakly negative.

Table 2. Distribution of exposure estimates assigned to home addresses (percentiles)

| Air pollutant | Subjects (n) | Mean | Variance | 0 | 5 | 10 | 25 | 50 | 75 | 90 | 95 | 100 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PM2.5 LUR, μg/m3 | 73,711 | 14.09 | 12.42 | 4.25 | 8.29 | 9.45 | 11.60 | 14.03 | 16.90 | 18.42 | 19.36 | 25.09 |
| NO2 LUR, ppb | 73,711 | 12.27 | 8.54 | 3.04 | 7.93 | 8.81 | 10.21 | 12.12 | 14.33 | 16.22 | 17.09 | 21.94 |
| Ozone IDW, ppb | 73,711 | 50.35 | 212.18 | 17.11 | 28.81 | 31.13 | 36.83 | 50.80 | 61.00 | 68.56 | 74.18 | 89.33 |

Table 3. Correlations between exposure estimates

| | PM2.5 LUR | NO2 LUR |
| --- | --- | --- |
| PM2.5 LUR | — | — |
| NO2 LUR | 55.10 | — |
| Ozone IDW | 55.81 | −0.71 |

Estimates of adjusted relative risk (RR) and 95% confidence intervals (CIs) are reported in Table 4. All RR estimates are given over the interquartile range of each pollutant. We assessed residual spatial autocorrelation in the health effect estimates with a multilevel Cox model (3). Because the multilevel clustering and autocorrelation analysis had minimal impact on the risk estimates, only results for the standard Cox models are reported.

Table 4. Adjusted RR (95% CI) of death in single-pollutant models, by cause

| Air pollutant | All causes (n = 19,733) | Cardiovascular (n = 8,046) | Ischemic heart (n = 4,540) | Stroke (n = 3,068) | Respiratory (n = 1,973) | Lung cancer (n = 1,481) | All others (n = 8,233) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PM2.5 LUR | 1.032 (1.002–1.062)* | 1.064 (1.016–1.114) | 1.111 (1.045–1.181) | 1.065 (0.988–1.148) | 1.046 (0.953–1.148) | 1.062 (0.954–1.183) | 0.994 (0.950–1.040) |
| NO2 LUR | 1.031 (1.008–1.056) | 1.048 (1.010–1.087) | 1.066 (1.015–1.119) | 1.078 (1.016–1.145) | 0.999 (0.927–1.077) | 1.111 (1.020–1.210) | 1.009 (0.973–1.046) |
| Ozone IDW | 0.998 (0.960–1.036) | 1.045 (0.986–1.109) | 1.104 (1.021–1.194) | 1.011 (0.919–1.112) | 1.017 (0.902–1.147) | 0.861 (0.747–0.992) | 0.967 (0.911–1.027) |

For PM2.5, we observed significantly elevated RRs for mortality from all causes (RR, 1.032; 95% CI, 1.002–1.062), CVD (RR, 1.064; 95% CI, 1.016–1.114), and IHD (RR, 1.111; 95% CI, 1.045–1.181). Deaths from stroke, respiratory causes, and lung cancer had positive RRs with less precision and CIs that included unity. No association was present with other causes.
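As a reading aid for Table 4: under the Cox proportional hazards models described above, reporting an RR "over the interquartile range" conventionally means scaling the fitted log-hazard coefficient by the pollutant's IQR from Table 2. This is the standard construction, stated here as an assumption since the paper does not write it out:

$$\lambda(t \mid x, \mathbf{z}) = \lambda_0(t)\,\exp\!\big(\beta x + \boldsymbol{\gamma}^{\top}\mathbf{z}\big), \qquad \mathrm{RR}_{\mathrm{IQR}} = \exp\!\big(\hat{\beta}\,(x_{75}-x_{25})\big)$$

where $x$ is the pollutant exposure, $\mathbf{z}$ the 42 individual and ecological covariates, and $x_{75}-x_{25}$ the pollutant's interquartile range (e.g., $16.90 - 11.60 = 5.30$ μg/m3 for PM2.5 in Table 2).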
NO2 was significantly and positively associated with all-cause (RR, 1.031; 95% CI, 1.008–1.056), CVD (RR, 1.048; 95% CI, 1.010–1.087), IHD (RR, 1.066; 95% CI, 1.015–1.119), stroke (RR, 1.078; 95% CI, 1.016–1.145), and lung cancer (RR, 1.111; 95% CI, 1.020–1.210) mortality. Respiratory deaths and those from all other causes were not associated with NO2.

Although there was no association between O3 and all-cause mortality, there was a positive association with CVD mortality (RR, 1.045; 95% CI, 0.986–1.109) and a significantly elevated risk for IHD death (RR, 1.104; 95% CI, 1.021–1.194). O3 had positive associations with stroke and respiratory deaths that lacked precision, and a marginally significant negative association with deaths from lung cancer. There was no association with other causes.

We compared the risk estimates obtained from single-pollutant models with risk estimates from two-pollutant and multipollutant models (Table 5). In models that included PM2.5 and NO2, the PM2.5 associations with mortality from all causes were reduced to about half the size of those in the single-pollutant models, and the estimates became insignificant. When O3 and PM2.5 were included in the same all-cause mortality model, the effects of PM2.5 remained significantly elevated and became slightly larger. A similar pattern was observed with CVD and IHD, where the effects of PM2.5 were attenuated by NO2 but remained unchanged in the presence of the O3 estimates (Figure 1).

Table 5. Adjusted RR (95% CI) of death in two-pollutant and multipollutant models, by cause

| Air pollutant | All causes (n = 19,733) | Cardiovascular (n = 8,046) | Ischemic heart (n = 4,540) | Stroke (n = 3,068) | Respiratory (n = 1,973) | Lung cancer (n = 1,481) | All others (n = 8,233) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PM2.5 + NO2 model: | | | | | | | |
| PM2.5 LUR | 1.015 (0.980–1.050)† | 1.043 (0.989–1.101) | 1.090 (1.015–1.170) | 1.019 (0.934–1.112) | 1.064 (0.954–1.185) | 0.985 (0.867–1.119) | 0.984 (0.933–1.038) |
| NO2 LUR | 1.025 (0.997–1.054) | 1.030 (0.987–1.075) | 1.029 (0.972–1.090) | 1.070 (0.998–1.147) | 0.973 (0.891–1.063) | 1.118 (1.010–1.236) | 1.016 (0.973–1.060) |
| PM2.5 + O3 model: | | | | | | | |
| PM2.5 LUR | 1.035 (1.004–1.067) | 1.057 (1.008–1.109) | 1.093 (1.027–1.165) | 1.067 (0.987–1.153) | 1.045 (0.949–1.151) | 1.103 (0.985–1.234) | 1.002 (0.955–1.050) |
| Ozone IDW | 0.985 (0.947–1.025) | 1.025 (0.964–1.089) | 1.070 (0.987–1.161) | 0.988 (0.894–1.091) | 1.001 (0.883–1.134) | 0.832 (0.719–0.964) | 0.966 (0.908–1.029) |
| NO2 + O3 model: | | | | | | | |
| NO2 LUR | 1.032 (1.008–1.057) | 1.055 (1.016–1.095) | 1.082 (1.029–1.137) | 1.082 (1.019–1.150) | 1.001 (0.928–1.080) | 1.097 (1.006–1.196) | 1.006 (0.970–1.043) |
| Ozone IDW | 1.006 (0.968–1.046) | 1.062 (1.000–1.127) | 1.132 (1.045–1.227) | 1.034 (0.938–1.140) | 1.017 (0.901–1.149) | 0.882 (0.764–1.019) | 0.968 (0.912–1.029) |
| Three-pollutant model: | | | | | | | |
| PM2.5 LUR | 1.015 (0.977–1.055) | 1.024 (0.965–1.086) | 1.048 (0.969–1.133) | 1.008 (0.915–1.110) | 1.070 (0.949–1.207) | 1.040 (0.902–1.198) | 0.995 (0.938–1.056) |
| NO2 LUR | 1.025 (0.995–1.056) | 1.044 (0.996–1.093) | 1.059 (0.995–1.126) | 1.079 (1.000–1.163) | 0.969 (0.881–1.066) | 1.078 (0.967–1.201) | 1.008 (0.963–1.056) |
| Ozone IDW | 0.999 (0.957–1.042) | 1.050 (0.982–1.122) | 1.106 (1.012–1.209) | 1.031 (0.925–1.149) | 0.984 (0.860–1.126) | 0.866 (0.739–1.015) | 0.971 (0.908–1.038) |

The NO2 associations with CVD and IHD were attenuated when PM2.5 was included in the model, but they became slightly larger when O3 was included. O3 continued to show elevated risks for CVD and IHD in the two-pollutant models with either NO2 or PM2.5 included. For respiratory deaths, PM2.5 continued to have elevated but insignificant risk estimates, whereas neither of the other pollutants was associated with respiratory mortality.
For lung cancer, NO2 consistently displayed significantly elevated risks in two-pollutant models. When combined with O3, PM2.5 associations with lung cancer increased but remained insignificant. In models containing all three pollutants, NO2 had the strongest associations with all-cause, CVD, and lung cancer mortality, whereas PM2.5 tended to have stronger effects on deaths from IHD. Intercorrelations among the various pollutants, however, likely contribute to bias in individual pollutant risk estimates in such simultaneous-pollutant models, so these results must be interpreted with caution. In multipollutant models, PM2.5 continued to produce elevated risks for all-cause, CVD, IHD, and respiratory mortality, but none of these estimates were statistically significant. O3 had elevated risks for CVD and remained a significant predictor of IHD deaths even with the other pollutants in the model. There was little evidence of associations with the other causes of death in the two-pollutant or multipollutant models.

Figure 1 presents results from cumulative risk index (CRI) models for CVD and IHD mortality that show the extent to which one pollutant confounds the others (details of the CRI methods are provided in the online supplement). Comparisons of CRIs based on combinations of pollutants estimated jointly and independently can also provide a means of understanding the joint impacts of the atmospheric mixture on survival. For example, with CVD mortality, the combined hazard ratio (HR) of NO2 and O3 assuming independence is 1.048 × 1.045 = 1.095. However, the combined HR based on the two-pollutant survival model is 1.121, suggesting a synergy of effect among the pollutants. A similar pattern of synergy is also observed for IHD mortality.

Such a comparative assessment is illustrated in Figure 1 for the three pollutants (NO2, O3, and PM2.5) and two causes of death (CVD and IHD). The HRs, evaluated at the respective interquartile ranges of the three pollutants, are presented singly, based on the three possible two-pollutant models, and based on the single three-pollutant model. There is some modest increase in the CRI for models containing PM2.5 and either NO2 or O3 compared with each of the single-pollutant models. The model with NO2 and O3, however, yields a larger CRI than either of the other two-pollutant models and one similar to that of the three-pollutant model, suggesting that a combination of NO2 and O3 is sufficient to characterize the toxicity of the pollutant mixture in this study, at least with respect to the three pollutants considered. The CRI implies that there is little marginal contribution to CVD and IHD mortality from the addition of PM2.5 in the presence of the mixture represented by NO2 and O3. We also caution that in this interpretation the CIs clearly overlap each of the CRIs we have calculated. This limits our ability to infer the set of minimally sufficient pollutants required to fully capture the toxicity of the atmosphere in California.
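To make the synergy argument explicit, the CVD comparison above amounts to checking the jointly estimated CRI against the product of the single-pollutant hazard ratios expected under independent action, using the paper's own numbers:

$$\mathrm{HR}_{\mathrm{NO_2}} \times \mathrm{HR}_{\mathrm{O_3}} = 1.048 \times 1.045 \approx 1.095 \;<\; 1.121 = \mathrm{CRI}_{\mathrm{NO_2+O_3}}$$

The jointly estimated two-pollutant CRI exceeding the independence product is the pattern read here as synergy, subject to the caveat noted above that the CIs of these CRIs overlap.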
We sought to estimate the effects of three criteria air pollutants on premature death in California. This study was motivated by earlier research from Los Angeles showing that PM2.5 exerted a large, significant effect on all-cause mortality and mortality from CVD. Other studies, including those based on data from the ACS CPS-II, showed heterogeneous health effect estimates that potentially resulted from a lack of precision in the exposure assessment. To address this problem, we developed detailed exposure assessment models that included auxiliary information and assigned the resulting estimates of exposure to the baseline residential address of more than 73,000 subjects with valid data from the ACS CPS-II cohort.

Several important results deserve mention. First, the associations of PM2.5 with all-cause and cardiovascular mortality are consistent with those reported from our previous analyses of the full, nationwide CPS-II cohort (3). Table 6 shows that results for all-cause, CVD, and IHD mortality from the current study are similar to, although slightly weaker than, those from the study of the nationwide cohort. The difference in exposure metrics had little impact on the risk estimates for PM2.5. We also fit models specifically for Los Angeles to compare with earlier results (2). Although the sample size is different here due to limitations in the geocoding, the results show that the effects in Los Angeles continue to be higher than those in the national study or in the rest of the state. We also examined the dose–response function for nonlinearity, because levels in Los Angeles are generally higher than in many other parts of the state, but we found no evidence of nonlinearity based on visual inspection of spline plots and formal measures of model fit (Akaike and Bayesian information criteria; results not shown). This suggests that the population of Los Angeles is more susceptible to air pollution, that the air pollution there is more toxic, or both.

Table 6. Adjusted RR (95% CI) of death associated with PM2.5: California cohort, nationwide cohort, and Los Angeles only

| Cause of death | California† | National level‡ | Los Angeles only† |
| --- | --- | --- | --- |
| All-cause | 1.060 (1.003–1.120)§ | 1.065 (1.035–1.096) | 1.104 (0.968–1.260) |
| CVD | 1.122 (1.030–1.223) | 1.141 (1.086–1.198) | 1.124 (0.918–1.375) |
| IHD | 1.217 (1.085–1.365) | 1.248 (1.160–1.342) | 1.385 (1.058–1.814) |

The strongest associations with mortality appear to be for exposures that are markers of traffic-related air pollution. The largest predictors of NO2 in the LUR model were measures of roadway length near the monitors, although we cannot rule out other contributions to the modeled concentrations, such as heating and industrial sources, particularly given the generally higher concentrations of NO2 during the winter, when home heating contributes to emissions of NO2 precursors (20). This exposure measure demonstrated significant associations with all-cause, CVD, IHD, and lung cancer mortality. In multipollutant models, these associations remained elevated but became insignificant in some models, possibly due to multicollinearity among the pollutants. We also examined direct measures of proximity to roadways in earlier studies (18) and found that these markers of traffic had positive coefficients, but the findings were null, suggesting that the improved exposure estimates from the LUR model may have reduced exposure measurement error.

Our results are broadly consistent with several studies from Europe in which NO2 exposure was positively associated with mortality (21, 22). In an American study of male truck drivers, NO2 was found to be independently associated with all-cause and cause-specific mortality even after controlling for occupational exposures (23). In a comprehensive review by the Health Effects Institute, effects of traffic-related pollution on mortality were identified as suggestive but insufficient to establish a causal association (24).
When viewed in the context of the emerging literature, our results strengthen the evidence base on the effects of traffic-related air pollution on mortality.

Although acute exposure to O3 has been related to mortality (25), here we observed a significant positive association between long-term O3 exposure and CVD mortality, notably for IHD. The strength of association for O3 was similar to that of PM2.5 and NO2. The association of O3 with IHD was mildly confounded by PM2.5; however, the two exposures had a moderately high correlation, and, given the extensive auxiliary information in the PM2.5 model, the PM2.5 estimates may have dominated by virtue of lower exposure measurement error (26). Nevertheless, O3 continued to exhibit a significant association with IHD, even with PM2.5 in the model.

Positive RR estimates for O3 became larger when NO2 was included in the model (see Figure 1). We hypothesize that this results from the negative correlation between the two pollutants due to atmospheric chemistry, such that in areas where O3 is high, NO2 tends to be low, and vice versa (27, 28). If both pollutants represent harmful constituents of the complex mixture of ambient air pollution, each would contaminate the comparison group of "clean" atmospheres used when assessing the risk of the other pollutant. In such instances, the comparison groups with lower levels of one pollutant may also have higher mortality, resulting in part from higher levels of the other pollutant, which occupies the opposite spatial pattern. We found a negative, significant association between O3 and lung cancer, which became insignificant when NO2 was included in the model. Together, these findings suggest the importance of having both O3 and NO2 in models that attempt to predict health effects from either pollutant. We did observe a weak negative correlation between the two pollutants overall; however, subsequent analyses showed that in four of the five major urban regions of California, NO2 had moderately high negative correlations with O3 (details are provided in the online supplement), which supports the possibility of the positive confounding we have observed here and the hypothesis that both pollutants need to be in the model for correct inference on either.

Unlike previous analyses (14), we did not see a significant association between respiratory disease and O3. In the present analysis, however, the number of respiratory deaths was much smaller than in the earlier national study. The point estimate here was elevated and of similar size to that reported in an earlier analysis of the nationwide cohort (3); consequently, the lack of a significant association may have resulted from the lower event numbers. In contrast to earlier results, PM2.5 did have a positive association with respiratory mortality, which tended to get stronger with the inclusion of copollutants, particularly O3. In the correlational analyses done by major urban region (see Appendix), we observed significant negative correlations between O3 and PM2.5, suggesting again the potential for positive confounding.

Several strengths and limitations merit mention. For NO2 and PM2.5, we used advanced exposure assessment models, informed by auxiliary information, that had good predictive capacity. These models, however, were based on government monitoring data, and the placement of the government monitoring sites might be less representative of all exposure domains because they are chosen to represent background conditions.
Several strengths and limitations merit mention. For NO2 and PM2.5, we used advanced exposure assessment models, informed by auxiliary information, that had good predictive capacity. These models, however, were based on government monitoring data, and the placement of the government monitoring sites may be less representative of all exposure domains because the sites are chosen to represent background conditions. For the most part, near-road environments are not well represented in this network, limiting the ability to predict small-area variations near roadways. Our estimates of O3 exposure likely do not capture the small-area variation that can occur in open-space areas and other areas away from roadways (27). Nonetheless, by assigning exposures that vary among individuals within cities, this study extends the applicability of the risk estimates to within-city health impact assessments, which are increasingly used to quantify the health benefits of urban planning and climate mitigation interventions (29, 30).

Regarding limitations, no follow-up surveys were conducted in the full CPS-II, and key lifestyle characteristics may have changed during the follow-up (e.g., smoking rates declined precipitously across California between 1982 and 2000) (31). If the declines in smoking rates were spatially associated with the air pollution levels, they would have the potential to confound our air pollution risk estimates. We also lacked information on mobility during the follow-up and on key microenvironments such as in-transit exposures, which contribute substantially to interindividual variability in air pollution exposures (32).

In conclusion, our results suggest that several components of the combustion-related air pollution mixture are significantly associated with increased all-cause and cause-specific mortality. Associations with CVD deaths in general, and with IHD in particular, stand out as the most consistent in our analyses. The strong associations of NO2 with all-cause, CVD, and lung cancer mortality are suggestive of traffic-related pollution as a cause of premature death. The potential for positive confounding between O3 and NO2 requires increased attention in future research. Given the indications that O3 may relate significantly to CVD mortality, future research should aim to refine O3 exposure assessment and lower its measurement error. In sum, the associations observed here reduce key uncertainties regarding the relationship between air pollution and mortality and confirm that air pollution is a significant risk factor for mortality.

References

1. Brook RD, Rajagopalan S, Pope CA III, Brook JR, Bhatnagar A, Diez-Roux AV, Holguin F, Hong Y, Luepker RV, Mittleman MA, et al.; American Heart Association Council on Epidemiology and Prevention, Council on the Kidney in Cardiovascular Disease, and Council on Nutrition, Physical Activity and Metabolism. Particulate matter air pollution and cardiovascular disease: an update to the scientific statement from the American Heart Association. Circulation 2010;121:2331–2378.
2. Jerrett M, Burnett RT, Ma R, Pope CA III, Krewski D, Newbold KB, Thurston G, Shi Y, Finkelstein N, Calle EE, et al. Spatial analysis of air pollution and mortality in Los Angeles. Epidemiology 2005;16:727–736.
3. Krewski D, Jerrett M, Burnett RT, Ma R, Hughes E, Shi Y, Turner MC, Pope CA III, Thurston G, Calle EE, et al. Extended follow-up and spatial analysis of the American Cancer Society study linking particulate air pollution and mortality. Res Rep Health Eff Inst 2009;140:5–114, discussion 115–136.
4. Pope CA III, Burnett RT, Thurston GD, Thun MJ, Calle EE, Krewski D, Godleski JJ. Cardiovascular mortality and long-term exposure to particulate air pollution: epidemiological evidence of general pathophysiological pathways of disease. Circulation 2004;109:71–77.
5. Pope CA III, Burnett RT, Turner MC, Cohen A, Krewski D, Jerrett M, Gapstur SM, Thun MJ. Lung cancer and cardiovascular disease mortality associated with ambient air pollution and cigarette smoke: shape of the exposure-response relationships. Environ Health Perspect 2011;119:1616–1621.
6. Chen H, Goldberg MS, Villeneuve PJ. A systematic review of the relation between long-term exposure to ambient air pollution and chronic diseases. Rev Environ Health 2008;23:243–297.
7. Pope CA III, Dockery DW. Health effects of fine particulate air pollution: lines that connect. J Air Waste Manag Assoc 2006;56:709–742.
8. Lim SS, Vos T, Flaxman AD, Danaei G, Shibuya K, Adair-Rohani H, Amann M, Anderson HR, Andrews KG, Aryee M, et al. A comparative risk assessment of burden of disease and injury attributable to 67 risk factors and risk factor clusters in 21 regions, 1990-2010: a systematic analysis for the Global Burden of Disease Study 2010. Lancet 2012;380:2224–2260.
9. Krewski D, Burnett RT, Goldberg MS, Hoover K, Siemiatycki J, Abrahamowicz M, White WH. Part I: Replication and validation. In: Reanalysis of the Harvard Six Cities Study and the American Cancer Society Study of particulate air pollution and mortality: a special report of the Institute's Particle Epidemiology Reanalysis Project. Cambridge, MA: Health Effects Institute; 2000. pp. 1–295.
10. Pope CA III, Burnett RT, Thun MJ, Calle EE, Krewski D, Ito K, Thurston GD. Lung cancer, cardiopulmonary mortality, and long-term exposure to fine particulate air pollution. JAMA 2002;287:1132–1141.
11. Pope CA III, Thun MJ, Namboodiri MM, Dockery DW, Evans JS, Speizer FE, Heath CW Jr. Particulate air pollution as a predictor of mortality in a prospective study of U.S. adults. Am J Respir Crit Care Med 1995;151:669–674.
12. Puett RC, Hart JE, Schwartz J, Hu FB, Liese AD, Laden F. Are particulate matter exposures associated with risk of type 2 diabetes? Environ Health Perspect 2011;119:384–389.
13. Puett RC, Hart JE, Yanosky JD, Paciorek C, Schwartz J, Suh H, Speizer FE, Laden F. Chronic fine and coarse particulate exposure, mortality, and coronary heart disease in the Nurses' Health Study. Environ Health Perspect 2009;117:1697–1701.
14. Jerrett M, Burnett RT, Pope CA III, Ito K, Thurston G, Krewski D, Shi Y, Calle E, Thun M. Long-term ozone exposure and mortality. N Engl J Med 2009;360:1085–1095.
15. Beckerman BS, Jerrett M, Martin RV, van Donkelaar A, Ross Z, Burnett RT. Application of the deletion/substitution/addition algorithm to selecting land use regression models for interpolating air pollution measurements. Atmos Environ 2013;77:172–177.
16. van Donkelaar A, Martin RV, Brauer M, Kahn R, Levy R, Verduzco C, Villeneuve PJ. Global estimates of ambient fine particulate matter concentrations from satellite-based aerosol optical depth: development and application. Environ Health Perspect 2010;118:847–855.
17. Sinisi SE, van der Laan MJ. Deletion/substitution/addition algorithm in learning with applications in genomics. Stat Appl Genet Mol Biol 2004;3:Article18.
18. Jerrett M, Burnett RT, Pope CA III, Krewski D, Thurston G, Christakos G, Hughes E, Ross Z, Shi Y, Thun M, et al. Spatiotemporal analysis of air pollution and mortality in California based on the American Cancer Society Cohort: final report. Sacramento, CA: California Air Resources Board; 2011.
19. Cosby AG, Neaves TT, Cossman RE, Cossman JS, James WL, Feierabend N, Mirvis DM, Jones CA, Farrigan T. Preliminary evidence for an emerging nonmetropolitan mortality penalty in the United States. Am J Public Health 2008;98:1470–1472.
20. Spengler J, Schwab M, Ryan PB, Colome S, Wilson AL, Billick I, Becker E. Personal exposure to nitrogen dioxide in the Los Angeles Basin. Air Waste 1994;44:39–47.
21. Brunekreef B. Health effects of air pollution observed in cohort studies in Europe. J Expo Sci Environ Epidemiol 2007;17:S61–S65.
22. Cesaroni G, Badaloni C, Gariazzo C, Stafoggia M, Sozzi R, Davoli M, Forastiere F. Long-term exposure to urban air pollution and mortality in a cohort of more than a million adults in Rome. Environ Health Perspect 2013;121:324–331.
23. Hart JE, Garshick E, Dockery DW, Smith TJ, Ryan L, Laden F. Long-term ambient multipollutant exposures and mortality. Am J Respir Crit Care Med 2011;183:73–78.
24. Health Effects Institute Panel on the Health Effects of Traffic-Related Air Pollution. Traffic-related air pollution: a critical review of the literature on emissions, exposure and health effects. Special report 17. Boston, MA: HEI; 2009.
25. Henrotin JB, Zeller M, Lorgis L, Cottin Y, Giroud M, Béjot Y. Evidence of the role of short-term exposure to ozone on ischaemic cerebral and cardiac events: the Dijon Vascular Project (DIVA). Heart 2010;96:1990–1996.
26. Zidek JV, Wong H, Le ND, Burnett R. Causality, measurement error and multicollinearity in epidemiology. Environmetrics 1996;7:441–451.
27. Beckerman B, Jerrett M, Brook JR, Verma DK, Arain MA, Finkelstein MM. Correlation of nitrogen dioxide with other traffic pollutants near a major expressway. Atmos Environ 2008;42:275–290.
28. McConnell R, Berhane K, Yao L, Lurmann FW, Avol E, Peters JM. Predicting residential ozone deficits from nearby traffic. Sci Total Environ 2006;363:166–174.
29. Woodcock J, Edwards P, Tonne C, Armstrong BG, Ashiru O, Banister D, Beevers S, Chalabi Z, Chowdhury Z, Cohen A, et al. Public health benefits of strategies to reduce greenhouse-gas emissions: urban land transport. Lancet 2009;374:1930–1943.
30. Rojas-Rueda D, de Nazelle A, Tainio M, Nieuwenhuijsen MJ. The health risks and benefits of cycling in urban environments compared with car use: health impact assessment study. BMJ 2011;343:d4521.
31. California Department of Public Health, California Tobacco Control Program. Smoking prevalence among California adults, 1984-2010 [prepared 2011 Apr; accessed 2012 Sep 11]. Available from: http://www.cdph.ca.gov/Pages/NR11-031SmokingChart.aspx
32. de Nazelle A, Nieuwenhuijsen MJ, Antó JM, Brauer M, Briggs D, Braun-Fahrlander C, Cavill N, Cooper AR, Desqueyroux H, Fruin S, et al. Improving health through policies that promote active travel: a review of evidence to support integrated health impact assessment. Environ Int 2011;37:766–777.

This work was supported in part by a contract with the California Air Resources Board. Additional funding came from the Environmental Public Health Tracking Program of the Centers for Disease Control and Prevention. G.T. was also supported in part by NYU-NIEHS Center of Excellence Grant ES00260.

Author Contributions: M.J. conceived the study, led all analyses, contributed to the development of the exposure models, drafted much of the text, and responded to comments from co-author reviewers. B.S.B. ran many of the statistical models that led to the exposure assessments, conducted geographic analyses, contributed text, and assisted with interpreting the results.
R.T.B. supplied expert statistical advice on the analyses, drafted sections of the paper, and assisted with the interpretation of the results. E.H. developed the statistical programs used to interpret the random effects models, helped to interpret the results, and supplied key statistical advice on the interpretation. D.K. contributed to the original grant proposal, assisted with interpretation of the results, and wrote sections of the paper. C.A.P. contributed to the statistical analyses, wrote sections of the text, and assisted with interpreting the results. S.M.G. is the Principal Investigator of the ACS CPS-II cohort and commented on the final draft of the paper; she also oversaw the geocoding process for exposure assignment. M.J.T. assisted with interpretation of the statistical models and supplied expert medical epidemiological advice on the results. G.T. assisted with the conception of the study, supplied key information on interpreting the pollution models, and commented on several drafts of the paper; these comments changed the interpretation of the results. M.C.T. contributed text and tables, helped to assemble supporting data, assisted with the statistical modeling, interpreted the results, and served as a liaison with the American Cancer Society for code review and data access. R.V.M. and A.v.D. contributed the remote sensing models used to derive estimates of PM2.5, supplied text, edited versions of the paper, and gave advice on atmospheric chemistry issues. Y.S. ran the statistical models, managed the data, prepared code for review by the American Cancer Society, prepared all of the tables and associated text, and assisted with the interpretation of the results.

This article has an online supplement, which is accessible from this issue's table of contents at www.atsjournals.org.

Originally published in press as DOI: 10.1164/rccm.201303-0609OC on June 27, 2013.

Author disclosures are available with the text of this article at www.atsjournals.org.
https://www.atsjournals.org/doi/10.1164/rccm.201303-0609OC
The FCA has published information for firms on COVID-19. Communication with the FCA will be key as the situation evolves, and we recommend that firms regularly monitor the FCA's website for news and developments. Firms are expected to:
- take reasonable steps to ensure they are prepared to meet the challenges coronavirus could pose to customers and staff, particularly through their business continuity plans;
- be clear and transparent, and provide strong support and service to customers during this period (being flexible to meet retail customers' needs in unusual times is a core theme); and
- manage their financial resilience and actively manage their liquidity, and report to the FCA immediately if they believe they will be in difficulty.
The FCA is taking this opportunity to provide some high-level guidance and to remind firms of their obligations as the consequences of this pandemic unfold before us. For example, it reminds firms to report their concerns to the FCA, in addition to the existing reporting obligations on regulated firms. The COVID-19 situation is unprecedented and has already had a significant impact on the financial system globally. It is encouraging that the FCA appears to be taking steps to assist firms, and itself, to prepare for any future uncertainty arising from this situation. The information published includes guidance on the following key areas:
- Regulatory change – The FCA is reviewing its own work plan so that it can delay or postpone activity which is not critical to protecting consumers and market integrity in the short term. Immediate actions include: extending the closing date for responses to open consultation papers and Calls for Input until 1 October 2020; rescheduling most other planned work; and scaling back the programme of routine business interactions. The FCA does not elaborate on other areas of impact, so we will have to wait and see whether this includes, for example, enforcement investigations, day-to-day processing of authorisations or change in control approvals, and the issuing of market studies.
- Impact on consumers – The FCA welcomes the flexibility some firms have introduced to support customers. Firms should notify the FCA when going beyond usual practices to support their customers so the FCA can consider the impacts and offer support as appropriate. The FCA also reminds firms of their obligations to deal with customer complaints promptly.
- Mortgages – The FCA is encouraged by the actions of some lenders in granting flexibility on mortgage repayments to protect customers, and will be discussing with the industry, and updating in the coming days, the approaches which mortgage providers may take to assist customers.
- Unsecured debt products – Firms are encouraged to show greater flexibility to customers in persistent credit card debt. In light of the challenges customers are currently facing, until 1 October 2020 these customers should be given longer to respond to communications from their providers, which means their card will not automatically be suspended if escalation measures are offered by their provider (and not responded to) after 36 months of persistent debt.
- Access to cash – Firms should ensure vulnerable customers are protected when accessing their banking services online or over the phone, particularly for the first time, and should remind customers to be aware of fraud and to protect their personal data.
- Insurance products – The FCA supports firms offering travel insurance in making consumers aware of the scope of their cover and any exclusions which may apply. This information should be made available online in a clear and concise way, and consumers should have access to call centres. For health insurance, the FCA expects firms to make clear any time-period restrictions when consumers take out a new policy.
- Operational resilience – The FCA expects all firms to have contingency plans in place to deal with major events, and to have tested those plans. Firms should consider whether their contingency plans are appropriate to the conditions currently unfolding and ensure that the plans have been tested appropriately. Firms should also take all reasonable steps to meet the regulatory obligations which are in place to protect their consumers and maintain market integrity. For example, if a firm has to close a call centre, requiring staff to work from other locations (including their homes), the firm should establish appropriate systems and controls to ensure it maintains adequate records.
- Market trading and reporting – As firms move to alternative sites and working-from-home arrangements, the FCA wants them to consider the broader control environment in these new circumstances. Three particular areas are highlighted:
- Call recording: Firms should make the FCA aware if they are not able to meet call recording requirements, and should take mitigating steps (eg enhanced monitoring, or retrospective review).
- Submission of regulatory data: If firms experience difficulties with submitting their regulatory data, the FCA expects them to maintain appropriate records during this period and submit the data as soon as possible. Where firms have concerns, they should contact the FCA as soon as possible.
- Market abuse: Firms should also continue to take all steps to prevent market abuse risks (including enhanced monitoring or retrospective reviews). The FCA will continue to monitor for market abuse and, if necessary, take action.
Other considerations: Short selling
On 17 March 2020, the FCA also temporarily prohibited short selling of 129 financial instruments under Articles 23(1) and 26(4) of the Short Selling Regulation (SSR), following a decision made by another EU national competent authority (NCA). This prohibition lasted until the end of yesterday's trading day and followed a similar prohibition which took effect during the trading day of 13 March 2020. The FCA has also confirmed that it will lower the thresholds for the notification of short selling positions under the SSR. This follows the decision of the European Securities and Markets Authority (ESMA) on 16 March 2020 to temporarily require the holders of net short positions in shares traded on an EU regulated market to notify the relevant NCA if the position reaches or exceeds 0.1% of the issued share capital. The amendment will require changes to the FCA's technology, so firms should continue to report according to the previous thresholds until further notice.
Senior managers / conduct
In light of the unprecedented nature of the current situation, the senior management of firms may find themselves having to make immediate and difficult decisions. Senior managers will therefore want to pay close attention to being able to show that "reasonable steps" were taken, and to ensuring that appropriate records are maintained which document decisions and their rationale.
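For illustration, the temporary ESMA notification test described in the short-selling section above reduces to a simple ratio check. The sketch below is illustrative only, not FCA guidance, and all figures in it are hypothetical.

```python
# Hypothetical illustration of the temporary net short position notification
# threshold under the SSR: notifiable when the position reaches or exceeds
# 0.1% of the issuer's issued share capital.
NOTIFICATION_THRESHOLD = 0.001  # 0.1%

def is_notifiable(net_short_shares: int, issued_share_capital: int) -> bool:
    """True if the net short position reaches or exceeds the 0.1% threshold."""
    return net_short_shares / issued_share_capital >= NOTIFICATION_THRESHOLD

# A 1.2m-share net short position against 1bn issued shares is 0.12%, so it
# would need to be notified to the relevant national competent authority.
print(is_notifiable(1_200_000, 1_000_000_000))  # True
```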
Legal Notice The contents of this publication are for reference purposes only and may not be current as at the date of accessing this publication. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action based on this publication.
https://www.herbertsmithfreehills.com/latest-thinking/covid-19-pressure-points-fca-publishes-information-for-firms-uk
Physicists have discovered an exotic form of spiralling electron pair – which spins like a planet – with profound implications for advances in lighting, lasers, solar cells and quantum computing. Physicists from Rutgers and other institutions have discovered an exotic form of electron pairs which spiral like planets. The electron pairs – known as 'chiral surface excitons' – consist of particles and antiparticles bound together and swirling around each other on the surface of solids, according to a study in the Proceedings of the National Academy of Sciences. As lead author Hsiang-Hsi (Sean) Kung, a graduate student in Physics Professor Girsh Blumberg's Rutgers Laser Spectroscopy Lab at Rutgers University-New Brunswick, explains: "Excitons are created by shining intense and energetic light on solids, which kicks the negatively charged electrons out of their equilibrium position, leaving behind positively charged 'holes'. The electrons and holes are like fast-spinning tops that naturally bind with each other through the attractive electric force called the Coulomb interaction." These electrons eventually "spiral" towards the holes, and the pairs annihilate each other while emitting a kind of light called photoluminescence, all within a trillionth of a second. By studying this light, physicists can understand the properties of electrons and the "holes" they leave in the surface of a conductor. This phenomenon has applications for devices such as solar cells, lasers, and TVs and other displays. Hsiang-Hsi (Sean) Kung continues: "Excitons are old and well-known objects in semiconductors, studied since the late 1950s; their applications are in our everyday lives, ranging from solar cells to lasers. However, excitons in semimetals and metals are not yet well understood." The scientists discovered chiral – meaning left- or right-'handed' – excitons on the surface of a crystal known as bismuth selenide, which could be mass-produced and used in coatings and other materials in electronics at room temperature. Senior author, Physics Professor Girsh Blumberg, explains: "Bismuth selenide is a fascinating compound that belongs to a family of quantum materials called 'topological insulators'. They have several channels on the surface that are highly efficient in conducting electricity." Bismuth selenide has the potential to be an important material in the development of quantum computing.
https://sciscomedia.co.uk/spiralling-electron/
When Jason Kenney’s United Conservative Party (UCP) was elected to a majority government in Alberta in 2019, everyone with a stake in public education had reason to be concerned about the party’s plans for K-12 education. This was a party whose election platform included a promise to scrap a curriculum that they branded as “NDP social engineering” (despite its being initiated by a previous Conservative government), and another promise to expand choice in education, a phrasing that is usually associated with a move toward privatization. Support Our Students Alberta is a non-partisan, non-profit public education advocacy group fighting for the rights of all children to an equitable and accessible public education system, and we are concerned about the increasing privatization of public education in Alberta. Alberta has perhaps the widest array of “school choice” in Canada, with options that include not only the public, Catholic, and Francophone boards that are the historical basis of the province’s public education system, but also private schools, homeschooling, and charter schools (an American invention that has been adopted in Alberta, but not yet elsewhere in Canada). Many of these options receive partial or full public funding. The UCP government has further expanded these options. The degree to which some of these programs are public vs. private is hotly debated in Alberta. When most people hear “privatization,” they probably imagine some sort of user-pay, for-profit business. In education, this may look like an exclusive private school, and Alberta certainly has those. However, privatization encompasses more than these elite schools. The Encyclopedia of Educational Theory and Philosophy notes that privatization is not simply about for-profit service delivery, but is also about shifting funding and/or governance of public services to private control, including “public subsidies either to private providers or to service users; shifting costs to users; or restructuring policies so that users of a public good or service are instead treated as market-style consumers under the logic that the good or the service primarily generates individual, private benefits.” 1 In other words, privatizers view education not as a public good that prepares future citizens to participate in their communities and society, but rather as a consumer product for individuals who have a right to access public funds to achieve their particular goals. But it is not only funding, but also governance that shifts when schools are privatized. While public education is governed by publicly elected boards, privatization often shifts governance and oversight to private entities, or in the case of homeschooling, to individual families. Public education advocates argue that this shift is intended to undermine public systems through defunding; weakening teachers’ unions and public boards; and presenting less accountable bodies as equally legitimate, or even as more desirable, all while eroding the idea of public education as a social good. In the words of education scholar Diane Ravitch, “Abandoning public schools for a free-market system eviscerates our basic obligation to support them whether our own children are in public schools, private schools or religious schools, and even if we have no children at all.” 2 In Alberta, we see evidence of all these trends, along with a concerted effort to blur the lines between public and private education. 
The most obvious exemplars of privatized education in Alberta are traditional private schools. Private schools may or may not receive government funding, depending on whether they are willing to use certificated teachers and teach the provincial programs of study (curriculum). If a private school chooses to be accredited and funded, it receives 70 percent of the per-student funding that a public school receives. While similar arrangements exist in other provinces, Alberta's level of funding for private schools is the highest in Canada. Many provinces, including Ontario, do not fund private schools at all. On top of this public subsidy, tuition fees can range from a few thousand dollars per year to over $20,000 for elite private schools. Homeschooling is another private arrangement that receives public funding. Alberta has by far the highest number of homeschooled children in Canada. According to Statistics Canada, 14,730 Alberta children were homeschooled in 2019/20.3 The province with the next highest number is Ontario, which despite its much larger population had only 6,564 children being homeschooled. Funding is available for students doing supervised homeschooling (under the supervision of a school authority or private school). The per-student subsidy is $1,700, divided evenly between the child's family and the supervising school authority. As Curtis Riep notes in his analysis of homeschooling in Alberta (pp. 17-18), "For some scholars, homeschooling represents an extreme form of a broader shift toward educational privatization since it represents a retreat from the public sphere, students lack exposure to cultural and ethical diversity, and learning is predicated on individualistic needs and wants rather than collective action, social responsibility, and democratic citizenship."4 Charter schools are an American invention that Alberta adopted in the 1990s; they do not exist elsewhere in Canada. Charter school advocates insist that these schools are public, since they are publicly funded, but it is evident in both the USA and Canada that charter schools are on the privatization continuum, especially if our definition of privatization includes the shift of governance away from public entities, public subsidies to private providers, and the treatment of public services as a marketplace that caters to individual choices. When charter schools were introduced in the United States in the early 1990s, the marketing pitch was that they would function as test beds for innovative educational reforms that could, if successful, be adopted by the larger public system, an idea that is also expressed in Section 13 of Alberta's Charter Schools Regulation. In practice, however, they have become publicly funded, privately operated schools: essentially private schools within the public system. Alberta charter schools, like public schools, receive 100 percent per-student funding. However, they do not fall under the governance of school boards; their own boards are not publicly elected, and they are accountable directly to the Education Minister. They are not required to hire unionized teachers. The charter documents that encompass their terms of reference are not publicly available.
Charter schools may be established by an individual or group wishing to provide a specialized approach or focus that is purportedly not already offered by other local public boards. They must follow the Alberta curriculum, and they are not allowed to charge tuition fees or deny access to students "if sufficient space and resources are available,"5 according to the Charter Schools Handbook. There is an unavoidable tension, however, between this requirement and the understanding articulated in the Handbook that "charter schools specialize in a particular educational service or approach in order to address a particular group of students." For example, a charter school whose focus is gifted education, academic rigour, STEM, the arts, or "character development" is going to cater to particular groups of people. Anecdotally, there are accounts of individual families being advised that a particular charter school may not be "the right fit" for their child. There are also very real barriers to participation, which may include geographic location, onerous application processes (sometimes involving academic assessments for a fee), and expectations placed on students and families. Many charters also do not offer the accommodations and supports for special needs that may be found in the public system, nor are they legislatively required to do so. American education researcher Kevin Welner has identified a dozen ways in which charter schools shape enrollment.6 While some of these practices do not apply in Alberta's context, others are arguably present, including marketing to a particular niche, language around "fit," placement assessments, and expectations around parental involvement. The proliferation of charter schools has also resulted in public boards feeling pressure to compete for students, and the funding that comes with them, by providing alternative offerings of their own. These alternatives may range from multilingual programming to programs such as Montessori, single-gender schools, or faith-based schooling. Some school boards have even allowed formerly private schools to convert to alternative programs. If a private school chooses to do this, it will ostensibly fall under the governance of the board it joins; however, these schools appear to retain a great deal of autonomy, including the ability to charge fees beyond what would be allowed at a truly public school. One such example is Master's Academy, a Christian school in Calgary. Established in 1997 as a private school, Master's joined the Palliser School Division as an alternative public school in 2008. Palliser is a rural school division that originally served a swath of municipalities in Southern Alberta; however, it has added several faith-based Calgary schools to its list of alternative programs. Under Alberta's Education Act (section 13.1), such schools may not charge tuition to students who reside within the board's jurisdiction. However, section 19.5 of the Act states that the board (not the individual school) may charge fees to cover non-instructional costs related to the alternative program.7 Master's Academy's website lists a "Palliser school fee" of $35 per child.8 This is the fee that goes to the board. However, that is a drop in the bucket compared to what families pay directly to the Master's Academy Educational Society. All families with children attending Master's must purchase a "family bond" that starts at $7,000 for one child and is refundable, minus any interest earned, when the child leaves the school.
But the real big-ticket cost is the annual "society fee" to the Master's Academy Educational Society of over $7,000 per year for full-time students in Grades 1-12. It appears that structuring payments as a fee that goes to a school society is sufficient to evade the Education Act's prohibition on public boards' alternative programs charging tuition. A review of some of the other alternative programs operating under public boards in Alberta indicates that while many do not engage in this practice, Master's is not the only alternative school to charge this type of fee through a society. Schools charging thousands of dollars in fees are essentially private schools functioning under the umbrella of a public board, collecting 100 percent of the per-student allocation of public dollars rather than the 70 percent they would be eligible for if they operated as accredited funded private schools. While Alberta arguably has the most extensive school choice and the highest level of privatization of any K-12 system in Canada, the UCP government is committed to expanding privatization further. In 2020, the UCP passed the Choice in Education Act, which removed the requirement for charter school groups to give public boards the first option to offer a program as an alternative public program, provided for the establishment of "vocation-focused" charter schools focusing on trades and technologies, and removed the requirement that a school authority supervise home education programs. A provincial election in 2023 may bring a change of government, but the NDP opposition have thus far been circumspect about where they stand on charter schools and privatization in education, and they did not implement policies to curb or reverse privatization when they were in government. Historically, Alberta has had the wealth to support all options in this patchwork system to a certain standard. With shrinking budgets and a growing student population, this level of continued support seems unlikely, and hard choices will have to be made. Support Our Students works to raise public awareness of how many resources are being actively directed away from the public system that serves all children in order to support the preferences of the minority of families that choose to opt out of public education.
Heather Ganshorn is Research Director at Support Our Students Alberta.
References
1. Lubienski C. Privatization. Encyclopedia of Educational Theory and Philosophy 2014:649–51.
2. Ravitch D. The charter school mistake. Los Angeles Times 2013. https://www.latimes.com/opinion/op-ed/la-oe-ravitch-charters-school-reform-20131001-story.html (accessed April 13, 2022).
3. Statistics Canada. Number of home-schooled students in regular programs for youth, elementary and secondary education, by grade and sex 2021. https://www150.statcan.gc.ca/t1/tbl1/en/tv.action?pid=3710017801 (accessed April 14, 2022).
4. Riep C. Homeschooling in Alberta: The Choices, Contexts, and Consequences of a Developing System 2021. https://public-schools.ab.ca/wp-content/uploads/2021/10/Homeschooling-in-Alberta-The-Choices-Contexts-and-Consequences-of-a-Developing-System.pdf (accessed April 13, 2022).
5. Charter Schools Regulation. Government of Alberta 2020. https://www.qp.alberta.ca/570.cfm?frm_isbn=9780779818648&search_by=link (accessed April 13, 2022).
6. Welner KG. The Dirty Dozen: How Charter Schools Influence Student Enrollment. National Education Policy Center 2013. https://nepc.colorado.edu/publication/TCR-Dirty-Dozen (accessed April 14, 2022).
7. Government of Alberta. Education Act 2019. (accessed April 14, 2022).
8. Admissions – Master's Academy & College 2020. https://masters.ab.ca/admissions/ (accessed April 15, 2022).
https://education-forum.ca/2022/11/08/undermining-public-education/
The narrator in "The Axe" by Penelope Fitzgerald is the Manager of an unsuccessful company. This unnamed protagonist addresses a letter to his elitist, stereotypically self-important boss. The Manager explains the outcome of the redundancies he was assigned to carry out, reporting in full detail the termination of his clerical assistant, W.S. Singlebury, an older gentleman whose work is "his life" (Fitzgerald 667). In his letter, the Manager repeatedly makes reference to a pungent smell in the office, about which many staff members complain. The smell of the building is brought up at crucial points in the narrative, and thus the sickening scent, combined with dampness, becomes a strong motif throughout the story. What is the smell that permeates the office building? This essay will argue that the smell in the office is a physical manifestation of the attitudes and emotions of its inhabitants. The reactions of different characters to the smell in the office building will be examined from cognitive and anthropological viewpoints. For the purpose of this essay, "cognitive" will refer to the emotional associations that the characters make with the physical smell and the function of memory in its relation to smell. From the "anthropological" aspect, this essay will focus on the cultural representations of scent appraisal within the narrative. The reader learns about many of the static characters through their reported reactions to the smell: their persistent complaints are contrasted with Singlebury's alleged understanding of its origin. In this way, the smell in the office building acts as a foil for the Manager, Singlebury, and their colleagues. The smell lurks antagonistically throughout the story, growing stronger, highlighting crucial details, and culminating, together with the dampness, in the shocking dénouement. The smell in the office is first revealed as the Manager recalls an unsolicited visit from a damp-eliminating firm that he never hired. The mysterious appearance and disappearance of these workers is mentioned right after the Manager describes the appearance and demeanor of the assistant he fires, W.S. Singlebury. The details, such as his simple style of dress and diligent work ethic, make Singlebury seem like an undeserving victim of coercion and job loss. The "odd connection" between the unjust act that the Manager committed and the Manager's irritation with the dampness and smell seems to indicate a subconscious sense of guilt (Fitzgerald 677). As the letter continues, the Manager grows paranoid, believing with near certainty that Singlebury will return "out of habit" after he is let go (678). He becomes even more concerned when he does not see or hear from Singlebury. The disturbing scent is then wafted again to the forefront of the story, as Patel is said to resign due to "the damp and the smell," which was "affecting his health" (678). While the physical smell and dampness are tangible enough to affect other workers in the office,...
https://brightkite.com/essay-on/the-smell-of-frustration-a-study-associated-with-the-mind-the-body-and-the-building
How can I make sure my hard work pays off? How should I integrate new technologies into my study habits? How can I study strategically and avoid going off at a tangent? Are you motivated to succeed at college but unsure how to reach your full potential? This book will help unlock the secrets of getting a good degree and all the benefits that can come from it.
Download e-book for Kindle: Student Success Secrets by Eric Jensen
Updated so that it speaks to today's students, this long-time Barron's bestseller offers strategies to improve test scores, a plan to help students develop regular and effective study habits, tips on retaining information from reading, and techniques for taking useful classroom notes. Students will also find advice on getting the most out of library and computer research facilities.
Get Longman Focus on Grammar Workbook 2 (Basic) PDF
BOOKS; SCIENCE and STUDY. Author: Jay Maurer. Title: Longman Focus on Grammar Workbook 2 (Basic). Publisher: Longman Pearson Education. Year: 2000. Format: .djvu. Size: 5.02 MB. Language: English. The second part of the five-level comprehensive English grammar course from Longman. Clear, communicative, and teachable, "Focus on Grammar" provides enough context, practice, and interaction to make any classroom come alive.
- Schaum's Outline of English Grammar
- American Literature (Barron's EZ-101 Study Keys)
- 501 Challenging Logic & Reasoning Problems: Fast, Focused Practice for Standardized Tests
- Word Skills (LearningExpress Skill Builders Practice)
- Creating a Vision for Your School: Moving from Purpose to Practice (Lucky Duck Books)
- Smart Thinking: Skills for Critical Understanding and Writing
Additional resources for 501 Quantitative Comparison Questions
Example text:
b. Compare x and z in terms of m: x = (2/5)m; z can be rewritten in terms of m by substituting (5/3)m for y, giving z = (9/10)(5/3)m = (3/2)m. Since 2/5 < 3/2, quantity B is greater.
113. a. .75 and (√3)² = 3.
114. b. h is negative, so 5 times a negative is a negative; quantity A is negative. A negative multiplied by itself 4 times is a positive; quantity B is positive. Any positive number is greater than any negative number.
115. c. Simplify the equation by distributing the negative.
100. b. Follow the order of operations. Quantity A is 2 + 8² − 6 − 10 = 2 + 64 − 6 − 10 = 50. Quantity B is (2 + 8)² − 6 − 10 = 10² − 6 − 10 = 100 − 6 − 10 = 84.
101. a. Square both quantities to get rid of the square roots: (7√x)² = 49x and (√(3x))² = 3x. Since x is positive, 49x > 3x, so quantity A is greater than quantity B.
102. d. The relationship cannot be determined. When x is positive, quantity B is greater; when x is negative, quantity A is greater.
103. c. (1/3)x + (1/3)x + (1/3)x = 1x; therefore, x = 9. Any positive number is greater than any negative number.
118. c. Use the rules of exponents to simplify the quantities: (m³)⁶ = m^(3×6) = m¹⁸ and √(m³⁶) = m¹⁸. Both quantities are equivalent to m¹⁸.
119. c. Divide both terms in the numerator of quantity A by 5. This yields x − 7, which is equivalent to quantity B.
120. c. The only prime divisible by 7 is 7. It is the same for 11: the only prime divisible by 11 is 11. If any other number were divisible by 7 or 11, it would not be prime.
121. a. 0989.
122. c. When multiplying by 10⁵, move the decimal point five places to the right to get 425,000.
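A couple of the worked answers above can be checked mechanically; the short sketch below verifies questions 100 and 101 with plain Python arithmetic (the value chosen for x is arbitrary).

```python
# Question 100: order of operations distinguishes 2 + 8^2 from (2 + 8)^2.
quantity_a = 2 + 8**2 - 6 - 10    # = 50
quantity_b = (2 + 8)**2 - 6 - 10  # = 84
assert (quantity_a, quantity_b) == (50, 84)  # quantity B is greater, as stated

# Question 101: squaring removes the square roots, so compare 49x with 3x.
x = 4.0  # any positive value of x
assert (7 * x**0.5)**2 > ((3 * x)**0.5)**2  # 49x > 3x, so quantity A is greater
```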
https://www.openemis.org/pdf/501-quantitative-comparison-questions
SAS Programming 1 Essentials Course History
Assisting students in their first few months of study, across grades 1-9, is a very challenging field. In some cases, students can hardly recall the steps for beginning to use the website or the web page; in other cases, they make only a small mistake. For every mistake they repeat over and over, three steps are taken, and many people feel their effort going down the proverbial drain. Students are generally required to remember only a few of the steps in the first few days, and the first few steps are clear. In our session, we assume that learners who have spent at least 30 minutes on the school website and have tried the page will most likely remember that learning is very important in the first few months. The last step should then be taken: the learning goal is to learn by doing, and to achieve something that gets you out of this uncomfortable situation. Leverage the skills of your first four years. Use them in any type of technical and analytical work, even when a student's prior skills may be lacking, and use them as the core of a learning program. The goal is to write a course on development in common areas of learning. The first four years of your career are designed to meet that critical learning objective; this is the second step. The third and fourth steps are basic but difficult.
Different Learning Systems. The second most important step is what is really called the learning system. Learning systems are systems aimed at gaining new and better skills using various learning tools. The final step is to help students learn by doing. If you think other projects involving instruction in a similar style are difficult or daunting, then come along and teach them yourself.
Is SAS a Language or Software?
Examples of this work can usually be found below. B.3 Reading and Writing Outline: research related to the study of science and technology (in other words, you can work with a course structure that actually makes sense to you) and how others learn the course patterns. This includes self-study techniques that can give a rich class experience, and work on your own challenges. A valuable activity that helped me complete my class at Yale University is the study of learning in this manner: reaching the goal. Take a moment to think about what this example of your own time (being a successful tech teacher) means. Be sure you are aware of what this particular session refers to, and prepare to make the most of your time. The learning model is simple: you are learning through reading. You are working on your personal development, finding ways to sit down and express your design goals, and working toward that goal. The idea is to imagine the kind of experience that self-study might reveal. The study of a text can come off as "little more than nothing." Much of the time it is unclear how we get together to write out a paper for students about problems we have encountered. The students may already know that we understand the model, but their intentions should sound clear. For young people studying science and technology, here are two examples of these two kinds of learning systems. A. Reviewing a practical text. If you check all of the text produced by our computer software or service, you should be able to locate it.
This review provides an effective way to get a feel for what the first few weeks, and even the course objectives, are.
SAS Programming 1 Essentials Course Description
As a research assistant, you have an idea of how our research methodology works, and we make it a priority to involve you in your dissertation. The studies in this essay are a significant advance in a high-tech science for a writer who is concerned with its impact on the world. It is a novel direction for a thesis, and it may take a few years to create one. We can come to a conclusion on the implications of the current paradigm. How many years does it take to write a self-document, and how many times do you copy from it to a laptop for evaluation purposes? This paper helps us to answer these questions before we write our thesis.
SAS Programming Language Basics
About My Work. Over the past 10 years, I have worked in the domain of computer science and have followed the work of others. In the same way that I make the case for science essays to become self-published, I have become known for my work in the dissertation class and for my books on computers. While I have created several programs and books on biology, I have done so unedited. At this stage, I am not a good academic writer; that being said, I have done quite a lot of work in this field. In this course, we will learn how to move from being a researcher to being an expert editor. We will study real-world data, how to carry out models of experiments, and the applications of models to the domain of computer science. Our work develops the concept of "computer science theory" and uses it to understand why science is a useful endeavor and which directions in the thesis should form the basis for a doctoral dissertation. We will build a project database to offer the thesis for review, and we will also write the thesis as needed in the course. In this course, we aim to explore the way in which students represent and use the theory of science and why, and to work from this understanding of the concepts and methods. The course concludes with a presentation on the doctoral theory of science and its applications.
Dissertation Method: Main Procedure Before Essentials
In this course we will review some of the book research under the name "Doctor of Chemistry," which is about chemistry essays by Dr. Robert Lee. "Doctor of Chemistry" has been my department's chemistry research since June 2013, and I have worked toward two PhDs as part of my thesis research in the program. The most recent chapter in "Doctor of Chemistry" is titled "Dissertation" (1957). Many of our "programs" are derived from the work of Dr. Lee. I am currently an M.Phil; I was given a BS in Physics and Chemistry in the same year, and most of the major thesis papers I submitted were in this category. Upon completion of this course I will continue as an M.Phil with a BA in Physics.
SAS Programming Language vs Python
There is plenty of material to talk about in physics and chemistry, some of which I have already compiled. In this post we will discuss how physics and chemistry will be approached in the next 60 pages. This book will develop a theoretical understanding of biological processes, and it will be interesting as it demonstrates the scientific method.
As I have already written, it is important to understand biology and the scientific method.
SAS Programming 1 Essentials Course 3 Working Paper, 5 School of Computer Science-3 Essentials Work Paper, 2.5
Overview: The task can be a lot of things. For the teacher, before she starts, most of what you will learn is that you should be able to feel confident in your work. This is an important one, because nothing will produce much confidence until you learn how to do things correctly. Using this thesis overview, I would like to mention a few things, and you can train yourself to learn more effectively by using these two key words. After getting into some fundamentals, I want to go through this more formally. First, I will state a few essential steps: apply the concepts by which we have developed these ideas. This step is covered specifically for practice and demonstration purposes. You will then notice that there are several good ways to express ideas using my words. Most of the concepts are very similar; those using the same key words are mentioned in the examples. Note: it is always better to use the term "learnable," although it would be unrealistic to think that such a person can stay on this board forever. So, how will you need all these foundational concepts as a business leader, and how can you build such a set of concepts around them?
1.1 Summary/Results: Working this way is rewarding, but when doing the above steps you may be left with no way to increase your experience, even though your motivation may still be a little high. In case you are still skeptical of the work, use one very complex and somewhat familiar word: Prostitu. As you work on this page, you will be able to see that this phrase has many similarities, and some instances also have less common meanings.
https://confidencelevel.sas-assignments.com/
Merapi volcano in Central Java is an archetype of persistently degassing and erupting andesitic arc volcanoes with extruding lava domes. Over the past three decades, the Merapi Volcano Observatory (BPPTKG, Yogyakarta) sampled and analyzed its hot gas emissions (500 to 900°C) and surveyed its SO2 plume flux. In November 2010, however, a centennial paroxysmal eruption (Volcanic Explosivity Index [VEI] = 4) interrupted standard dome-building activity, blasting out the volcano summit and the fumarolic fields previously accessible to gas survey. Although one million people were successfully evacuated, the 17-km-long pyroclastic flows claimed approximately 400 victims. Since then no new magma has extruded, but from 2012 to 2014 the volcano produced several discrete explosions (with no warning signal at all), with columns reaching up to eight kilometres in height and ballistic blocks thrown as far as one kilometre. As no more gas surveying has been possible at Merapi since October 2010, it is impossible to know whether these explosions had precursory geochemical signals. In August and September 2014, a team of volcanologists from IPGP (Institut de Physique du Globe de Paris, France; P. Allard, PI), Palermo University (Italy; A. Aiuppa and R. Di Napoli), ISTO (Institut des Sciences de la Terre, Orléans, France; Y. Moussalam), and IRD (Bandung, Indonesia; P. Bani), collaborating with BPPTKG, performed the first measurements of Merapi volcanic gases since the 2010 eruption. A mini-grant from DCO's DECADE (Deep Carbon Degassing) initiative and funding from the French ANR project "DOMERAPI" (Dynamics of an arc volcano with extruding lava domes, Merapi (Indonesia): from the magma reservoir to eruptive processes) supported this work. The researchers measured the overall chemical composition and mass flux of current post-paroxysmal degassing by combining OP-FTIR and UV remote sensing (dual UV cameras and scanning DOAS) with in situ MultiGAS analysis of the volcanic plume. DECADE scientists focused on quantifying magma-derived CO2 emissions from Merapi in its post-paroxysmal stage: observations of particular importance to understanding possible precursory geochemical signals of eruptions. Preliminary results reveal an extremely low magma degassing rate in the current stage of activity compared with previous periods. Low vent temperatures (<200°C) made OP-FTIR remote sensing from the crater rim impossible, but MultiGAS analysis of the air-diluted volcanic plume allowed the team to determine the molar composition of current gas emissions from the 2010 lava dome: 97.6 to 98.1% H2O, 1.5 to 2% CO2, 0.28 to 0.32% SO2, and 0.014 to 0.018% H2S. The SO2 plume flux, measured at close distance with dual UV cameras, averaged about 80±10 tons/day, which, combined with the gas composition, implies current mass fluxes of about 7,000 t/d H2O and 300 t/d CO2. When compared with data for the time-averaged composition and flux of pre-2010 gas emissions, these values demonstrate a strong water enrichment (or hydrothermal dilution) and low emission rate of present-day Merapi volcanic gases, consistent with low levels of seismicity and progressive cooling of the lava dome (as seen in thermal infrared imaging). Taken together, these observations indicate no shallow magma refilling to date.
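The flux arithmetic implied above (scaling the measured SO2 flux by molar ratios and molecular weights) can be sketched in a few lines. This is a minimal illustration using the mid-range values quoted in the text, not the team's processing code.

```python
# Convert a measured SO2 mass flux into CO2 and H2O mass fluxes using the
# MultiGAS molar composition. Molecular weights are in g/mol; molar fractions
# are mid-range values from the composition reported above.
MW = {"H2O": 18.0, "CO2": 44.0, "SO2": 64.0}
molar_fraction = {"H2O": 0.9785, "CO2": 0.0175, "SO2": 0.0030}
so2_flux_t_per_day = 80.0  # dual-UV-camera SO2 flux, tons/day

def mass_flux(species: str) -> float:
    """Scale the SO2 mass flux by the molar ratio and molecular-weight ratio."""
    molar_ratio = molar_fraction[species] / molar_fraction["SO2"]
    return so2_flux_t_per_day * molar_ratio * MW[species] / MW["SO2"]

print(f"CO2 flux ~ {mass_flux('CO2'):.0f} t/d")  # ~320 t/d, consistent with ~300
print(f"H2O flux ~ {mass_flux('H2O'):.0f} t/d")  # ~7300 t/d, consistent with ~7,000
```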
To monitor the future reawakening of the volcano and to detect possible precursory gas signals of the discrete explosions, our Indonesian colleagues at BPPTKG propose to establish an automated survey of volcanic gas emissions, using both MultiGAS and a UV camera (with radio transmission of data). Thanks to the DECADE project and the team's experience with Merapi, such a step could be extremely promising for both research (tracking CO2 flux in relation to changes in eruptive activity) and risk assessment at one of the most active and hazardous volcanoes in the world. Report and images provided by Patrick Allard, IPGP.
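As a sketch of how such an automated MultiGAS survey is commonly reduced to a monitored quantity, the snippet below estimates the CO2/SO2 molar ratio as the slope of a linear fit of background-corrected CO2 against SO2. The file name, column names, and thresholds are hypothetical, and this is not BPPTKG's pipeline.

```python
# Estimate the plume CO2/SO2 molar ratio from a MultiGAS time series.
import numpy as np
import pandas as pd

ts = pd.read_csv("multigas_timeseries.csv")  # assumed columns: co2_ppm, so2_ppm
in_plume = ts[ts["so2_ppm"] > 0.5]           # keep clearly in-plume samples
# Remove the ambient CO2 background, approximated by a low quantile.
co2_excess = in_plume["co2_ppm"] - ts["co2_ppm"].quantile(0.05)
slope, intercept = np.polyfit(in_plume["so2_ppm"], co2_excess, 1)
print(f"CO2/SO2 molar ratio ~ {slope:.1f}")
# A rising CO2/SO2 ratio is one of the precursory signals such a station
# could watch for ahead of renewed magma ascent.
```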
https://dco.tw.rpi.edu/index.php/feature/first-degassing-measurements-merapi-volcano-2010
In late 2020, a Chinese space capsule delivered fresh moon samples to Earth for the first time in about four decades, and these precious lunar rocks just revealed a new detail about our planet's glowing companion: its volcanoes were alive and active considerably longer than scientists thought. "All our experience tells us that the moon should be cold and dead 2 billion years ago. But it is not, and the question is, 'Why?'" said Alexander Nemchin, a professor of geology at Australia's Curtin University and author of the analysis published Thursday in the journal Science. Alongside an expansive international team of researchers, Nemchin discovered that some of the newly returned moon rocks contain lunar fragments from later in the moon's timeline. Dated at about 2 billion years old, these fragments are relatively young. But here's the kicker: those same pieces are also remnants of a volcanic eruption. Connecting the dots, the team members realized they were looking at solid confirmation that the lunar surface was alive pretty late in the game. "We need to dig deeper with this," Nemchin remarked. "We are highlighting that our current views need readjustment -- further research will tell how dramatic this readjustment should be."
Welcome back, lunar research
The saga began in December of last year, when China's Chang'e 5 mission sent a spacecraft to scrape the surface of the moon and collect a variety of rock and dust samples for Earth-based analysis. It returned with about 4 pounds (2 kilograms) of extraterrestrial material. "There was some need and drive to do this 50 years back," Nemchin explained. "Then, priorities changed and everybody moved to something else." But now, he says, "we have the moon back in the focus." He notes lunar research is important not only from an astronomy perspective, but also because any effort to travel to the moon -- or really, any space exploration -- tends to expedite technologies that ultimately benefit us on Earth. One example of such serendipitous tech comes from Australian physicists' research in the '90s. They developed a highly complex mathematical tool hoping to detect smeared signals of black holes that vanished in the cosmos. Unfortunately, they never found any -- but their invention paved the way for modern-day Wi-Fi.
Moon rock science
"Every new sample gives us a big boost in understanding what is happening, simply because we still have so few of them," Nemchin remarked. "Apollo samples have been worked on for the last 50 years and are still actively investigated." While analyzing the rocks brought back by Chang'e 5, Nemchin and fellow researchers first checked what types were present. In particular, they were after basalt fragments, which are correlated with volcanic activity. "We needed to get an idea about the chemical composition of the fragments to be able to compare [them] to the large basaltic field visible from the orbit," he said. "And, make sure [those] fragments represent this field of basalts and do not come from somewhere else." Then, the scientists confirmed the specific ages of the pieces of interest. Validating that these fragments are young was one of the main goals of the mission; that is how the team expected to prove its hypothesis that the moon had active volcanoes more recently than textbooks suggest. "All basalts we had before are older than 3 billion years," Nemchin said.
"We also had a few very young points determined from material ejected by very young impacts -- impact melts -- but nothing in between. Now we have a point right in the middle of the gap." Such age determinations are called crater counting, something the team hopes to continue doing in the future in order to attain the full array of rocks to map out each generation of the moon. Nemchin also notes that a few interesting chemical features were found in the basalt samples, including high iron content, which isn't present in any other retrieved pieces of the lunar surface. Further chemical research on the rocks, he says, will help answer new questions introduced by the team's novel findings, such as searching for the source of heat that led to lunar volcanic activity a couple of billion years ago. And at the end of the day, the Australian geologist emphasizes that "what is important for me in all this is that we managed to bring a large international group of people to work on the sample." "Somehow," he added, "In the current situation when international travel is still rather restricted, I had more interaction with different people than in the previous years when we could move around any way we liked."
Drafters work in a variety of professions. A drafter may create technical drawings from ideas, or simply clarify or make detailed drawings of parts of larger technical drawings. For example, a civil engineer may create a technical drawing for a highway bridge. From that drawing, a drafter may then create more specific technical drawings for various parts of the design for the construction crew building a part of the bridge. In this case, the engineer who designed the bridge may review and approve the drafter's drawing. Drafters may work for engineers, architects or various manufacturing industries. Carpenters For many smaller residential and commercial building projects, a carpenter may design the project without the help of another professional. For example, few homeowners will hire an architect to design a one-room addition for a home. The carpenter will create the technical drawing of the project for the homeowner's approval and as a way to estimate the materials needed for the project. A construction crew may also use the carpenter's technical drawing during construction of the less complex parts of the project if the carpenter is not present to supervise. Architects Architects design homes, buildings and related structures. An architect often focuses on the aesthetics and usability of a project while engineers focus more on its structural soundness; however, there is a lot of overlap between these two professions. Architects create technical drawings to plan where objects will go in a structure, to show the client and to direct those responsible for the project's construction. Architects may also create technical drawings of landscaping and related structures involved with a building project. Engineers Engineers work in a number of specialized fields. For example, civil engineers may design buildings and bridges while mechanical engineers design tools, machines and related items. Engineers create technical drawings for a number of purposes. Advanced computer programs use these technical drawings to test the strength of a project's design. Many engineering projects require the approval of plan designs by government oversight agencies, which require technical drawings of the project for review. Engineers may also create technical drawings when applying for patents on newly created products. Writer Bio Jay Motes is a writer who sold his first article in 1998. Motes has written for numerous print and online publications including "The Dollar Stretcher" and "WV Sportsman." He holds a Bachelor of Arts with a double major in history and political science from Fairmont State College in Fairmont, W.V.
https://careertrend.com/careers-that-involve-technical-drawing-13657886.html
Janine Harris, Partner Are your partnership properties being managed properly? In a recent High Court case, Procter v Procter and others EWHC 1202, the dispute related to a periodic tenancy granted by a landlord to three sibling tenants, who held the tenancy on trust for a partnership. The partnership was reduced from three individuals to two when one sibling, Suzie, retired. Suzie, as one of the tenants under the lease, served a notice to quit the tenancy. However, by doing so she was in breach of her duty as a trustee of the tenancy, because the notice was served against the wishes of the remaining two siblings in the partnership, who did not want to bring the tenancy to an end. The question was whether the notice to quit was valid. HH Judge Davis-White QC held that, without the terms or wording of the tenancy stating otherwise, one legal joint tenant could serve a valid notice to quit a tenancy without the other two tenants being involved in the process. This is in clear contrast with what is required when a tenancy is brought to an end in other ways: the joint tenants must act unanimously when a lease's break clause is exercised, a lease is surrendered, an option to renew the lease is exercised, or an application for relief from forfeiture is made. The Judge decided that a validly served notice to quit could not be withdrawn. However, the High Court did order rescission of the notice to quit (returning the parties to the position they would have been in if the notice had not been served), so the periodic tenancy would continue. Although rescission normally applies to contracts, the Court saw no reason why rescission could not be applied to a notice to quit in an appropriate case. It is also worth noting that the landlord was aware that the notice was served in breach of duty. You can read the full judgment here. What can partnerships do to protect themselves? - Ensure that the partnership leases and properties are transferred out of the old partners' names into those of the new partners at the earliest opportunity - Ensure that the partnership deed containing the duties and roles of the partners is kept up to date - Review the partnership property portfolio regularly to ensure that no changes need to be made, that all the interests in property are put in writing, and that key dates such as end-of-lease-term dates and break dates are diarised and monitored. Please get in touch if you need any advice regarding partnership leases and partnership property portfolios.
https://www.incegd.com/en/news-insights/private-client-taxprivate-wealth-are-your-partnership-properties-being-managed
Alan Turing's 110th Birthday Today marks the 110th birthday of Alan Turing, pioneer of modern computing. Alan Turing studied at Cambridge before becoming a leading figure at Bletchley Park – responsible for breaking enemy codes during the Second World War. It's often said that his work at Bletchley shortened the war by at least two years – saving millions of lives. Discover more about Bletchley Park here. In the face of adversity, Turing achieved great progress in innovation. In his own words, "We can only see a short distance ahead, but we can see a lot needs to be done." Our Milton Keynes offices pay tribute to brilliant people who've made a positive difference, and one of our meeting spaces is named the Turing Room. Did you spot the puzzle in this post and need a hand solving it? 1010111111110010110011000 is the date of Alan Turing's birthday (23061912) written in binary code.
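For readers who want to check the puzzle themselves, a quick sketch in Python (any language with a built-in base-2 conversion would do the same):

```python
# Verify the binary puzzle: 23061912 (Turing's birthday, 23/06/1912) in base 2.
birthday = 23061912

# bin() returns the binary string with a '0b' prefix, which we strip.
binary = bin(birthday)[2:]
print(binary)                      # 1010111111110010110011000
assert int(binary, 2) == birthday  # round-trip check back to decimal
```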
https://eastwestrail.co.uk/latest-news/project-updates/alan-turnings-110th-birthday
Nusa Dua is a peninsula in South Bali, well known as an enclave of high-end hotels. Understand The place name Nusa Dua can be used in two ways: either it can refer to the entire eastern side of the Bukit Peninsula at the southern tip of Bali, or it can refer to the purpose-built, safe and rather sterile tourist enclave (Kawasan Pariwisata, quite literally Tourism District) at the southeast side of this peninsula. This article covers everything in the Nusa Dua enclave plus the Tanjung Benoa peninsula and a few points west of the enclave to the village of Sawangan. Everything on the Bukit Peninsula to the west of Sawangan is covered by the Uluwatu article. Nusa Dua hosts some of the best five-star hotels in Bali and is home to the island's most popular golf course and a convention centre. Nusa Dua gets a lot of bad press amongst travelers as it is so artificial and sanitised. That does not change the fact, though, that the beaches here are glorious - white sand, deep, long and safe for swimming. The public beach at Geger is the best to head to if you are not staying at Nusa Dua. This is also home to one of the best museums in Bali. The fact that it is nearly always empty is a testament that most visitors who stay here, in the least Balinese part of the island, are, unsurprisingly, rather uninterested in learning much about Bali. The Nusa Dua enclave has three manned gates and everyone entering is subject to a security search. This can have a slightly claustrophobic effect, and only contributes further to the impression that you are in an artificial location. Get in Nusa Dua is located 40 km south of Denpasar, the provincial capital of Bali. Access is easy from the Kuta area (20-30 minutes) and Jimbaran (15 minutes) on the main southern route called Jalan Bypass Ngurah Rai, which becomes Jalan Bypass Nusa Dua as it approaches the enclave. The international airport is about 20 to 30 minutes away by car, and a pre-paid taxi fare from there will cost between Rp 95,000 and 110,000. If you are staying here, then your hotel will no doubt arrange to pick you up at the airport. Public transport is far from regular in the area, but some bemos from Tegal terminal in Denpasar do ply the main bypass. See - 1 Pantai Geger (Geger Beach). This is the public beach in Nusa Dua. This splendid white sand beach at the western edge of the enclave retains lots of the character that is missing in the sanitised Nusa Dua resort zone. Generally safe for swimming, with some beachside warungs. The restaurant, beach beds and massage ladies all work for the local cooperative; by supporting them you support the locals. Geger has a cooling breeze as it is one of the few beaches facing east, and because of the reef far out from the beach it has some of the warmest water temperatures in Bali. Head westwards out of the Nusa Dua enclave, passing the golf course and then the St Regis Hotel (on your left). Shortly after the St Regis, take the first turning left towards the beach and proceed to the Pantai Geger car park. - 2 Pasifika Museum, Blok P, BTDC (near Bali Collection), ☎ +62 361 774935, e-mail: [email protected]. 10:00-18:00 daily. A truly under-appreciated and little-known attraction. It is a splendid museum, and is highly recommended for anyone interested in the art of Bali, Southeast Asia and the South Pacific region. Look for the exhibitions focused on European artists who made Bali their home, as well as renowned local painters.
The Indochinese exhibition is impressive, as are the displays of Polynesian artefacts. The museum features artworks by well-known artists from around the world such as Paul Gauguin, Theo Meier, Le Mayeur, Rudolf Bonnet, Hendrik Paulides and Emilio Ambron. Rp 70,000. - 3 Serangan Island (Turtle Island). Boats are available from Nusa Dua and Tanjung Benoa. These are usually glass-bottomed, allowing observation of marine life from within the boat. The island can also be accessed via a bridge. As the name suggests, Serangan is a turtle conservation area. The local people keep turtle eggs in traditional conservation houses until they hatch, and the youngsters are then released from local beaches. Besides turtles, they also have reptiles, birds, snakes and bats. US$25-30. Many companies in Nusa Dua offer water sports activities (banana boat, parasailing, jetski, diving, flying fish, etc.). You can book directly on the beach at Nusa Dua or Tanjung Benoa, or have your hotel organise it for you. All the operators work together to ensure that there is very little (if any) price difference. Expect to pay about US$25-30 for most activities. Do - Nusa Dua Golf and Country Club, Kawasan Wisata, ☎ +62 361 771791, e-mail: [email protected]. Tee times 06:00-16:00 daily. One of three top-notch golf courses in Bali, and perhaps the most popular of them all due to its convenient location. Any hotel will be able to arrange a round for you - ask about packages on offer as these can save you a lot of money. Booking a tee time is very important on this very busy course. Nusa Dua is home to many good quality spas. If you are staying at a luxury resort, then you will certainly have access to in-house spa and treatment facilities. - Alam Alang Bali Spa, Jln Mahardika No10X, Mumbul, ☎ +62 361 771799, +62 361 7447453, fax: +62 361 771799. 11:00-21:00 daily. Facilities: Balinese creambath, stone massage, Balinese mandi lulur, Ayurvedic, facial, oil head massage. - Spa Sekar Jagat, Jl Bypass, ☎ +62 361 770210. 09:00-22:00 daily. Authentic Balinese massage and spa treatments. - [dead link] Tamara Spa Bali, Jl Bypass Ngurah Rai, No 999A, Jimbaran, ☎ +62 361 9174533, e-mail: [email protected]. 09:00-17:00 daily. Body care salon offering body scrubs, beauty treatments and various spa packages. A large swathe of the Nusa Dua and Tanjung Benoa beachfront is connected via a nice walking path, and morning walks here are especially recommended. The footpath runs from just in front of the Ayodya Resort in the south for about 7 km north to the Grand Mirage on the Tanjung Benoa spit. It passes two obvious spits, both of which host temples. - Thalasso Bali Spa, Jl. Pratama 74, Tanjung Benoa, P.O. Box 43, ☎ +62 361 773883, e-mail: [email protected]. 09:00-22:00 daily. In front of the beach, it offers a wide range of Thalasso and spa treatments. Buy - 1 Bali Collection. An open-air shopping area with many clothing and swimwear shops and souvenir shops. At the center is COCO Supermarket, with a wide range of local and foreign food and drinks, including fruit and alcohol. Eat There are not too many quality restaurants in Nusa Dua outside of the luxury hotels. Bali Collection has a dozen restaurants with reasonable prices. The main Jalan By Pass, which connects Nusa Dua to Jimbaran, the airport and Kuta, is home to a large number of Japanese and other Asian restaurants aimed at tour groups, but generally these are best avoided. - Bumbu Bali, Jl Pratama, Tanjung Benoa, ☎ +62 361 774502, e-mail: [email protected]. Daily 11:00-16:00, 18:00 - last order.
Bali's first authentic Balinese five-star restaurant, with food prepared and served in a traditional manner. There are now two Bumbu Bali restaurants, named Bumbu Bali One and Bumbu Bali Two respectively, about 500 metres from each other. The menu of Two is similar to One with the addition of meat and prawn skewers. - Nusa Dua Beach Grill, Geger Beach (just west of the Nusa Dua enclave, reached via a small turning south off the main road to the Nikko Hotel). A great, long-established dining option on a dreamy white sand beach. One of the few quality options for dining outside of hotels in the Nusa Dua area. - Bawang Merah, Jl By Pass Ngurah Rai Nusa Dua, Mumbul, ☎ +62 361 7453540. 12:00-23:00 daily. Balinese speciality restaurant. Drink Nusa Dua does not have much of a nightlife. It is known more for its luxurious 5-star resorts, where cocktails on the beach are the go-to option. Sleep This guide uses the following price ranges for a standard double room:
| Budget | Under US$25 |
| Mid-range | US$25 to 125 |
| Splurge | Over US$125 |
There is little accommodation in Nusa Dua outside of the luxury price range, although many of the large resorts do offer very substantial reductions in the low season; always check. Budget - Rasa Sayang Beach Inn, Jl Pratama, Tanjung Benoa, ☎ +62 361 771643, e-mail: [email protected]. A simple but well-located hotel on the main Tanjung Benoa strip. From about Rp 200,000. - Manuh Home Stay, Jalan Gunung Payung 10, Desa Kutuh (turn right towards the Bukit Peninsula at the Hardys traffic light), ☎ +6285338491991, e-mail: [email protected]. Nice and tranquil, close to Pandawa Beach. Wi-Fi and hot showers are included. Rp 150,000. Mid-range - Bali Tropic Resort and Spa, Jl Pratama 34a, ☎ +62 361 772130, e-mail: [email protected]. A 3/4-star beachfront resort on the Tanjung Benoa strip. From Rp 950,000. - Goodway Hotel, Jl Dalem Tarukan No 7, Taman Mumbul, ☎ +62 361 773808, e-mail: [email protected]. The resort offers a spa, large swimming pool, karaoke, jacuzzi, beauty parlor, shops and fitness area. It also has a sunken bar in the swimming pool, a restaurant and a bar. It is a five-minute drive north of the Nusa Dua enclave. From Rp 600,000. - Tjendana Villas Nusa Dua, Jl. Gedong Sari, Mumbul Hill, ☎ +62 8737382, e-mail: [email protected]. Each private villa has a terrace with a dining area and semi-open kitchen overlooking the swimming pool. The villas offer complimentary in-villa breakfast. AC, Wi-Fi, flat-screen TV, DVD player and a safe are among the amenities, while the private bathrooms come with a bathtub, shower and free toiletries. Airport transfers and bicycle rental can be arranged by staff, as well as access to a spa and guidance with tours and activities. From US$150. Splurge There are many deluxe hotels in the area and more are constantly being constructed. - Aman Nusa, ☎ +62 361 772333, e-mail: [email protected]. A high-class resort, and one of the best luxury hotels in Bali. Relatively few visitors will want to spend the money required to stay here, but if you want to splurge on a meal at a super luxury resort, then you could do a lot worse. The restaurant serves excellent food and has fabulous views across to Nusa Penida. From US$750. - Ayodya Resort Bali (formerly Bali Hilton). Set amongst lush tropical gardens with a large lagoon-style pool and world-class golf course. Offers a wide selection of restaurants and activities. About US$150. - The Bale, Jl Raya Nusa Dua Selatan (on the western side of Nusa Dua), ☎ +62 361 775 111, e-mail: [email protected].
The Balé is a boutique five-star hotel with 29 pavilions, each with its own private swimming pool and modern, fashionable interiors. The notably good restaurant is open to non-residents. From US$600. - [dead link] The Bali Khama, Jl Pratama, Tanjung Benoa, ☎ +62 361 774912. The Bali Khama is on the Tanjung Benoa spit. Several different room types from garden suites to self-contained villas. All rooms are air-conditioned and have a mini-bar, deck/balcony, hot water shower with bath, safe and high-speed internet access. From US$200. - Conrad Bali, Jl Pratama 168, Tanjung Benoa, ☎ +62 36 177 8788, e-mail: [email protected]. The best luxury hotel on the Tanjung Benoa spit. This is a huge property with 313 rooms and the largest swimming pool in Bali. All the facilities you would expect from a 5-star resort. Dining options here are highly rated. From about US$250. - 1 Grand Hyatt Bali, Kawasan Wisata Nusa Dua BTDC, ☎ +62 361 77 1234, e-mail: [email protected]. Check-in: 14:00, check-out: 12:00. This hotel was beginning to show its age, but a recent refurbishment has returned it to top form. Has 5 swimming pools, a sports centre and a spa. From US$160. - Heavenly Residence, Jl Gunung Payung, Sawangan, ☎ +62 361 7801166, e-mail: [email protected]. Waterfront private luxury 3- and 4-bedroom villas with 24-hour butler service. Five minutes' drive west of Nusa Dua, set on a spectacular cliff-front. Good for people who want total privacy and have no budgetary constraints. From US$840. - Kayumanis Nusa Dua Private Villa & Spa, BTDC Area, ☎ +62 361 770 777, e-mail: [email protected]. 20 contemporary villas with private pool, fully equipped gourmet kitchen and 24-hour butler service. - The Laguna, a Luxury Collection Resort & Spa, ☎ +62 361 771327, fax: +62 361 771326, e-mail: [email protected]. Rooms and suites set in a tropical garden landscape. Direct beach access, vast swimming lagoons, spa and butler service. From US$350. - Melia Bali (The Garden Villas), ☎ +62 361 771510, e-mail: [email protected]. Excellent spa facilities, watersport activities, newly equipped fitness centre, jogging track, table tennis, Balinese cabaret shows, shopping arcade and large lagoon-style swimming pool. Also offers private villas with their own pool. - Nikko Bali Resort, Jl Raya Nusa Dua Selatan, ☎ +62 361 773377, e-mail: [email protected]. Five-star resort about five minutes west of the main Nusa Dua enclave. Offers everything from jungle-style lagoon pools with a water slide, a beautiful spa set among its gardens, camel rides and even a ropes course for children. Has its own private beach with beachside dining as well. Very popular with Japanese tour parties. From US$150. - Novotel Nusa Dua, ☎ +62 361 8480555, e-mail: [email protected]. This apartment-style hotel is surrounded by the Nusa Dua Golf Course. Includes a lagoon-style pool, beach club, kids' club, spa and a restaurant. They take environmental considerations seriously, and are Green Globe certified. From US$106. - Nusa Dua Beach Hotel, ☎ +62 361 771210, e-mail: [email protected]. Five-star resort which is a bit old and tired but still has good facilities. Several swimming pools, spa, kids' club, and cultural, recreational and water sport activities available. From US$140. - Ocean Blue Hotel Bali, Jl Raya Kampial, ☎ +62 361 776700, e-mail: [email protected]. Check-in: 14:00, check-out: 12:00.
Hotel set up around a wedding chapel and a spa, about five minutes out of Nusa Dua proper. - Sekar Nusa Villas, Jl Raya Nusa Dua Selatan, Sawangan, ☎ +62 361 773333, e-mail: [email protected]. Check-in: 14:00, check-out: 12:00. Sekar Nusa is a low-key luxury resort. All of the one- and two-bedroom villas are spacious, and are furnished with antique wood furniture. Suited to golfers and honeymooners, and as a wedding venue. From US$225. - Sunset Villa, Tanjung Benoa (turn left at Bumbu Bali Two), ☎ +62 61299539065. A waterfront villa with private jetty and boat in a quiet setting, only minutes from shops and hotels. US$575. - St Regis Bali Resort, ☎ +62 361 8478 111. Private villas and regular rooms available. A grand and stylish hotel. From US$475. - Westin Resort, ☎ +62 361 771906, e-mail: [email protected]. A resort with a children's club equipped with PlayStation consoles, a mini playground and musical instruments. The rooms are big, there is an in-house spa, and there are good Japanese and Western dining options. From US$150. - 2 Grand Mirage Resort and Thalasso Spa, Jl. Pratama 74, Tanjung Benoa, ☎ +62 361 771888, fax: +62 361 772148, e-mail: [email protected]. A five-star beachfront resort offering stay packages and complete all-inclusive options. From US$135.
https://en.wikivoyage.org/wiki/Nusa_Dua
When graduate student Rana Damra joined an interdisciplinary student research project, she couldn't have imagined her team would gain national recognition for their work. But they did, and it resulted in an invitation to present the research at the National Center for Interprofessional Practice and Education 2022 NEXUS Summit, held in Minneapolis. The team consisted of Damra, a second-year master's student in Speech-Language Pathology, and second-year medical students Christian Hecht and Scott Perkins. During their peer-reviewed presentation, "Engagement Plan for Increasing Representation in Biomedical Research: A Community Collaboration Between Case Western Reserve University Health Science Graduate Students, Cleveland Clinic BioRepository and the Fairfax Neighborhood of Cleveland, Ohio," they shared how the research focused on enhancing diverse participation in research, particularly among historically underrepresented populations. The research set out to evaluate residents' preferred engagement methods in Cleveland's Fairfax neighborhood. This work is now helping the BioRepository develop tactics that cultivate trust and open communication with the residents of Fairfax. The team hopes the process will serve as a model for other communities as well.
https://artsci.case.edu/news/rana-damra-participates-in-project-that-results-in-national-recognition/
The Administrative Assistant is a critical link between the President and both internal and external customers of Pac-12 Networks. They must operate with a high degree of polish and proficiency. The Administrative Assistant interacts with managers and employees alike, as well as sponsors, sports talent, university administrators, service vendors, and other members of the public. Responsibilities: • Coordinate communications for the President with the Senior Management team, the organization as a whole, and external stakeholders (sponsors and university presidents). • Manage and maintain the President's schedule and calendar of events, set appointments, and coordinate meetings across multiple days, weeks, and groups. • Prepare presentations and related materials when needed. • Manage complex travel arrangements for the President. • Ensure that incoming and outgoing communication between the President and stakeholders is processed in a professional and timely manner. • Maintain and document all communication with external resources. • Plan, coordinate and execute a variety of senior management and company-sponsored events. • Manage the delegate and VIP ticketing, itineraries, and travel for championship and tournament events. • Serve the President's Office with the utmost professionalism and confidentiality. • Perform electronic filing and scanning, and maintain records on multiple devices (smartphone, iPad, laptop, and desktop). • Other duties as assigned by the President. Requirements: • Bachelor's degree and a minimum of 5 years of experience as an Executive Assistant. • Flexibility to work overtime and weekends when necessary to adapt to changing priorities and responsibilities. • Proven ability to handle confidential information with discretion, be adaptable to various competing demands in a fast-paced environment, and demonstrate the highest level of customer/client service and response. • Operates with a sense of urgency and the ability to make timely and sound decisions under pressure; solves problems quickly. • Forward-looking thinker who actively seeks opportunities and proposes solutions. • General understanding of collegiate sports. • Excellent verbal, analytical, organizational, and written communication skills. • Effective interpersonal skills; good judgment and the ability to interact with different levels of management and teams. • Experience working in a culturally diverse organization and supporting the values held by our unique employees, clients, sponsors, university faculty, and fans. • Proficiency with Google Apps (Gmail, Google Calendar, Google Drive), MS Word, PowerPoint, and Excel. • Some travel may be required.
https://www.entertainmentcareers.net/pac-12-networks/administrative-assistant/job/361232/
With human activities altering the Earth's natural environments at an accelerating rate, it is important to understand how Earth's living organisms will respond to the ensuing environmental changes. Plant species might be particularly susceptible to environmental changes, as they lack the option of migrating to the environments to which they are best adapted. Studies of non-perennial traits, such as leaves, phenological characters, and physiological rates, have helped to reveal how the Earth's vegetation is responding to the most recent changes in climatic conditions. However, it is difficult to extrapolate future climatic impacts from present responses, and it is also challenging to disentangle responses caused by anthropogenic climatic changes from those that would also occur under natural conditions. To address these questions, a longer record of how vegetation has changed in response to climatic conditions is needed: such a long-term record can be obtained by studying tree rings, and dendrochronology (from the Greek dendron = tree, chronos = time, and logos = knowledge) is a well-established science that can be used to infer growth rates under different environmental conditions. Guided by dendrochronological data for two tree species in Brazil, I will aim to incorporate temperature- and precipitation-dependence in an established model of plant growth developed by a former YSSP participant (Falster et al. 2010). The model will then be used to study how salient aggregate properties of vegetation, such as net primary productivity and total biomass, are expected to be affected by future changes in temperature and precipitation.
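The abstract does not spell out the functional forms to be used, but as a rough sketch of what temperature- and precipitation-dependence in a growth model can look like, one might scale a base growth rate by two climate responses. Everything below (the Gaussian temperature response, the saturating precipitation response, and all parameter values) is illustrative, not taken from Falster et al. (2010):

```python
import math

def growth_modifier(temp_c: float, precip_mm: float,
                    t_opt: float = 25.0, t_width: float = 8.0,
                    p_half: float = 1200.0) -> float:
    """Hypothetical climate modifier in [0, 1] for a plant-growth model.

    temp_c:    mean temperature (deg C); Gaussian response peaking at t_opt
    precip_mm: annual precipitation (mm); saturating (Michaelis-Menten) response
    All parameter values are placeholders, not fitted to any data.
    """
    temp_term = math.exp(-((temp_c - t_opt) / t_width) ** 2)
    precip_term = precip_mm / (precip_mm + p_half)
    return temp_term * precip_term

# A base growth rate from the underlying model would then be scaled like:
base_rate = 1.0  # placeholder units of biomass per year
print(base_rate * growth_modifier(22.0, 1500.0))
```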
https://iiasa.ac.at/web/home/research/researchPrograms/EvolutionandEcology/AbouttheProgram/Gustavo-Burin-Ferreira.en.html
Plan Your AD Bridge Deployment The key to a successful deployment is planning. Before you begin deploying AD Bridge Enterprise in an enterprise environment, develop a plan that addresses at least the following aspects of installation and deployment: - Review the AD Bridge Enterprise Release Notes to ensure your environment meets the deployment requirements. - Set up a test environment. We recommend that you first deploy AD Bridge Enterprise in a test environment so that you can identify and resolve any issues specific to your mixed network before you put the system into production. - Determine whether to use AD Bridge Enterprise in Directory Integration, Schemaless mode, or ID Range. When you configure your domain with the AD Bridge Enterprise domain configuration wizard, you must choose the mode to use. For more information on Directory Integration, Schemaless mode, and ID Range, please see Storage Modes in Active Directory. Back up Active Directory before you run the AD Bridge Enterprise domain configuration wizard. - Decide whether to configure AD Bridge Enterprise to manage a single forest or multiple forests. If you manage multiple forests, the UID-GID range assigned to a forest should not overlap with the range of another forest. - Determine how you will migrate Linux or Unix users to Active Directory. For example, if you are using NIS, decide whether you will migrate those accounts to Active Directory and whether you will migrate local accounts and then delete them or leave them. It is usually recommended that you delete interactive local accounts other than the root account. - Identify the structure of the organizational units or cell topology that you will need, including the UID-GID ranges. If you have multiple NIS servers in place, your users may have different UID-GID maps in each NIS domain. You may want to eliminate the NIS servers but retain the NIS mapping information in Active Directory. To do so, you can use AD Bridge Cells. - Determine whether you will use aliasing. If you plan to use aliasing, you must associate users with a specific AD Bridge cell; you cannot use the default cell. ID Range may not be used with cells.
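The "ranges must not overlap" requirement for multiple forests is easy to check mechanically before deployment. The sketch below is a hypothetical helper, not an AD Bridge Enterprise tool; it simply flags forests whose assigned UID-GID ranges collide:

```python
def find_overlaps(ranges):
    """Given {forest_name: (low, high)} inclusive UID-GID ranges,
    return pairs of forests whose ranges overlap."""
    items = sorted(ranges.items(), key=lambda kv: kv[1][0])
    overlaps = []
    cur_name, cur_hi = None, -1
    for name, (lo, hi) in items:
        # Sorted by start: an overlap exists if this range begins
        # before the highest end seen so far.
        if lo <= cur_hi:
            overlaps.append((cur_name, name))
        if hi > cur_hi:
            cur_name, cur_hi = name, hi
    return overlaps

# Example: the second forest's range collides with the first.
print(find_overlaps({
    "forest-a": (100000, 199999),
    "forest-b": (150000, 249999),  # overlaps forest-a
    "forest-c": (300000, 399999),
}))
```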
https://www.beyondtrust.com/docs/ad-bridge/getting-started/installation/plan-deploy.htm
"Arrived next day at the time I requested." 5/5 - GH, Preston, United Kingdom. Fitted to: DX Compact. "Bought to replace an identical battery that had lasted 11 years in difficult conditions on a sailing yacht. Enough said!" 5/5 - JS, Falmouth, United Kingdom. "Five-star rating for the staff at Tayna. The only downside was an issue with delivery to Europe, which Lawrence helped resolve." 5/5 - MR, Birstall, United Kingdom. Fitted to: Excel XS. Description Formerly known as A512C/56 (A terminal type). Technical Data Voltage: 12V. Capacity at 20hr rate: 56Ah. Maximum load: 400A. Maximum current over 5 seconds: 1500A. Terminals: A. Dimensions (mm): 278 long x 175 wide x 190 high (incl. terminals). Questions and Answers There are currently no customer questions; please use the "Ask A Question" link above to ask your question.
https://www.tayna.co.uk/mobility-batteries/sonnenschein/gf12051y1/
Space exploration mediates how societies envision their future, and space exploration would not be possible without media. More concretely, the history of space exploration is closely tied to Cold War military and economic imperatives. Today, established space agencies are struggling with national funding, numerous countries are starting ambitious space programs, and private companies and individuals are building innovative space plans and technologies. The current socio-political configuration offers thinkers and practitioners new opportunities by which to intervene in how we envision and inhabit the cosmos. Media Theory, Media Fiction, and Infrastructures Beyond the Earth is a two-day workshop, May 7-8, 2020, at the University of Toronto Mississauga, that will investigate space exploration and inhabitation from the point of view of media studies. Because media infrastructures are the condition of possibility for outer space activity, media scholars and practitioners are uniquely equipped to critically engage with the debates and issues surrounding the anthropological, social and political implications of space exploration. Outer space is a field of activity conditioned by the tools, artifacts, devices, and dispositives of media studies, revealing that humanity's relationship with the cosmos is a mediated one: we rely on satellites in Geostationary Earth Orbit (GEO) for communication as well as for health and environmental monitoring and planning; on the Global Positioning System (GPS) for navigation; on space travel apparatuses for the development of methods of storage and transportation of information, bodies, and goods; on telecommunication devices for interplanetary transmission; on the tools of media archeology for data sampling; on the tools of media geology for mining and extraction; and on the tools of the information sciences for data processing and visualization. Outer space is a site of both potential inhabitation and politics, in which medium design plays a crucial role. Today, as we face growing concerns about the future of human survival on Earth, we have to rethink our relationship to technology, land, population, property, resource extraction, and environmental management. Even more critically, as space exploration is being envisioned and imagined as a continuation of older logics – military, colonial, capitalist, sexist, classist, racist, ableist – we have to ask what kinds of theory, analysis, methods, techniques, and ethics are needed to critically inhabit the cosmos. This workshop will bring together media, information and communication scholars and students working on outer space and communications infrastructure, along with academics, writers, and thinkers researching science and science fiction, Afrofuturism, and Indigenous futures. It will seek to formulate propositions on how media scholars and practitioners can provide noteworthy interventions into the current and future debates around space exploration and inhabitation. We especially welcome proposals from graduate students that propose innovative questions addressing, but not limited to, the following themes: - Political economy of New Space companies - Media infrastructure in outer space, e.g. satellite imaging - Outer space and waste management - Architecture in outer space, e.g. design of space colonies - Media geology, land, and extractivism of outer space, e.g.
asteroid mining - Outer space and multimedia ethnography - Outer space and the future of wearable technologies - Media archeology and planetary histories - Outer space, journalism, and media representations - Militarization of outer space - Media ecologies, ecosystems and the ecological colonization of space - Outer space and the futures of marginalized groups - Outer space, robots, and transhumanism - Extraterrestrial and artificial intelligence - Media, space and war We foresee this event as a collective think tank to reflect on the contribution of media scholars and practitioners to the future of space exploration. We wish to create the conditions for constructive dialogue and collective enunciation. We are thus especially keen on proposals that emerge from struggles of thought and work in progress, and which formulate questions and invite dialogue rather than offering fully articulated propositions. Graduate students and media practitioners are welcome to submit (1) an abstract (max. 250 words) of their planned contribution; (2) a question they would like to see addressed at the workshop; and (3) a short biographical profile (max. 100 words) to [email protected] by February 15. Limited funds to aid graduate student travel and accommodation are available. Please indicate in your submission email if you require funding for travel or accommodation! Confirmed speakers include: Kathryn Denning (York University), Nalo Hopkinson (sci-fi author, Cal State Riverside), Lisa Parks (MIT), Lisa Ruth Rand (Science History Institute), Chris Russill (Carleton University), Fred Scharmen (Morgan State University), Gerry William (sci-fi author), and Karen Lord (author of speculative fiction and sociologist of religion). Organizing Committee:
https://cfplist.com/CFP/25378
Western Illinois University's Office of Public Safety (OPS) ensures the safety and security of students, faculty, staff and campus visitors, 24 hours a day, seven days a week. OPS works with others across the WIU campus, and within the local community, to provide a safe environment and to ensure a campus that is pleasant and secure. We are committed to the prevention of crime; the protection of life and property; the preservation of peace, order and safety; the enforcement of laws and University policies; quality parking services and motorist assists; and the safeguarding of constitutional rights. It is our belief that every individual should be treated with respect, fairness, and compassion. We strive to maintain public trust and confidence by holding ourselves to the highest levels of integrity and professional standards. We partner with our WIU community, along with other area law enforcement agencies, to provide a safe environment. The carrying of concealed weapons is not allowed on Western Illinois University property. For questions, refer to the Concealed Carry Policy or contact the Office of Public Safety. If you have concerns or questions, or need to report a crime or suspicious behavior, please call OPS immediately at (309) 298-1949 or visit Mowbray Hall at any time. In addition, reports of sexual misconduct may be made to the Title IX coordinator at [email protected] or (309) 298-1977.
http://www.wiu.edu/vpas/public_safety/index.php
A few weeks ago, Chief Judge Janet DiFiore released a message on the New York State Unified Court System's website where she described recent developments pertaining to the courts and justice system. Judge DiFiore's announcement provided the legal community with updates on the latest COVID developments and a report on the virtual courts' productivity. More importantly, she shared that the NY State Unified Court System is publicly issuing the "Virtual Bench Trial Protocols and Procedures." Such guidelines are significant as they can be tailored and used as a tool by New York courts in conducting fair and efficient virtual bench trials. Judge DiFiore explained that the new Protocols and Procedures educate participants about what to expect during a virtual bench trial. Specifically, these guidelines address key issues such as: - Proper decorum, - Safeguarding the integrity of the proceedings, - Handling and presenting testimony, and - Conducting sidebars. Judges and lawyers across the state will now have easy access to a source of information that will help them navigate the "virtual" technicalities that may arise during proceedings. The NY State Unified Court System also included a separate section within the Protocols and Procedures dedicated to a "Proposed Stipulation and Order." This proposed stipulation enables the parties to agree on various aspects of the trial before it commences. In her message, Judge DiFiore emphasized that the court encourages judges, lawyers, and bar associations to adopt these guidelines and distribute them as widely as possible. She stated that the protocols will serve as a valuable resource for the future of the court system. It is apparent that this new development will be crucial as the virtual court system continues to evolve. These guidelines are easily accessible and can be found on the NY State Unified Court's website under "Latest News" and "What's New." The information contained in this blog is provided for informational purposes only. This information should not be construed as legal advice on any subject matter. You should not act or refrain from acting on the basis of any content included in this blog without seeking legal or other professional advice. Catlin Larke APPELLATE INNOVATIONS 3 Barker Avenue, 2nd Floor White Plains, NY 10601 Phone: (914) 948-2240
https://appellateinnovations.com/2021/03/11/virtual-bench-trial-guidelines-released-by-the-ny-state-unified-court-system/
Alaska's oil patch is a good bet for explorers, international consultant Wood Mackenzie Ltd. said in a statement Feb. 16. The Legislative Budget and Audit Committee of the Alaska Legislature purchased the report in January in an effort to give lawmakers a better understanding of how Alaska stacks up globally as an oil and gas region. Wood Mackenzie said the state also ranked in the top half of oil regions in terms of commercial success rate (18 percent) and reserves discovered (918 million barrels of oil equivalent) during the study period 1994-2003. These results, and Alaska's ranking position in terms of exploration, are comparable to the results of a similar study conducted in 2002, the firm said. However, previously disclosed findings from the 2004 study related to profitability are not directly comparable to results of the earlier study, Wood Mackenzie said. "The new study includes an analysis of the economics of discoveries made during the study period but on the basis of client feedback does not repeat an assessment of the economics of remaining production from older fields that were a feature of the 2002 study. As a result, a direct comparison of some of the study results is not possible," the company said in a statement. Profitability measures such as full-cycle net present value, for example, are not directly comparable with the value of all remaining production reported in the firm's 2002 study. Wood Mackenzie said Alaska ranked in the top quartile in terms of post-take development and full-cycle net present value per boe (US$2.14/boe under a base price of $22) and in the top third in terms of absolute full-cycle value created (US$1.97 billion under the base price). "The 2002 study results were dominated by Prudhoe Bay and other older fields, with much longer production profiles (particularly in Prudhoe Bay's case, as a result of the substantial gas reserves that are yet to be developed)," Wood Mackenzie said. The firm also said Alaska has relatively high field costs (capital and operating), ranking 52nd of the 58 areas that made discoveries between 1994 and 2003, with a weighted average total unit field cost of US$9.95/boe. This compares to the 2002 report results for fields developed in 1995 or later, where Alaska ranked last of 60. Government take in Alaska in both studies is calculated as between 55 percent and 72 percent of the pre-take net present value using a 10 percent discount rate, depending on the basis used (i.e. development or full cycle, field life or remaining), and generally ranks in the top half from a company perspective. The 2004 report's price sensitivity analysis shows that Alaska's government take decreases (in percentage terms) as prices increase.
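To make the metrics concrete: net present value (NPV) discounts each year's cash flow by (1 + r)^t, and government take can be read as the share of pre-take NPV removed by royalties and taxes. The sketch below uses entirely invented cash flows, chosen only so the result lands inside the 55-72 percent band the report cites; it illustrates the arithmetic, not Wood Mackenzie's model:

```python
def npv(cash_flows, rate=0.10):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Illustrative pre-take field cash flows in $MM (capex up front, revenue later).
pre_take = [-500, -300, 250, 400, 400, 350, 300, 250]
# Hypothetical post-take flows after royalties and taxes.
post_take = [-500, -300, 180, 290, 290, 260, 220, 185]

pre, post = npv(pre_take), npv(post_take)
take_pct = 100 * (pre - post) / pre
print(f"pre-take NPV10: {pre:.0f}, post-take NPV10: {post:.0f}, "
      f"government take: {take_pct:.0f}%")   # roughly 67% here
```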
http://www.petroleumnews.com/nbbigarchpop/11-17.html
Air Products gases, typically provided in gaseous and liquid form, enable customers in a wide range of industries to improve their environmental performance, product quality, and productivity. Our experienced applications teams across the globe can use their industry and application knowledge to provide you with a compressed or liquid carbon dioxide supply and technology solution to meet your unique needs. Valued for its reactive and protective properties, carbon dioxide is used by many industries such as electronics, foods, glass, chemicals and refining, which can benefit from its unique properties to improve quality, optimize performance and reduce costs. It is useful as a gas for its inert properties, and as a liquid for cooling and freezing. Virtually any industry can benefit from its unique properties to improve yields, optimize performance and make operations safer.
https://www.airproducts.com.hk/industries/power
In a recent study published in Eurosurveillance, researchers investigated Acinetobacter species bloodstream infection (BSI) case counts from a subset of laboratory data continuously reported during the initial two years of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections, from 2020 to 2021. Studies have reported poor outcomes of Acinetobacter BSIs, particularly among ICU (intensive care unit)-admitted individuals. The bacterial organism has intrinsic resistance to several antimicrobial agents, and acquired resistance mechanisms may further complicate treatment regimens among vulnerable individuals. Recently published EARS-Net (European Antimicrobial Resistance Surveillance Network) data showed a significant rise in Acinetobacter BSIs among EU/EEA (European Union/European Economic Area) nations between 2017 and 2021. A major fraction of the increase occurred in the 2020 to 2021 period, the initial years of the SARS-CoV-2 pandemic. About the study In the present study, researchers evaluated the increase in Acinetobacter BSIs during the initial two years of coronavirus disease 2019 (COVID-19), between 2020 and 2021. The data analyzed in the present study originated from qualitative routine AST (antimicrobial susceptibility testing) reports of blood samples obtained by local laboratories in the national networks of EU/EEA nations. The laboratory testing results documented yearly by national centers to the ECDC (European Centre for Disease Prevention and Control), following the EARS-Net documentation protocols, were analyzed. Only the initial isolate of every individual per year per bacterium was included in the EARS-Net data. All European Union nations, Norway and Iceland submitted information to the EARS-Net system each year between 2017 and 2021; however, for the present analysis, the dataset was limited to Acinetobacter BSI data from laboratories (255 out of 826 clinical laboratories) that documented results of carbapenem (meropenem and/or imipenem) susceptibility testing for Acinetobacter species each year between 2017 and 2021. Data were restricted in this way to reduce bias from annual variations in the number, hospital affiliations, and reporting type of laboratories, since not all nations could distinguish between clinical laboratories with no documented cases and laboratories that did not report. Data from the UK (United Kingdom) were not included since the nation had withdrawn from the EU in 2020. Additionally, data from France were not included since only a few clinical laboratories were identifiable continuously after the reorganization of national surveillance in 2020. The percentages of Acinetobacter resistance varied considerably between the EU/EEA nations. Therefore, the nations were grouped based on the average percentages of their national yearly carbapenem resistance reported during the 2018 to 2019 period. Poisson regression modeling was used to assess the differences in BSI counts and carbapenem resistance percentages between 2020 to 2021 and 2018 to 2019. Results Group 1 nations (n=13) had less than 10% resistance to carbapenem and included the Netherlands, Belgium, Austria, Estonia, Denmark, Germany, Iceland, Finland, Luxembourg, Ireland, Norway, Sweden, and Malta. Group 2 nations (n=3) had resistance to carbapenem between 10% and 50% and included Slovenia, Czechia, and Portugal.
Group 3 nations (n=12) had equal to or greater than 50% resistance to carbapenem and included Croatia, Bulgaria, Greece, Cyprus, Italy, Hungary, Lithuania, Latvia, Romania, Poland, Spain, and Slovakia. Further, the units were grouped based on ward type as 'ICU' (including pediatric and adult ICU units), 'not ICU' (including ward types other than ICUs), and 'unknown' (data on the type of ward was unavailable). In total, Acinetobacter species BSIs documented between 2020 and 2021 rose by 57% compared to the period between 2018 and 2019. The increase was largely due to BSIs caused by carbapenem-resistant Acinetobacter species, with case counts rising by 114% and the percentage of resistance to carbapenem rising from 48% in 2018 to 2019 to 66% in 2020 to 2021. The increase in BSIs caused by carbapenem-resistant Acinetobacter species was greater among ICU-admitted individuals (144%) than non-ICU-admitted individuals (41%). The slight increase in BSIs caused by carbapenem-susceptible Acinetobacter species during 2020 and 2021, in comparison to the period between 2018 and 2019, was not statistically significant. The increase (116%; n=5,472) in Acinetobacter species BSI cases between 2020 and 2021 was most prominent among Group 3 nations, compared to 2,529 cases documented in the 2018 to 2019 period. Among Group 2 nations, a similar increase (109%) was observed. However, with fewer reported cases per nation, Group 1 nations documented only 52 cases between 2020 and 2021, with no significant difference from the case counts documented between 2018 and 2019 (n=54). Conclusions Overall, the study findings showed an enormous increase in BSIs caused by carbapenem-resistant Acinetobacter species among EU/EEA nations during the initial two years of the COVID-19 pandemic, a challenging period for health authorities across the globe. The findings showed that controlling the further spread of Acinetobacter was most challenging for Group 3 nations, where carbapenem-resistant Acinetobacter species were already prevalent in the pre-pandemic period. The patterns of Acinetobacter species BSI observed among EU/EEA nations have raised global concerns, since carbapenem resistance causes a considerable disease burden among vulnerable and hospitalized individuals. Therefore, continued surveillance efforts are required to monitor changes in carbapenem resistance and Acinetobacter BSI development.
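The article names Poisson regression without giving the model specification. As a minimal illustration of how a rate ratio between the two periods falls out of such a model, here is a sketch using the aggregate Group 3 totals quoted above; the actual study models laboratory-level data, which is not reproduced here:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Aggregate Group 3 case counts quoted in the article (illustrative use only).
df = pd.DataFrame({
    "period": ["2018-2019", "2020-2021"],
    "cases":  [2529, 5472],
})

# Poisson GLM of counts on period; with aggregate totals this reduces to
# the simple ratio of the two counts.
model = smf.glm("cases ~ period", data=df,
                family=sm.families.Poisson()).fit()

# exp(coefficient) is the rate ratio between the two periods.
rate_ratio = np.exp(model.params["period[T.2020-2021]"])
print(f"rate ratio: {rate_ratio:.2f}")  # ~2.16, i.e. the reported 116% increase
```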
https://sepoy.net/enormous-rise-in-acinetobacter-bloodstream-infection-cases-in-initial-two-years-of-covid-19/
How do the Bermuda High, El Niño, and water temperature affect hurricane development in the Atlantic and landfalls in the United States? Bermuda High The Bermuda High is a very large area of high atmospheric pressure that sets itself up and becomes firmly entrenched over the subtropical Atlantic Ocean. Generally speaking, it is a semi-permanent area of subtropical high pressure that migrates between about 30 and 40 degrees North latitude depending on the season. It is located further south and east during the winter and early spring, closer to the Azores (which is why it is then called the Azores High), and moves more to the north and west from late spring through the summer and fall months. The Bermuda High is important because it affects where hurricanes go: their paths and their intensity. Because it is present throughout the hurricane season, its position helps steer Atlantic storms, influencing whether they curve out to sea or are pushed toward the United States. Water Temperature Water temperature plays a key role in how hurricanes are formed; they form when three key conditions are met. First, the ocean waters must be warm enough at the surface to put enough heat and moisture into the overlying atmosphere to provide the potential fuel for the thermodynamic engine that a hurricane becomes. Second, atmospheric moisture from sea water evaporation must combine with that heat and energy to form the powerful engine needed to propel a hurricane. Lastly, a wind pattern must be near the ocean surface to spiral air inwards, allowing bands of thunderstorms to form, which lets the air warm further and rise higher into the atmosphere. If the winds at these higher levels are relatively light, this structure can remain intact and grow stronger: the beginning of a hurricane. El Niño El Niño is a band of anomalously warm ocean water temperatures that periodically develops off the western coast of South America and can cause climate changes across the Pacific Ocean. It is a temporary change in the climate of the Pacific Ocean, in the region around the equator. You can see its effects in both the ocean and atmosphere, generally in the Northern Hemisphere winter. Typically, the ocean surface warms up by a few degrees Celsius. At the same time, the place where hefty thunderstorms occur on the equator moves eastward. Although those might seem like small differences, they can have big effects on the world's climate. El Niño can also affect hurricane frequency in the Atlantic Ocean: the likelihood of hurricanes decreases during El Niño years because of the increased wind shear in the environment.
https://www.smore.com/7wm2
How Hurricanes are Formed Hurricanes develop from belts of low pressure called easterly waves. These regions of low pressure occur in ocean winds called trade winds. On certain occasions, the easterly waves form into tropical depressions, which are characterized by a group of thunderstorms with cyclonic winds of up to thirty-one miles per hour. The next stage in development is a tropical storm, with winds of up to seventy-three miles per hour. Any wind speed higher than that and it is a hurricane. The fuel that powers hurricanes is derived from latent heat released by the condensing of water vapor. Thunderstorms can produce up to ten inches of rain per day, and thus produce an incredible amount of energy, up to 24 x 10¹¹ kilowatt-hours per day on average. This is roughly the equivalent of how much power most industrialized nations (such as the United States) use in one year. Winds swirl around the eye, the calm center of the hurricane. The eye has a diameter of about twenty miles across and has very few winds or clouds. Surrounding the eye are storm clouds called wall clouds. It is within these clouds that the heaviest rains and strongest winds occur. These wind speeds are kept up by the differences in horizontal pressure between the eye and the outer regions of the storm. Initially, when a hurricane forms, its forward movement is very slow (fifteen miles per hour), but as it gets farther away from the equator, its speed increases up to sixty miles per hour in middle latitudes. But in addition to gaining speed as it moves away from the equator, it also begins to die. Eventually it loses its source of power as it passes over land and gets ripped apart by friction. Hurricanes usually only last between five and ten days. How Do Hurricanes Form? [Image: Hurricane Fran, made from GOES satellite data.] Hurricanes are the most awesome, violent storms on Earth.
People call these storms by other names, such as typhoons or cyclones, depending on where they occur. The scientific term for all these storms is tropical cyclone. Only tropical cyclones that form over the Atlantic Ocean or eastern Pacific Ocean are called "hurricanes." Whatever they are called, tropical cyclones all form the same way.

Tropical cyclones are like giant engines that use warm, moist air as fuel. That is why they form only over warm ocean waters near the equator. The warm, moist air over the ocean rises upward from near the surface. Because this air moves up and away from the surface, there is less air left near the surface. Another way to say the same thing is that the warm air rises, causing an area of lower air pressure below.

[Image: a cumulonimbus cloud. A tropical cyclone has so many of these that they form huge, circular bands.]

Air from surrounding areas with higher air pressure pushes in to the low-pressure area. Then that "new" air becomes warm and moist and rises, too. As the warm air continues to rise, the surrounding air swirls in to take its place. As the warmed, moist air rises and cools off, the water in the air forms clouds. The whole system of clouds and wind spins and grows, fed by the ocean's heat and water evaporating from the surface. Storms that form north of the equator spin counterclockwise; storms south of the equator spin clockwise. This difference is because of Earth's rotation on its axis.

As the storm system rotates faster and faster, an eye forms in the center. It is very calm and clear in the eye, with very low air pressure. Higher-pressure air from above flows down into the eye.

[Image: a cutaway view of a tropical cyclone. Small red arrows show warm, moist air rising from the ocean's surface and forming clouds in bands around the eye; blue arrows show cool, dry air sinking in the eye and between the bands of clouds; large red arrows show the rotation of the rising bands of clouds.]

When the winds in the rotating storm reach 39 mph, the storm is called a "tropical storm." And when the wind speeds reach 74 mph, the storm is officially a "tropical cyclone," or hurricane. Tropical cyclones usually weaken when they hit land, because they are no longer being "fed" by the energy from the warm ocean waters. However, they often move far inland, dumping many inches of rain and causing lots of wind damage before they die out completely.

Tropical cyclone categories:

| Category | Wind Speed (mph) | Damage at Landfall | Storm Surge (feet) |
| 1 | 74-95 | Minimal | 4-5 |
| 2 | 96-110 | Moderate | 6-8 |
| 3 | 111-130 | Extensive | 9-12 |
| 4 | 131-155 | Extreme | 13-18 |
| 5 | Over 155 | Catastrophic | 19+ |

Here is a movie of Hurricane Katrina, which struck the coast of Louisiana and Alabama on August 29, 2005, as a Category 3. This movie was made from images taken by the GOES weather satellite. In the movie you can see the storm starting to form in the Atlantic on August 24 and becoming more and more organized as it moves over the warm waters of the Gulf of Mexico.

How Are Hurricanes Formed?
[Image: produced by Hasler, Pierce, Palaniappan & Manyin of NASA's Goddard Laboratory for Atmospheres, from NOAA data.]
Hurricanes begin as tropical storms over the warm moist waters of the Atlantic and Pacific Oceans near the equator. (Near the Philippines and the China Sea, hurricanes are called typhoons.) As the moisture evaporates it rises until enormous amounts of heated moist air are twisted high in the atmosphere. The winds begin to circle counterclockwise north of the equator or clockwise south of the equator. The relatively peaceful center of the hurricane is called the eye. Around this center, winds move at speeds between 74 and 200 miles per hour. As long as the hurricane remains over waters of 79°F or warmer, it continues to pull moisture from the surface and grow in size and force. When a hurricane crosses land or cooler waters, it loses its source of power, and its winds gradually slow until they are no longer of hurricane force (less than 74 miles per hour). Hurricanes over the Atlantic often begin near Africa, drift west on the trade winds, and veer north as they meet the prevailing winds coming eastward across North America. Hurricanes over the eastern Pacific begin in the warm waters off the Central American and Mexican coasts. Eastern and Central Pacific storms are called "hurricanes"; storms to the west of the International Date Line are called "typhoons." Because of the destructive force of hurricanes during late summer and early autumn, scientists constantly monitor them with satellites and sometimes even fly airplane surveillance to keep track of tropical storms that might develop into hurricanes.

How are Hurricanes Created?
The birth of a hurricane requires at least three conditions. First, the ocean waters must be warm enough at the surface to put enough heat and moisture into the overlying atmosphere to provide the potential fuel for the thermodynamic engine that a hurricane becomes. Second, atmospheric moisture from sea water evaporation must combine with that heat and energy to form the powerful engine needed to propel a hurricane. Third, a wind pattern must be near the ocean surface to spiral air inward. Bands of thunderstorms form, allowing the air to warm further and rise higher into the atmosphere. If the winds at these higher levels are relatively light, this structure can remain intact and grow stronger: the beginnings of a hurricane!
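The three-condition recipe above can be made concrete with a short sketch. The following Python snippet is our own illustration (the function name and boolean interface are invented for this example; the 79°F warm-water threshold is the figure quoted earlier in this compilation):

```python
def hurricane_formation_possible(sst_f, moisture_supply, surface_inflow, light_winds_aloft):
    """Illustrative checklist for the three conditions described above.

    sst_f            : sea-surface temperature in degrees Fahrenheit (the fuel)
    moisture_supply  : evaporation feeding heat and energy into the storm engine
    surface_inflow   : a near-surface wind pattern spiraling air inward
    light_winds_aloft: light upper-level winds, so the structure stays intact
    """
    warm_enough = sst_f >= 79.0  # warm-water threshold quoted in the text
    return warm_enough and moisture_supply and surface_inflow and light_winds_aloft

# Warm water, moist inflow, and weak winds aloft: formation is possible.
print(hurricane_formation_possible(82.0, True, True, True))  # True
```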
Often, the feature that triggers the development of a hurricane is some pre-existing weather disturbance in the tropical circulation. For example, some of the largest and most destructive hurricanes originate from weather disturbances that form as squall lines over western Africa and subsequently move westward off the coast and over warm water, where they gradually intensify into hurricanes. Hurricane winds in the Northern Hemisphere circulate in a counterclockwise motion around the hurricane's center or "eye," while hurricane winds in the Southern Hemisphere circulate clockwise. The eye of a hurricane is relatively calm. It is generally 20 to 30 miles wide (the hurricane itself may extend outward 400 miles). The most violent activity takes place in the area immediately around the eye, called the "eyewall." At the top of the eyewall (up to 50,000 feet), most of the air is propelled outward, increasing the air's upward motion. Some of the air, however, moves inward and sinks into the eye, creating a cloud-free area.

Tropical Rainfall Measuring Mission
Hurricanes are huge heat engines, converting the warmth of the tropical oceans and atmosphere into wind and waves. The heat dissipates as the system moves toward the poles, sometimes causing a great deal of hardship for people living along vulnerable coastlines. NASA scientists are using the TRMM satellite to understand which parts of a hurricane produce rainfall and why. In addition, TRMM may answer the question of how much latent heat, or "fuel," hurricanes release into the atmosphere and whether they affect global weather patterns. Most importantly for people endangered by hurricanes, TRMM will add to the knowledge needed to improve computer-based weather modeling. With this data, meteorologists may be able to predict the path and intensity of these storms more precisely.

Hurricane Movement
How do we know which way a hurricane will go? Forecasters track hurricane movements and predict where the storms will travel, as well as when and where they will reach land. While each storm makes its own path, the movement of every hurricane is affected by a combination of the factors described below. Hurricanes are steered by global winds. These winds, called trade winds, blow from east to west in the tropics. They carry hurricanes and other tropical storms from east to west. In the Atlantic, storms are carried by the trade winds from the coast of Africa, where they typically form, westward to the Caribbean and North American coasts. When the trade winds are strong, it is easier to predict where the storm will travel; when they are weak, it is more difficult. After a hurricane crosses an ocean and reaches a continent, the trade winds weaken. This means that the Coriolis effect has more of an impact on where the storm goes. In the Northern Hemisphere the Coriolis effect can cause a tropical storm to curve northward. When a storm starts to move northward, it leaves the trade winds and moves into the westerlies, the west-to-east global winds found at mid-latitudes.
Because the westerlies move in the opposite direction from the trade winds, the hurricane can reverse direction and move east as it travels north. High-pressure systems can also affect the path of storms. In the Atlantic Ocean, the Bermuda High affects the path of hurricanes: when the storms are carried west by the trade winds, they are pushed north around the edge of the high-pressure area. Although these factors add up to a typical hurricane path that travels west and then bends poleward, other factors also affect a hurricane's path, and complex hurricane tracks are common too.

Movement and Occurrence of Hurricanes
Hurricanes and typhoons usually move westward at about 10 mph (16 kph) during their early stages and then curve poleward as they approach the western boundaries of the oceans at 20° to 30° latitude, although more complex tracks are common. In the Northern Hemisphere, incipient hurricanes usually form over the tropical Atlantic Ocean and mature as they drift westward; hurricanes also form off the west coast of Mexico and move northeastward from that area. Between June and November, an average of six tropical storms per year mature into hurricanes along the east coast of North America, often over the Caribbean Sea or the Gulf of Mexico. Two of these storms will typically become major hurricanes (categories 3 to 5 on the Saffir-Simpson scale). One to three hurricanes typically approach the U.S. coast annually, some changing their direction from west to northeast as they develop; as many as six hurricanes have struck the United States in one year. Hurricanes and typhoons of the North Pacific usually develop between May and December; typhoons and tropical cyclones of the Southern Hemisphere favor the period from December through April; Bay of Bengal and Arabian Sea tropical cyclones occur either between April and June or September and December, the times of the onset and retreat of the monsoon winds.
Read more: hurricane: Movement and Occurrence of Hurricanes — Infoplease.com, http://www.infoplease.com/ce6/weather/A0858708.html

hurricane
A hurricane is a tropical cyclone in which winds attain speeds greater than 74 mi (119 km) per hour; wind speeds reach over 190 mi (289 km) per hour in some hurricanes. The term is often restricted to those storms occurring over the North Atlantic Ocean; the identical phenomenon occurring over the western Pacific Ocean is called a typhoon, and around Australia and over the Indian Ocean, a tropical cyclone. Hurricanes have a life span of 1 to 30 days.
They weaken and are transformed into extratropical cyclones after prolonged contact with the colder ocean waters of the middle latitudes, and they rapidly decay after moving over land areas.
Read more: hurricane — Infoplease.com, http://www.infoplease.com/ce6/weather/A0824612.html

Formation of Hurricanes
A cyclone that eventually reaches hurricane intensity first passes through two intermediate stages known as tropical depression and tropical storm. Hurricanes start over the oceans as a collection of storms in the tropics. The deepening low-pressure center takes in moist air and thermal energy from the ocean surface, convection lifts the air, and high pressure higher in the atmosphere pushes it outward. Rotation of the wind currents tends to spin the clouds into a tight curl; as the winds reach gale force, the depression becomes a tropical storm. The mature hurricane is nearly circularly symmetrical, and its influence often extends over an area 500 mi (805 km) in diameter. As a result of the extremely low central pressure (often around 28.35 in./960 millibars, but sometimes considerably lower, with a record 25.69 in./870 millibars registered in a 1979 NW Pacific typhoon), surface air spirals inward cyclonically (counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere), converging on a circle of about 20 mi (30 km) diameter that surrounds the hurricane's "eye." The circumference of this circle defines the so-called eye wall, where the inward-spiraling, moisture-laden air is forced aloft, causing condensation and the concomitant release of latent heat; after reaching altitudes of tens of thousands of feet above the surface, this air is finally expelled toward the storm's periphery, eventually creating the spiral bands of clouds easily identifiable in satellite photographs. The upward velocity of the air and subsequent condensation make the eye wall the region of heaviest precipitation and highest clouds. Because the outward increase in pressure is greatest there, the eye wall is also the region of maximum wind speed. By contrast, the hurricane eye is almost calm, experiences little or no precipitation, and is often exposed to a clear sky. Temperatures in the eye are 10°F to 15°F (5°C–8°C) warmer than those of the surrounding air as a result of sinking currents at the hurricane's core.
Read more: hurricane: Formation of Hurricanes — Infoplease.com, http://www.infoplease.com/ce6/weather/A0858707.html

Damage Caused by Hurricanes
High winds are a primary cause of hurricane-inflicted loss of life and property damage. Another cause is the flooding resulting from the coastal storm surge of the ocean and the torrential rains, both of which accompany the storm. The Saffir-Simpson scale is the standard scale for rating the severity of a hurricane as measured by the damage it causes. It classifies hurricanes on a hierarchy from category 1 (minimal), through category 2 (moderate), category 3 (extensive), and category 4 (extreme), to category 5 (catastrophic); a classifier over these wind-speed ranges is sketched below. A supertyphoon is equivalent to a category 4 or 5 hurricane.
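Because the Saffir-Simpson categories are defined by wind-speed ranges (see the category table earlier in this compilation), classifying a storm is a simple lookup. The sketch below is our own illustration using those ranges; the function name is invented:

```python
def saffir_simpson_category(wind_mph):
    """Category from sustained wind speed, per the table given earlier.

    Returns 0 for storms below hurricane strength (under 74 mph).
    """
    if wind_mph < 74:
        return 0  # tropical depression or tropical storm, not yet a hurricane
    for category, upper_mph in ((1, 95), (2, 110), (3, 130), (4, 155)):
        if wind_mph <= upper_mph:
            return category
    return 5  # over 155 mph: catastrophic

# Katrina struck the Gulf coast with roughly 125 mph winds: category 3.
print(saffir_simpson_category(125))  # 3
```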
Only three category-5 storms have hit the United States since record-keeping began: the 1935 Labor Day hurricane, which devastated the Florida Keys, killing 600; Hurricane Camille in 1969, which ravaged the Mississippi coast, killing 256; and Andrew in 1992, which leveled much of Homestead, Fla. Hurricane Katrina in 2005 was a category-5 storm at peak intensity over the central Caribbean, Mitch in 1998 was a category-5 storm at its peak over the western Caribbean, and Gilbert in 1988 was a category-5 storm at its peak. Gilbert was the strongest Atlantic tropical cyclone on record until Wilma in 2005, which reached its peak as a category-5 storm over the western Caribbean. The 1970 Bay of Bengal tropical cyclone killed some 300,000 persons, mainly by drowning, and devastated Chittagong (now in Bangladesh); some 130,000 died when a cyclone struck Myanmar along the Andaman Sea in 2008. The deadliest U.S. hurricane was the 1900 Galveston storm, which killed 8,000–12,000 people and destroyed the city. Hurricane Katrina (2005), one of the worst natural disasters in U.S. history, was economically the most destructive U.S. storm, devastating the southwestern Mississippi and southeastern Louisiana coasts, flooding New Orleans, killing some 1,200 people, and leaving hundreds of thousands homeless. Hugo (1989) in South Carolina, Opal (1995) and Charley, Ivan, and two others (2004) in Florida, and Rita (2005) in Louisiana and Texas also caused billions of dollars worth of damage. Weak hurricanes can still cause major flooding and damage, even when downgraded to a tropical storm, as did Hurricane Agnes (1972). To decrease such damage, several unsuccessful programs have studied ways to "defuse" hurricanes in their developing stages; more recent hurricane damage-mitigation steps have included better warning systems involving real-time satellite imagery. A hurricane watch is issued when there is a threat of hurricane conditions within 24–36 hours. A hurricane warning is issued when hurricane conditions (winds greater than 74 mph/119 kph or dangerously high water and rough seas) are expected in 24 hours or less.
Read more: hurricane: Damage Caused by Hurricanes — Infoplease.com, http://www.infoplease.com/ce6/weather/A0858709.html
https://educheer.com/dissertations/hurricanes-a-compilation/
---
author:
- 'A. Mérand'
- 'P. Kervella'
- 'J. Breitfelder'
- 'A. Gallenne'
- 'V. Coudé du Foresto'
- 'T. A. ten Brummelaar'
- 'H. A. McAlister'
- 'S. Ridgway'
- 'L. Sturmann'
- 'J. Sturmann'
- 'N. H. Turner'
bibliography:
- 'biblio.bib'
date: 'Received —; accepted —'
subtitle: 'Application to the prototypes $\delta$ Cep and $\eta$ Aql'
title: 'Cepheid distances from the SpectroPhoto-Interferometry of Pulsating Stars (SPIPS)'
---

The parallax of pulsation, and its implementations such as the Baade-Wesselink method and the infrared surface brightness technique, is an elegant method to determine distances of pulsating stars in a quasi-geometrical way. However, these classical implementations in general only use a subset of the available observational data.

A more physical approach to the implementation of the parallax of pulsation has been suggested in order to treat all available data. We present a global and model-based parallax-of-pulsation method that enables including any type of observational data in a consistent model fit: the SpectroPhoto-Interferometric modeling of Pulsating Stars (SPIPS).

We implemented a simple model consisting of a pulsating sphere with a varying effective temperature, combined with grids of atmospheric models, to globally fit radial velocities, spectroscopic data, and interferometric angular diameters. We also parametrized (and adjusted) the reddening and the contribution of the circumstellar envelope in the near-infrared photometric and interferometric measurements.

We show the successful application of the method to two stars: $\delta$ Cep and $\eta$ Aql. The agreement of all data fitted by a single model confirms the validity of the method. Derived parameters are compatible with published values, but with a higher level of confidence.

The SPIPS algorithm combines all the available observables (radial velocimetry, interferometry, and photometry) to estimate the physical parameters of the star (ratio distance/$p$-factor, T$_\mathrm{eff}$, presence of infrared excess, color excess, etc.). The statistical precision is improved (compared to other methods) thanks to the large number of data taken into account; the accuracy is improved by using consistent physical modeling; and the reliability of the derived parameters is strengthened thanks to the redundancy in the data.

Introduction
============

Cepheids are the backbone of the extragalactic distance ladder because their pulsation periods, which are easily measured observationally, correlate directly with their luminosities through Leavitt's law (the period-luminosity relation). Thanks to their very high intrinsic brightness, they are visible in distant galaxies, as demonstrated by observations of extragalactic Cepheids. They overlap with secondary, far-reaching distance indicators, such as type Ia supernovae (SN Ia) or the Tully-Fisher relation, whose scales are anchored to Cepheid luminosities. Direct distance estimation of nearby Cepheids plays a crucial role in the calibration of Leavitt's law and, as a consequence, of the extragalactic distance ladder used to observationally estimate the Hubble constant $H_0$. This importance has recently been reaffirmed: to the question *"Are there compelling scientific reasons to obtain more precise and more accurate measurements of $H_0$ than currently available?"*, the authors answered *"A measurement of the local value of $H_0$ to one percent precision (i.e.
systematic errors) would provide key new insights into fundamental physics questions and lead to potentially revolutionary discoveries."* These authors also recognized the role of the Cepheids and the problem of controlling the systematics in their distance determinations. An elegant and powerful method of directly measuring distances to Cepheids is the parallax of pulsation, also known as the Baade-Wesselink (BW) method, although the same method had been suggested eight years earlier by an author who was never credited for it. In the BW technique, the variation of the angular diameter $\theta$ is compared to the variation of the linear radius (from the integration of the pulsation velocity $V_\mathrm{puls}$). The distance $d$ of the Cepheid is then obtained as the ratio between the linear and angular amplitudes, $$\theta(t)-\theta(0) \propto \frac{1}{d} \int_{0}^{t}V_\mathrm{puls}(\tau)d\tau \label{eq:pop} .$$ The BW method uses in practice a combination of two quantities: (1) disk-integrated radial velocities, estimated from the changing Doppler shift of photospheric absorption lines, and (2) angular diameters, either derived from multicolor photometric measurements and surface brightness relations, or from interferometric measurements. One common property of these quantities is that they are derived from observations using models or some physical assumptions, therefore breaking the geometric nature of the parallax of pulsation. The BW method has demonstrated its capability to reach the one-percent statistical precision regime, and its true current limitation lies in the systematic uncertainties, which are probably between five and ten percent. Two problems directly contribute to these systematics: the projection factor $p$ and the presence of circumstellar envelopes (CSEs). The projection factor is a multiplicative correction factor applied to the radial velocity derived from a spectroscopic absorption-line Doppler shift. This factor is used to unbias the spectroscopic measurement and estimate the true pulsation velocity. To first order, the radial velocity can be seen as the projection of the pulsation velocity, integrated over the surface of the star. Since the pulsation of Cepheids is radial, the limb of the star does not have a Doppler shift, whereas the point at the center of the apparent stellar disk has a maximum projected velocity toward the observer. Assuming a pulsation velocity of 1 km/s, the measured disk-integrated radial velocity would be $1/p=1/1.5=0.67$ km/s for a uniformly bright sphere. $p$ is lower than 1.5 for a limb-darkened star and higher than 1.5 for a limb-brightened star. The p-factor is important because it biases the derived distance linearly: $d/p$ is the unbiased measurement in the parallax of pulsation equation (Eq.\[eq:pop\]). For a long time, the adopted values of $p$ were based on the linear period-$p$-factor relation $p=1.39 - 0.03 \log P$. This gives a value of $p \approx 1.36$ for a typical ten-day-period Cepheid, which was the most commonly used value in the literature. But with the first direct determination of the p-factor, $1.27\pm0.06$ for the star $\delta$ Cep, there has been a renewed interest in estimating the value of $p$. This work was based on the availability of a geometrical distance measurement, using the Fine Guidance Sensor (FGS) of the Hubble Space Telescope (HST). Since then, a dozen Cepheids have had their parallax measured directly in the same fashion. This allows us to estimate more values of $p$, and even calibrate it as a function of the pulsation period, using the infrared surface brightness (IRSB) version of the parallax-of-pulsation method.
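To make Eq.\[eq:pop\] concrete, the following minimal numerical sketch (our own illustration, not the SPIPS code) integrates a toy pulsation velocity curve and recovers the distance as the least-squares scale factor between the linear and angular radius variations; all numbers, including the sinusoidal velocity curve and the p-factor of 1.3, are synthetic placeholders:

```python
import numpy as np

P = 5.3662906 * 86400.0                  # pulsation period in seconds
t = np.linspace(0.0, P, 2000)            # one pulsation cycle
v_puls = 1.3 * 15e3 * np.sin(2 * np.pi * t / P)  # toy pulsation velocity (m/s)

# Right-hand side of Eq. (pop): linear radius variation from the velocity
delta_R = np.cumsum(v_puls) * (t[1] - t[0])      # meters (crude integration)

PC = 3.0857e16                           # one parsec in meters
d_true = 274.0 * PC                      # assumed distance
theta = 7.0e-9 + 2.0 * delta_R / d_true  # toy angular diameter (radians)

# Distance estimate: least-squares scale between 2*delta_R and theta - theta(0)
dtheta = theta - theta[0]
d_fit = np.sum((2.0 * delta_R) ** 2) / np.sum(2.0 * delta_R * dtheta)
print(d_fit / PC)                        # ~274, recovering the input distance
```

In a real application the sign conventions, the systemic velocity, and the phase sampling of the velocity curve all require care; the sketch only shows the geometry of the estimator.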
Stars are limb-darkened in the spectral continuum, and more darkened at shorter wavelengths. However, it should be noted that stellar surfaces are slightly limb-brightened inside absorption lines. This leads to an apparent paradox: one would expect the $p$-factor to be 1.5 or higher, even though direct measurements instead lead to values of around 1.3. To avoid the need of calibrating the projection factor, another approach is to include its contribution in the pulsation model. Recent work attempted to directly extract the pulsation velocity by using a simple geometric model of an absorption line deformed by the pulsation: the resulting $p$-factors found for radial velocities published using different measurement techniques vary from 1.30 to 1.38 for a given star, leading to a systematic error of 6% on the parallax-of-pulsation distances. Again, this spread is for a given star and results from the various data-reduction techniques (e.g., bisector, cross-correlation) used to extract the radial velocity from spectra. Another potential source of bias is the presence of circumstellar envelopes, which have been discovered and studied in the infrared by several authors. In the context of the parallax of pulsation, these envelopes affect the infrared apparent brightness of the star from the K band ($2\,\mu$m) longward. They also bias the angular diameters measured by infrared long-baseline interferometry. The geometry of the CSEs seems to be almost universal and to vary only in intensity. Even in the *Gaia* era, when a few hundred Galactic Cepheids will have their distance measured accurately, the parallax of pulsation will still be an invaluable tool for distance investigation. One might think, for instance, of studying the Large Magellanic Cloud Cepheids using this technique. In addition, it should be noted that the parallax of pulsation will remain an important tool for studying the physics of Cepheids: with *Gaia* providing the distances, BW studies of Galactic Cepheids will probe the physics on which the method relies.

Integrated method
=================

Motivations
-----------

This work is the natural evolution of an earlier method to estimate the angular diameter from photometry. A generalization of the idea was later proposed to provide a better physical basis for the parallax of pulsation and to call for taking into account all possible observables. That approach uses a universal surface brightness relation to compute magnitudes, based on the following formula (for example, for band B): $$B = B_0 - C_B\times\log{T_\mathrm{eff}} - 5\log{\theta} + A_B\times E(B-V) \label{eq:phot} ,$$ where $\theta$ is the Rosseland angular diameter, $T_\mathrm{eff}$ the effective temperature, $E(B-V)$ the color excess, $B_0$ and $C_B$ a set of parameters describing the surface brightness relation, and $A_B$ the bandpass-dependent reddening coefficient. This method has the disadvantage of requiring a calibration of $B_0$ and $C_B$ and, more importantly, assumes a particular dependency of the surface brightness (here, a linear relation in the logarithm of the effective temperature). These relations were recently calibrated by analyzing thousands of measurements for dozens of Cepheids.
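Eq.\[eq:phot\] itself is straightforward to evaluate. The sketch below (ours) shows the surface-brightness form of the magnitude model, with placeholder coefficients rather than a real calibration:

```python
import numpy as np

def synthetic_mag_sb(theta_mas, teff, e_bv, b0, c_b, a_b):
    """Magnitude from a linear surface-brightness relation (Eq. phot).

    theta_mas : Rosseland angular diameter in milliarcseconds
    teff      : effective temperature in K
    e_bv      : color excess E(B-V)
    b0, c_b   : surface-brightness calibration parameters (placeholders)
    a_b       : bandpass-dependent reddening coefficient
    """
    return b0 - c_b * np.log10(teff) - 5.0 * np.log10(theta_mas) + a_b * e_bv

# Toy numbers only; real values come from a dedicated calibration.
print(synthetic_mag_sb(theta_mas=1.45, teff=5900.0, e_bv=0.03,
                       b0=18.0, c_b=3.5, a_b=4.0))
```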
We propose to use a different method that is unique thanks to a combination of two things:

- We propose a "fit all at once" method (for a given star), which takes into account all the observables and fits all the parameters. This has the advantage of offering the best statistical accuracy and confidence in the result. Usually, BW methods are implemented in steps: first a radial velocity function is fitted analytically, then it is integrated, and finally it is compared to the angular diameter measurements to derive the distance. Unless the uncertainties from the earlier steps (e.g., the uncertainty of the radial velocity Fourier fit) are propagated properly, for example using a bootstrapping method, this leads to an underestimation of the uncertainty of the final distance.

- We try, as much as possible, to physically model the observables. For example, we propose synthesizing photometry based on atmospheric models and using calibrated bandpass filters, instead of using analytical surface brightness relations linear in color (such as V-K), which are known not to be observationally linear.

This approach also offers the potential of investigating, for example, why, in the case of $\delta$ Cep, published interferometric angular diameters and the angular diameters derived with the IRSB method seem to systematically disagree by about 4%. A global method should be able to provide an answer to this contradiction. Another advantage of such a method is to relax the constraint of uniform phase coverage to a certain extent, as previously recognized. It is remarkable that global methods using physics-based models are quite widespread in the field of determining fundamental parameters of eclipsing binaries; implementations such as PHOEBE[^1] or ROCHE use the same philosophy as we mentioned above. As a first step toward implementing such a method for Cepheids (this work), we developed a global approach for deriving fundamental parameters of the eclipsing binary $\delta$ Vel, which we successfully checked against the ROCHE model of the same system.

Description of the model
------------------------

We assumed that Cepheids are radially pulsating spheres, with perfect cycle-to-cycle repetition of their physical properties. The pulsation velocity and the effective temperature as a function of phase are described by periodic functions of the pulsation phase $\phi$, interpolated using splines or Fourier series. Periodic spline functions often offer a better description of the pulsation of Cepheids than do Fourier series, since Cepheids often exhibit pulsation velocity variations that are very different from a simple sinusoidal wave, which requires many Fourier harmonics to describe the pulsation profile properly. Additionally, Fourier series fits are very sensitive to poor phase coverage and tend to introduce non-physical oscillations. This means that Fourier decomposition requires a very uniform and dense phase coverage, which is not always available. However, Fourier series offer good numerical stability, which is not always the case for a spline with free-floating nodes. In practice, we implemented both methods to allow for more flexibility (a minimal Fourier fit is sketched below). By default, Fourier series are used because they allow quicker computation and reliable numerical convergence. We then switched to splines, and kept this option, if the goodness of fit was improved.
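As an illustration of the Fourier option referred to above (a minimal sketch, not the SPIPS implementation), a truncated Fourier series can be fitted to any phased, periodic quantity by linear least squares:

```python
import numpy as np

def fit_fourier(phase, values, n_harmonics=4):
    """Least-squares fit of a truncated Fourier series to a periodic profile.

    Returns coefficients c of the model
    c[0] + sum_k c[2k-1]*cos(2*pi*k*phi) + c[2k]*sin(2*pi*k*phi).
    """
    cols = [np.ones_like(phase)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2.0 * np.pi * k * phase))
        cols.append(np.sin(2.0 * np.pi * k * phase))
    design = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(design, values, rcond=None)
    return coeffs

# Toy data: a non-sinusoidal periodic signal sampled at random phases
rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 1.0, 60)
v = 10.0 * np.sin(2.0 * np.pi * phi) + 3.0 * np.sin(4.0 * np.pi * phi + 0.5)
print(fit_fourier(phi, v)[:3])
```

With sparse or uneven phase coverage, the design matrix becomes ill-conditioned for large `n_harmonics`, which is the numerical counterpart of the non-physical oscillations mentioned above.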
Another important assumption was that Cepheid photospheres can be approximated by hydrostatic models in terms of energy distribution and center-to-limb darkening. We used a recently recommended set of astrophysical constants.

#### Atmospheric models:

To compute synthetic photometry, we used ATLAS9 atmospheric models[^2], with solar metallicity and a standard turbulent velocity of 2 km/s. The effect of metallicity on the magnitudes is very weak. We used a grid of models spaced by 250 K in effective temperature and by 0.5 in logg. In practice, for each photometric bandpass, we reduced the models to a grid of magnitudes computed for an angular diameter of 1 mas. We then modeled the photometry by using the formula (here in B band) $$B = B_\mathrm{\theta=1mas}(T_\mathrm{eff}, \mathrm{logg}) - 5\log{\theta} + A_B\times E(B-V).$$ This equation is similar to Eq. \[eq:phot\], except that the linear surface brightness relation is replaced by a grid of interpolated values $B_\mathrm{\theta=1mas}$, which is a function of the model parameters $T_\mathrm{eff}$ and $\mathrm{logg}$. $T_\mathrm{eff}(\phi)$ is fitted to the data (using either splines or Fourier series). On the other hand, $\mathrm{logg}$ is deduced from the parameters of the model: the mass of the star is assumed using a period-radius-mass relation, and the linear radius is known internally in the model. The sensitivity of $M_\mathrm{\theta=1mas}$ to the gravity is, in any case, very low: this means that the choice of mass for the model is quite unimportant. Atmospheric models are poorly suited for reproducing synthetic photometry bluer than the B band, hence we limited our modeling to the range from 0.4 $\mu$m (B band) to about 2.5 $\mu$m (K band): the data presented here used the Johnson system in the visible (B and V bands), as well as the Walraven system (B and V bands) and the CTIO system in the near-infrared (J, H, and K bands).

#### Photometric bandpasses and zero-points:

The photometric magnitudes were computed for each model of the grid, using bandpasses and zero-points from the Spanish Virtual Observatory (SVO) database[^3] and, for the Walraven system, the Asiago Database on Photometric Systems[^4]. Note that in the case of Walraven, we multiplied all the magnitudes by -2.5, since this unusual system expresses magnitude as the logarithm of the flux without the conventional -2.5 multiplicative factor. This allows for a uniform numerical treatment of all the photometric measurements. For the zero points, we chose the filters in the SVO database with recently revised calibrations (see Table \[Tab:filters\]).
  ----------------- ------------------------ ----------------------------------- ----------------------------- ------------------------ -----
  filter            $\lambda_\mathrm{eff}$   zero point                          SVO FilterID                  Note                     ref
                    (nm)                     (W.m$^{-2}$.$\mu\mathrm{m}^{-1}$)
  B$_\mathrm{T}$    422.0                    $6.588\times10^{-08}$               TYCHO/TYCHO.B\_MvB            revised by MvB 2014      (1)
  B$_\mathrm{W}$    432.5                    $1.230\times10^{-10}$               —                             -2.5 Walraven filter B   (2)
  B                 436.5                    $6.291\times10^{-08}$               GCPD/Johnson.B                revised by MvB 2014      (1)
  B$_\mathrm{ST}$   466.7                    $5.778\times10^{-08}$               GCPD/Stromgren.b              revised by MvB 2014      (1)
  HP                517.1                    $3.816\times10^{-08}$               Hipparcos/Hipparcos.Hp\_MvB   revised by MvB 2014      (1)
  V$_\mathrm{T}$    525.8                    $3.946\times10^{-08}$               TYCHO/TYCHO.V\_MvB            revised by MvB 2014      (1)
  V$_\mathrm{W}$    546.7                    $6.730\times10^{-11}$               —                             -2.5 Walraven filter V   (2)
  V                 545.2                    $3.601\times10^{-08}$               GCPD/Johnson.V                revised by MvB 2014      (1)
  Y$_\mathrm{ST}$   546.5                    $3.625\times10^{-08}$               GCPD/Stromgren.y              revised by MvB 2014      (1)
  R                 643.7                    $2.143\times10^{-08}$               GCPD/Cousins.R                revised by MvB 2014      (1)
  J                 1240.0                   $3.052\times10^{-09}$               CTIO/ANDICAM.J                                         (1)
  H                 1615.3                   $1.200\times10^{-09}$               CTIO/ANDICAM.H                                         (1)
  K                 2129.9                   $4.479\times10^{-10}$               CTIO/ANDICAM.K                                         (1)
  ----------------- ------------------------ ----------------------------------- ----------------------------- ------------------------ -----

  : Photometric bandpasses and zero points used in this work.[]{data-label="Tab:filters"}

#### Reddening:

We parametrized the interstellar reddening using the B-V color excess, E(B-V), and a standard reddening law taken for Rv=3.1. Because the correction depends on the spectrum of the observed object, we computed all our reddening corrections using a template spectrum for the actual effective temperature at the phase at which the photometric observations were made. Reddening values for $T_\mathrm{eff}$=4500 K, 5500 K, and 6500 K are listed in Table \[Tab:reddening\] for the various photometric systems we used. This is significantly different from traditional BW implementations, in which reddening correction factors $R_\lambda$ are usually computed for Vega, a star much hotter than the Cepheids. For example, values of $R_V$ (i.e., for the V band) between 3.10 and 3.30 have been quoted in the literature, with 3.23 a commonly adopted value. As seen in our Table \[Tab:reddening\], our value for V$_\mathrm{GCPD}$ (Johnson) ranges from 3.00 to 3.05 between T$_\mathrm{eff}$=4500 K and T$_\mathrm{eff}$=6500 K (it would be 3.1 for T$_\mathrm{eff}$=10,000 K). We note that the effect of our choice of computation of the reddening is most notable for blue filters and makes the least difference for the near-infrared K band. Our choice of Rv=3.1 is mostly based on consensus and does not play an important role in the result: as far as we are concerned, the degeneracy is one-to-one between the reddening law Rv and the color excess E(B-V). In other words, changing the fixed value of Rv changes the fitted value of E(B-V) while maintaining the other parameters of the fit at their fitted values.
  ----------------- ------------------------------------ ------------------------------------
  filter            $M_\mathrm{\theta=1mas}$, logg=1.5   A$_\lambda$
                    T$_\mathrm{eff}$=4500, 5500, 6500K   T$_\mathrm{eff}$=4500, 5500, 6500K
  B$_\mathrm{T}$    7.734, 5.799, 4.428                  4.086, 4.146, 4.179
  B$_\mathrm{W}$    0.759, -1.132, -2.460                4.071, 4.101, 4.117
  B                 7.372, 5.625, 4.363                  3.869, 3.954, 4.012
  B$_\mathrm{ST}$   6.890, 5.321, 4.219                  3.800, 3.803, 3.805
  HP                6.276, 5.018, 4.114                  2.836, 2.990, 3.130
  V$_\mathrm{T}$    6.261, 4.950, 4.034                  3.127, 3.173, 3.207
  V$_\mathrm{W}$    -0.696, -1.967, -2.852               3.041, 3.058, 3.071
  V                 6.126, 4.864, 3.986                  2.996, 3.027, 3.050
  Y$_\mathrm{ST}$   6.118, 4.855, 3.974                  3.048, 3.053, 3.056
  R                 5.515, 4.467, 3.768                  2.346, 2.371, 2.393
  J                 4.159, 3.602, 3.244                  0.802, 0.804, 0.805
  H                 3.605, 3.280, 3.082                  0.525, 0.527, 0.528
  K                 3.472, 3.217, 3.043                  0.354, 0.354, 0.354
  ----------------- ------------------------------------ ------------------------------------

  : Subsets of magnitudes for $\theta=1$ mas and of the reddening law (for Rv=3.1), for three values of $T_\mathrm{eff}$ and logg=1.5.[]{data-label="Tab:reddening"}

#### Center-to-limb darkening:

The effect of the center-to-limb darkening (CLD) needs to be taken into account to properly interpret interferometric angular diameters. Interferometers do not measure diameters directly: they measure visibilities, which need to be modeled in order to estimate an angular diameter. This is easiest to do using a uniform disk (UD) model; however, the derived diameter is not the true stellar diameter. Many authors have published tables of UD/LD diameter corrections, but we found that none are satisfactory, for the simple reason that the UD/LD correction depends on the spatial frequency at which the observations were made, because of the slight difference between UD and LD visibility profiles. For this reason we computed our own $\theta_\mathrm{UD}/\theta_\mathrm{Ross.}$ corrections. The truly interesting radius in our case is the bolometric radius, which almost matches the Rosseland value (where the average optical depth is 1). The Rosseland radius is the one that enters the identity $L_\mathrm{bol}\propto R_\mathrm{Ross.}^2T_\mathrm{eff}^4$. In the context of this work, we used a grid of photospheric models tabulated in effective temperature: this is why the apparent Rosseland diameter ($\theta_\mathrm{Ross}$) is the one that allows us to compute accurate synthetic photometry. We did not use ATLAS models for our own CLD correction because these models are plane-parallel and cannot produce accurate CLD profiles. Instead, we used grids of SATLAS models in the Cepheid range, whose CLD profiles are available in the Vizier database (via FTP[^5]). We extracted the radial intensity profile I(r), which was converted to a visibility profile using a Hankel transform, for various spatial frequencies (expressed as $x = \pi B \theta/\lambda$, where B is the baseline in meters, $\theta$ the angular diameter in radians, and $\lambda$ the wavelength in meters). For each spatial frequency, we scaled the spatial frequency of a uniform disk visibility profile to match the synthetic profile: the scaling factor is the ratio $\theta_\mathrm{UD}/\theta_\mathrm{Ross}$. An example is shown in Fig. \[fig:Ross\]. We note that spherical models, tabulated as I($\mu$) (where $\mu=\sqrt{1-r^2}$), do not have their limb at $r=1$, in contrast to plane-parallel models.
This is because for spherical models, $r=1$ is the outer boundary of the model (defined by an optical depth threshold in the case of SATLAS) and does not correspond to the Rosseland radius. We used a separate tabulation of $R_\mathrm{Rosseland}/R_\mathrm{outer}$ extracted from the grid of SATLAS models (H. Neilson, private communication). The mathematical justification of the equivalence between scaling *r* in the intensity profile and scaling the visibility curve to estimate the unbiased Rosseland angular diameter is a fundamental property of the Fourier transform: $$V[I(a\times r), B\theta_{LD}] = V[I(r), B\theta_{LD}/a] = V[I(r), B\theta_{Ross.}],$$ where $B$ is the baseline and $a=1/r_\mathrm{Ross.}$. We note that our results notably depart from previously published corrections for two reasons: 1) we took the radius of the star to be the Rosseland radius, not the outer layer of the SATLAS model (sometimes defined as $\theta_\mathrm{LD}$), and 2) our $\theta_\mathrm{UD}/\theta_\mathrm{Ross}$ is a function of angular diameter and baseline. Overall, we found our values of $\theta_\mathrm{UD}/\theta_\mathrm{Ross}$ to be higher than previously published ones. A limitation of our approach is that we used hydrostatic atmospheric models to compute our UD/Rosseland correction. This is not the state of the art, since updated models taking non-hydrostatic effects into account have been used elsewhere; those studies found that the UD/Rosseland correction is, on average, comparable with the hydrostatic values and that the variation of the correction due to the pulsation is very small: about 0.3% in the near-infrared and up to 1.5% in the visible. This translates more or less into a bias of the same size in $d/p$. Since we mostly used near-infrared optical interferometric data, the bias from our choice of using hydrostatic models is, to the best of our knowledge, only about 0.3% at most. Moreover, there are no published grids of hydrodynamic models.

![Example of deriving the interferometric correction factor $\theta_\mathrm{UD}/\theta_\mathrm{Ross.}$ for the SATLAS model with T$_\mathrm{eff}$=6000K, logg=1.5, and M=10M$_\odot$. **Left:** radial intensity profile, close to the limb ($\pm1\%$), for various bands; **upper right:** corresponding visibility functions as a function of the dimensionless spatial frequency $x = \pi B \theta/\lambda$; **lower right:** corresponding factors $\theta_\mathrm{UD}/\theta_\mathrm{Ross.}$ for each band as a function of $x$. []{data-label="fig:Ross"}](Ross2.pdf){width="48.00000%"}

#### Circumstellar envelopes:

The CSEs have two observational effects. The first one is on the near-infrared photometric measurements, which are potentially biased for wavelengths in the K band ($2.2\,\mu$m) and redder. The second effect is on the interferometric angular diameters: the fringe visibility as a function of the baseline length departs from the classical function of a limb-darkened star. In the case of a CSE, the bias on the measurements depends on the baselines and the angular diameter. The approach we adopted was to use a grid of models based on a previously reported parametrization, allowing the tabulation of the angular diameter bias as a function of infrared excess. Biases ($\theta_{observed}$ / $\theta_{real}$) for different strengths of CSEs are shown in Fig. \[Fig:bias\]. We also allowed for an excess in the H band, since these two bands are relatively close in wavelength and it is hard to imagine that the CSEs produce a K-band excess and no H-band excess. If no H excess is given as an input parameter, we chose to consider an H-band excess half as large as the K-band excess.
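Both the UD/Rosseland correction above and the CSE bias described next rely on the same numerical step: synthesize the visibility curve of a model intensity profile and find the uniform-disk diameter that best mimics it at a given spatial frequency. The following sketch is our own illustration; a toy linear limb-darkening profile stands in for the SATLAS grids:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import brentq
from scipy.special import j0, j1

def visibility(intensity, r, x):
    """Visibility of a circularly symmetric profile I(r) via a Hankel transform."""
    return trapezoid(intensity * j0(x * r) * r, r) / trapezoid(intensity * r, r)

def ud_over_model(intensity, r, x):
    """Scale factor theta_UD/theta_model matching a UD visibility at frequency x."""
    v_model = visibility(intensity, r, x)
    # Uniform disk: V(x) = 2*J1(x)/x; solve 2*J1(a*x)/(a*x) = v_model for a
    return brentq(lambda a: 2.0 * j1(a * x) / (a * x) - v_model, 0.5, 1.5)

# Toy limb-darkened profile I(r) = 1 - u*(1 - sqrt(1 - r^2)), with u = 0.3
r = np.linspace(0.0, 1.0, 2000)
I = 1.0 - 0.3 * (1.0 - np.sqrt(1.0 - r**2))
print(ud_over_model(I, r, x=2.0))   # slightly below 1: the UD looks smaller
```

The dependence of the result on `x` is exactly why a single tabulated UD/LD correction is insufficient, as argued above.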
The numerical process used to tabulate the CSE bias is very similar to the one we described for the limb-darkening correction: we synthesized the visibilities of a limb-darkened disk surrounded by the CSE, with the relevant observational parameters, and we fitted a uniform disk model to estimate the bias. This is numerically costly, but it is the only accurate way to estimate the bias.

![K-band interferometric angular diameter bias (observed / real) due to the CSE as a function of the dimensionless spatial frequency.[]{data-label="Fig:bias"}](biasCSE_angdiam.pdf){width="48.00000%"}

Fitting strategy
----------------

We used a standard $\chi^2$ minimization, $$\chi^2 \propto \sum_i \frac{(O_i - M_i)^2}{e_i^2} ,$$ where $O_i$ is the i-th observation, $e_i$ its associated error, and $M_i$ the prediction from the model. The strategy to compute the overall $\chi^2$, for all observations, necessitates some care. A normal $\chi^2$ would weight each measurement by its error bar; however, when we mix various observables, those that are present in large numbers are favored compared to scarce ones. A more general approach is to compute the final $\chi^2$ as the average of the $\chi^2$ values computed for each group of observables: $$\chi^2 \propto \sum_j \frac{1}{\mathrm{sizeof(G_j)}} \sum_{i\in G_j} \frac{(O_i - M_i)^2}{e_i^2} .$$ This ensures that each group $G_j$ of observables contributes equally to the final likelihood estimation: for example, there are usually many more photometric observations than radial velocities or interferometric diameters, and using the total $\chi^2$ would have given more weight to the most numerous data. We used a Levenberg-Marquardt (LM) least-squares fit based on the `SciPy`[^6] function `scipy.optimize.leastsq`. Contrary to some published approaches, we did not fit the zero points of the photometric systems, so we do not suffer from the associated degeneracy. After we found the best fit, we estimated the uncertainties in the derived parameters by using the covariance matrix around the best-fit solution. Another aspect of the fitting process is the phasing of the data. It is known that Cepheids are not perfectly stable pulsators. For example, the slow (compared to the pulsation time) evolution of the star's interior leads to a first-order period change. The amount of linear change is an indicator of the evolutionary stage of the Cepheid and can be computed theoretically. We therefore allowed the period to change linearly in our model.

Prototypical stars
==================

Note that the observational data and best-fit models are available in electronic form, as FITS tables.

$\delta$ Cep
------------

$\delta$ Cep is the prototypical Cepheid and has been observed extensively, in particular by optical interferometers. We took the photometry from several published sources and added photometric observations from Tycho and Hipparcos. We used published cross-correlation radial velocities and previously published angular diameters. In addition, to properly interpolate the photospheric models, we adopted a metallicity of \[Fe/H\]=0.06 from the literature. We note that the metallicity has a very weak effect on surface brightness values and is undetectable with our data set. For the $\chi^2$ averaging, we used four groups of observables: radial velocities (91 measurements), angular diameters (67 measurements), photometric magnitudes (483 measurements), and colors (421 measurements); a sketch of this group weighting is given below. Error bars for each of these groups were multiplied by $\sim$0.59, $\sim$0.50, $\sim$1.26, and $\sim$1.35, respectively.
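A minimal sketch of this group-averaged $\chi^2$ (our own illustration, with toy arrays standing in for real datasets):

```python
import numpy as np

def grouped_chi2(groups):
    """Average of per-group mean chi^2 terms, so that each observable type
    (radial velocities, diameters, magnitudes, colors) carries equal weight."""
    total = 0.0
    for obs, model, err in groups:
        total += np.mean(((obs - model) / err) ** 2)
    return total / len(groups)

# Two toy groups of very different sizes: 3 velocities vs. 100 magnitudes
rv = (np.array([1.0, 2.0, 3.0]), np.array([1.1, 1.9, 3.2]),
      np.array([0.2, 0.2, 0.2]))
phot = (np.zeros(100), np.full(100, 0.05), np.full(100, 0.05))
print(grouped_chi2([rv, phot]))  # both groups contribute equally
```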
We show the fit in Fig. \[Fig:delCep\], and the most important parameters are listed in Table \[Tab:delCepFit\].

![image](deltaCep_spline5.pdf){width="\textwidth"}

  parameter                                best fit
  ---------------------------------------- ------------------------------------------------------
  $\theta_\mathrm{0}^\mathrm{(a)}$ (mas)   $1.420\pm0.009$
  $E(B-V)$                                 $0.032\pm0.005_\mathrm{stat.}\pm0.015_\mathrm{sys.}$
  K excess (mag)                           $0.025\pm0.002$
  H excess (mag)                           $0.018\pm0.004$
  p-factor                                 $1.29\pm0.02$
  $\mathrm{MJD}_0^\mathrm{(b)}$            $48304.7362421$
  period (days)                            $5.3662906\pm0.0000061$
  period change (s/yr)                     $-0.069\pm 0.033$
  metallicity \[Fe/H\]                     $0.06$
  distance (pc)                            $274$ \[fixed\]
  $\chi^2_r$                               1.7
  adopted mass (M$_\odot$)                 4.8
  average radius (R$_\odot$)               43.0

  : Parameters of the $\delta$ Cep fit. The quantities with uncertainties are adjusted in the model; the other ones are fixed. We note that the uncertainties are purely statistical and do not take into account systematics, such as the uncertainty on the distance (274$\pm$11 pc).[]{data-label="Tab:delCepFit"}

It is interesting to compare the result we obtain here with that of our previous study, which did not include photometry. The value of the p-factor is very similar: using only previously reported radial velocities and angular diameters, we found p=1.27$\pm$0.01. That uncertainty was smaller because we took into account correlations in the interferometric error bars (using a published formalism), which we have not yet implemented in our current SPIPS fitting algorithm. The actual p-factor uncertainty should, however, take into account the distance uncertainty (0.050), which is much larger than the statistical uncertainty (0.020). The CSE is noticeable in the interferometric data as a bias affecting the angular diameters measured at the shortest baselines. Our earlier study did not fit the excess, but rather compared the fit using a simple star model to a fit using the CSE model we had fitted on another Cepheid (Polaris), for which we had extended baseline coverage; at the time, we used a 1.5% excess (0.016 mag). In the case of SPIPS, we have photometric data that anchor the model and allow the CSE contribution to be used as a free parameter. Thanks to this, we confirmed the infrared excess and estimated it to be 0.025$\pm$0.002 mag in the K band. We also let the H excess vary freely to fit the photometry and found it to be 0.018$\pm$0.004; the latter value is based solely on the photometric measurements. The good agreement with all the observables is remarkable and increases our confidence in the method. In particular, our SPIPS modeling is able to combine all data and does not show the apparent discrepancies between optical interferometry and IRSB noted elsewhere. Admittedly, we added the complexity of having an infrared excess, which probably explains the discrepancy (earlier analyses did not take it into account). One could argue that the K-band magnitudes show the poorest agreement in our fit (Fig. \[Fig:delCep\], panel 'h'). We also performed a fit using only photometric measurements (omitting our interferometric measurements) and found the p-factor to be $1.29\pm0.06$, which, apart from the poorer statistical uncertainty, agrees perfectly well with our fit using optical interferometry. The K excess was also left free in the photometric fit, and its value was found to be $0.010\pm0.004$ magnitude.
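Since the model allows a linear period change, phasing the data uses the first-order expansion of the cycle count $N(t)=\int dt/P(t)$. The sketch below is our own illustration (the function name and unit handling are ours); it folds an epoch with the fitted $\delta$ Cep ephemeris from the table above:

```python
import numpy as np

def pulsation_phase(mjd, mjd0, period_days, pdot_s_per_yr=0.0):
    """Phase with a linearly changing period.

    With P(t) = P0 + Pdot*(t - t0), the cycle count is, to first order,
    N(t) = (t - t0)/P0 - Pdot*(t - t0)^2 / (2*P0^2).
    """
    pdot = pdot_s_per_yr / 86400.0 / 365.25  # convert s/yr to day/day
    dt = np.asarray(mjd) - mjd0
    cycles = dt / period_days - pdot * dt**2 / (2.0 * period_days**2)
    return cycles % 1.0

# delta Cep values from the table above: P = 5.3662906 d, Pdot = -0.069 s/yr
print(pulsation_phase(57000.0, 48304.7362421, 5.3662906, -0.069))
```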
Additionally, the period change ($-0.07\pm0.03$ s/yr) agrees well with a recent estimate, even though that estimate has much greater accuracy ($-0.1006\pm0.0002$ s/yr).

$\eta$ Aql
----------

$\eta$ Aql is another important prototypical Cepheid because of its proximity (and hence large apparent size), which makes it accessible to optical interferometry. We observed $\eta$ Aql in July 2006, using the FLUOR instrument at the CHARA Array. We used the same data reduction approach as in previous works, in particular for the $\delta$ Cep data used in the previous section. We took photometry from several published sources, including measurements in the Walraven system, and added photometric observations from Tycho and Hipparcos. Radial velocities were taken from published datasets. Finally, we also used additional angular diameter measurements: published H-band long-baseline measurements and short-baseline K-band measurements. We adopted a metallicity of \[Fe/H\]=0.05 from the literature.

![image](etaAql_Fourier5.pdf){width="\textwidth"}

  parameter                                best fit
  ---------------------------------------- ------------------------------------------------------
  $\theta_\mathrm{0}^\mathrm{(a)}$ (mas)   $1.694\pm0.002$
  $E(B-V)$                                 $0.161\pm0.005_\mathrm{stat.}\pm0.015_\mathrm{sys.}$
  K excess (mag)                           $0.018\pm0.002$
  H excess (mag)                           $0.016\pm0.003$
  p-factor                                 1.30 \[fixed\]
  distance (pc)                            $296\pm5$
  $\mathrm{MJD}_0^\mathrm{(b)}$            $48069.3905$
  period (days)                            $7.176841\pm0.000012$
  period change (s/yr)                     $0.18\pm0.07$
  metallicity \[Fe/H\]                     $0.05$
  reduced $\chi^2$                         2.3
  adopted mass (M$_\odot$)                 6.3
  average radius (R$_\odot$)               57.6

  : Parameters of the $\eta$ Aql fit. The quantities with uncertainties are adjusted in the model; the other ones are fixed.[]{data-label="Tab:etaAqlFit"}

The results of the fit are presented in Fig. \[Fig:etaAql\] and Table \[Tab:etaAqlFit\]. As for $\delta$ Cep, we applied a correction factor to the error bars to weight the four following groups equally: radial velocities (57 measurements, 0.5 factor), angular diameters (70 measurements, 0.55 factor), photometric magnitudes (377 measurements, 1.3 factor), and photometric colors (432 measurements, 1.35 factor). We detect a slight H- and K-band infrared excess ($0.016\pm0.003$ and $0.018\pm0.002$, respectively). As for $\delta$ Cep, this detection is enabled by the combination of infrared photometry and infrared interferometric angular diameters. Regarding the accuracy of E(B-V), published values include 0.126, an older value of 0.143, 0.138 (metallicity corrected, computed by the software 'BELRED'), $0.130\pm0.009$, and 0.129. Our estimate, at $0.161\pm0.005$, lies on the redder side of these values. The statistical uncertainty we obtain, $\pm0.005$, is underestimated because we did not take into account the fact that all photometric measurements in the same band and from the same source share common error contributions, namely the zero point and the photometric calibrators. If we perform a jackknife resampling, removing one set of photometric measurements each time, the uncertainty on E(B-V) increases by a factor of 3, to $\pm0.015$. Regarding the distance, published estimates for $\eta$ Aql include $261\pm6\pm7$ pc for p=1.321 ($d/p=198\pm5\pm4$ pc), $255\pm5$ pc from the IRSB method for p=1.39 ($d/p=183\pm4$ pc), and, from a subset of the data we used, d=320$\pm$32 pc with p=1.43 (d/p=223$\pm$22 pc). Our method gives a distance of $296\pm5$ pc ($d/p=228\pm4$ pc), which is not consistent with some of these earlier estimates.
We note that our uncertainty is on the same order as that of , which is surprising since they used only radial velocities and two-band photometry. If we restrict ourselves to IRSB data (radial velocities and V, K photometry), our fit leads to $\pm15$ pc. Since we cannot fit E(B-V) in that case (because of the degeneracy with T$_\mathrm{eff}$), we estimated the sensitivity of the distance estimate to changes in E(B-V): decreasing E(B-V) by 0.05 leads to a distance smaller by 4 pc. In other words, restricting our data set to that of the IRSB method leads to similar distances. The reason why we find an uncertainty in the estimated distance three times larger than is the following: we suspect that since we fitted all parameters at once (radial velocity profile, T$_\mathrm{eff}$ profile, distance, etc.), our uncertainties are more realistic. If we keep our $\eta$ Aql model, use only the IRSB dataset, and assume that we know everything in the model except for the distance, adjusting only this parameter, the uncertainty decreases to $\pm5$ pc, which is the claim of . In other words, our analysis of $\eta$ Aql is a perfect example of why fitting all parameters at the same time provides more realistic uncertainties.

Conclusions
===========

Our model makes many simplistic assumptions about Cepheids, most of which are known to be incorrect at some level. However, in the context of the parallax-of-pulsation distance estimation, our approach is more complete than most (if not all) implementations that are variations of the Baade-Wesselink method (BWM): 1) we include all possible observables, including redundant ones, and 2) we model the observations based on a physical model (as opposed to ad hoc parameters, such as surface brightness relations). Our implementation reduces to the traditional BWM if one restricts the input data set accordingly. Using our modeling, we address some shortcomings of the BWM:

- We adopted an approach of modeling the observables rather than using ill-defined corrective factors. For example, we used modeled interferometric visibility profiles to compute the interferometric bias $\theta_\mathrm{UD}/\theta_\mathrm{Ross.}$, whereas it is traditionally derived from fits of analytical functions to the brightness profile. We still make use of the projection factor, but we are working on spectral synthesis modeling that will allow us to use a consistent pulsation velocity estimation.

- We used atmospheric models (ATLAS9 in our case) to compute synthetic photometry. This works very well, as proven by the agreement with interferometric angular diameters for our two prototypical stars. We note that the resulting surface brightness relation cannot be approximated by a linear function of the effective temperature (or color), as is done in a traditional implementation of the BWM. Because the BWM lacks redundancy in the dataset it uses, this shortcoming cannot be detected and propagates as a color bias on the distances.

- Circumstellar envelopes (CSE) are consistently taken into account in the near-infrared photometry and optical interferometric diameters.

- Reddening is fitted from the data in a self-consistent way. Conversely, the BWM uses an E(B-V) that was determined for a certain reddening law and often applies it using another reddening law. Our method does not suffer from this bias.

- Our approach permits very good phasing of data, even when taken at different epochs.
Not only does it improve the accuracy of the distance determination (because poorly phased data often have underestimated amplitudes), it also allows us to study the period change of Cepheids.

- Fitting all parameters at once yields realistic estimates of the statistical uncertainties, as opposed to a method that fits consecutive sets of parameters. For example, if, in an implementation of the BWM, the analytical radial velocity function is fitted first, then the analytical variations of the angular diameters, followed by the distance alone as the ratio between the two, the uncertainty of the distance would not account for the other uncertainties and would likely be underestimated, by a factor as large as 3.

All this should come as a warning to studies using only two bands: their distance (or p-factor) determinations probably carry systematic errors that are hard to estimate without using a method like the one we have presented. Even then, their statistical uncertainties might very well be underestimated by a large factor.

We applied the method to $\delta$ Cep and $\eta$ Aql. For $\delta$ Cep, we confirm our formerly published value for the p-factor, 1.28$\pm$0.06, accounting for the uncertainty of the adopted distance of $274\pm11$ pc. For $\eta$ Aql, we estimated its biased distance to be $d/p=228\pm4$ pc, leading to $d=296\pm5$ pc assuming p=1.30. In both cases, our models reproduced all the available data (about a thousand observations in each case) in a self-consistent way. In the near future, we will continue our work by systematically studying Cepheids for which large datasets are available.

We would like to thank the referee, Hilding Neilson, for his work, which led to a much improved manuscript, as well as for providing additional insights into the use of the SATLAS models described in the present work. This research has made use of the Spanish Virtual Observatory, supported by the Spanish MEC through grant AyA2008-02156. This research has made use of the VizieR catalog access tool and the SIMBAD database, operated at CDS, Strasbourg, France. A.G. acknowledges support from FONDECYT grant 3130361. P.K. and J.B. acknowledge financial support from the “Programme National de Physique Stellaire” (PNPS) of CNRS/INSU, France, and the ECOS/Conicyt grant C13U01. The CHARA Array is funded by the National Science Foundation through NSF grants AST-0908253 and AST-1211129, and by Georgia State University through the College of Arts and Sciences. STR acknowledges support by NASA through grant number HST-GO-12610.001-A from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.

[^1]: <http://phoebe-project.org/>

[^2]: <http://wwwuser.oats.inaf.it/castelli/grids.html>

[^3]: <http://svo2.cab.inta-csic.es/theory/fps3/> and <http://www.ivoa.net/documents/Notes/SVOFPS/>

[^4]: <http://ulisse.pd.astro.it/Astro/ADPS/Paper/index.html>

[^5]: <ftp://cdsarc.u-strasbg.fr/pub/cats/J/A%2BA/554/A98/spheric/>

[^6]: <http://scipy.org/>
IABM’s current Vice Chair, Graham Pitman, is stepping aside from this key governance role in August, having served the Association with great distinction for the last nine years. IABM is therefore seeking to appoint a new Vice Chair. Reporting to the IABM Members’ Board, this non-executive role is key to the governance and supervision of the Association: ensuring transparency and integrity in everything it does, providing experienced counsel, and inspiring best practice and excellence in its processes and activities. Established in 1976, IABM has grown into a truly international representative body over recent years, with 600+ member companies worldwide benefiting from its knowledge, support and leadership. IABM operates on a day-to-day basis under the auspices of a full-time team which receives direction from its elected Members’ Board. The Vice Chair works for IABM two days per calendar month. As well as ensuring good governance and transparency, the role also involves ensuring effective business and financial processes, remuneration committee duties, staff welfare, and chairing the IABM Supervisory Group. IABM is seeking applications from candidates with substantive board-level experience either in the Broadcast and Media industry or in organisations of similar complexity and type to IABM – for example a charity, health trust, trade body or similar not-for-profit where there is a high level of accountability to external stakeholders. This will have included responsibility for supervision, governance, audit and compliance.
https://content-technology.com/media-business/iabm-invites-applicants-for-vice-chair-position/
We developed this concept model to help test the arrangement of the different elements of the programme and to demonstrate the spatial interest resulting from the composition. A shortlisted design proposal for a new residential teaching building at Camphill School in Aberdeen, an independent charity offering education, care and therapy services for children and young people with additional support needs. Our design approach was shaped by both the landscape of Camphill and the unique teaching culture which has been developed there over the past 60 years. Our proposals seek to embed themselves into the landscaped surroundings of the school campus, creating harmonious and supportive living environments for the residents. The driver for our scheme was the desire to create a little village in the woods, breaking down the scale of the building into more individual and personal elements, each with its own character but internally linked to form one integrated facility. Semi-enclosed walled garden spaces surround the building, tempering the relationship between the open landscapes of the campus and the snug internal spaces of the school. Small huts containing potting sheds and studio spaces sit independently within these garden spaces, offering places of calm retreat away from the main building. The final location for the building within the site had not been determined; with this in mind, we developed a proposal which allowed for adaptation of the model to suit its immediate context. This would allow for the creation of a nuanced, carefully considered, sustainable and cost-effective home for residents and staff alike.
http://jmarchitects.net/projects/camphill-school/
Off-day scheduling with hierarchical worker categories. A workforce includes workers of m types. The worker categories are ordered, with type-1 workers the most highly qualified, type-2 the next, and so on. If the need arises, a type-k worker is able to substitute for a worker of any type j greater than k (k = 1, ..., m - 1). For 7-day-a-week operation, daily requirements are for at least Dk workers of type k or better, of which at least dk must be precisely type k. Formulas are given to find the smallest number and most economical mix of workers, assuming that each worker must have 2 off-days per week and a given fraction of weekends off. Algorithms are presented which generate a feasible schedule, providing work stretches between 2 and 5 days, and consecutive weekdays off when on duty for 2 weekends in a row, without additional staff.
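To give the flavor of such formulas, here is a sketch of the classic lower bounds for the single-category case (these are the standard textbook bounds, not the paper's hierarchical, per-type formulas): each worker supplies 5 worker-days per week, someone must cover the peak day, and weekend demand must be covered by the fraction of workers not off that weekend.

```python
import math

def min_workforce(demands, weekend_off_frac):
    """Classic lower bounds on workforce size for a 7-day operation in
    which each worker has 2 off-days per week and is off at least a
    given fraction of weekends. `demands` lists the 7 daily staffing
    requirements, Monday..Sunday. Single-category version only; the
    paper extends this to hierarchical worker types."""
    supply_bound = math.ceil(sum(demands) / 5)   # 5 on-days per worker
    peak_bound = max(demands)                    # cover the busiest day
    weekend_peak = max(demands[5], demands[6])
    weekend_bound = math.ceil(weekend_peak / (1 - weekend_off_frac))
    return max(supply_bound, peak_bound, weekend_bound)

# 6 workers needed Mon-Fri, 3 on weekends, half of all weekends off:
print(min_workforce([6, 6, 6, 6, 6, 3, 3], weekend_off_frac=0.5))  # -> 8
```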
Updated on 2022-12-15: KEV update

CISA has updated its KEV database with six new vulnerabilities that are currently being actively exploited. These include recently disclosed zero-days in Citrix, Fortinet, Windows, and iOS, as well as two vulnerabilities patched earlier this year in Veeam backup solutions.

Overview: Flaws in Veeam, Microsoft, Citrix, Fortinet, and Apple Added to KEV Catalog

The US Cybersecurity and Infrastructure Security Agency (CISA) has added six flaws to its Known Exploited Vulnerabilities (KEV) Catalog. The vulnerabilities are a pair of remote code execution vulnerabilities in Veeam Backup & Replication; an authentication bypass vulnerability in Citrix Application Delivery Controller (ADC) and Gateway; a security feature bypass vulnerability in Microsoft Defender SmartScreen; a heap-based buffer overflow vulnerability in Fortinet FortiOS; and a type confusion vulnerability in iOS. The first five issues have remediation deadlines of January 3, 2023; the iOS issue has a remediation deadline of January 4.

Note - For those in the federal space, you now have targets for rolling out the updates we’ve been talking about. And yes, those dates are challenging with the holidays. The attackers are counting on us being distracted or not present so they can more easily exploit targets during this time of year, so we need to plan accordingly. Fingers crossed you can get things rolled out in the next week, including any tune-up to your monitoring and alerting systems so you can give your staff time off.
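For teams tracking these deadlines programmatically: CISA publishes the KEV catalog as a JSON feed. A minimal Python sketch, assuming the feed URL and field names below still match the published schema (verify against CISA's documentation before relying on them):

```python
import json
import urllib.request

# Published JSON feed of the KEV catalog (assumed URL; check CISA docs)
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def kev_due_by(deadline):
    """List KEV entries whose remediation due date falls on or before
    `deadline` (ISO date string, e.g. '2023-01-04')."""
    with urllib.request.urlopen(KEV_URL) as resp:
        catalog = json.load(resp)
    return sorted((v["dueDate"], v["cveID"], v["vendorProject"])
                  for v in catalog["vulnerabilities"]
                  if v["dueDate"] <= deadline)

for due, cve, vendor in kev_due_by("2023-01-04"):
    print(f"{due}  {cve:<18} {vendor}")
```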
https://pupuweb.com/flaws-veeam-microsoft-citrix-fortinet-apple-added-kev-catalog/
Double Chocolate Zucchini Bread

INGREDIENTS:
- 1 cup Solid Coconut Oil or Room Temperature Butter
- 1 cup Granulated Sugar
- 1 cup Brown Sugar
- 4 whole Eggs
- 2 teaspoons Pure Vanilla Extract
- 2 teaspoons Baking Powder
- 1 teaspoon Baking Soda
- 1 1/2 teaspoons Sea Salt
- 3 1/2 cups Whole Wheat Pastry Flour or All-Purpose
- 3 cups Zucchini, Grated
- 2/3 cup Dark Chocolate Cocoa Powder or Natural Cocoa Powder
- 1/4 cup Hot Coffee, Strongly Brewed
- 1 1/2 cups Dark Chocolate or Semi-Sweet Chocolate Chips
- 1 cup Walnuts, Chopped (optional)

INSTRUCTIONS
- Grease (2) 9"x5"x3" loaf pans and set aside.
- Cream together coconut oil or butter and both sugars until light and fluffy.
- Add eggs one at a time, beating well after each addition.
- Add vanilla, baking powder, baking soda, and sea salt. Stir until well combined.
- Stir in flour, zucchini, cocoa powder, coffee, 1 cup chocolate chips and nuts until well combined. Scrape sides and bottom of bowl as needed.
- Divide batter evenly between the greased loaf pans, pushing the batter into the corners and leaving the center slightly hollowed. Sprinkle each loaf with 1/4 cup chocolate chips. Allow loaves to rest for 20 minutes while the oven preheats to 350 degrees.
- Bake loaves for 60-70 minutes, until a toothpick inserted into the center of each loaf comes out clean.
- Allow loaves to cool completely before slicing.
https://www.serenabakessimplyfromscratch.com/2015/08/double-chocolate-zucchini-bread.html
Veda means “knowledge,” but specifically refers to the eternal wisdom of the four collections of hymns, sacrificial rituals, and other sacred texts that are called the Vedas. [Image: Vayu, the wind, shown riding his vahana, the antelope.] Along with the four collections (the Samhitas, the Brahmanas, the Aranyakas, and the Upanishads), two more bodies of literature, the Sutras and the Vedangas, are sometimes included in the Vedas. The Vedas are said not to be composed by human hand (apaurusheya). Tradition teaches that they have always existed, and myth accounts for the preservation and reappearance of the Vedas after each dissolution of the universe. The seven seers (sapta-rishis) in most accounts, the Manu of the age (who might also be a maharishi), or an avatara (incarnation) like the boar (Varaha) incarnation of Vishnu makes sure that the inerrant Vedas come into an age. But the myths show an anxiety about how they are preserved as an evil time sets in, or when demons have stolen the Vedas and an avatara is required to restore them. Even so, many believe that the Vedas contain all knowledge (veda) past and future. All science has been discovered in some previous age, and there is evidence of the gods or demons using airplanes to cross the sky, as does Ravana when he captures Sita, and using missiles and nuclear weapons against each other, as does Indra when he launches his vajra (divine weapon, formerly a lightning bolt) or Rama his “arrows.”

The Aryans resisted all forms of writing for several millennia even after they had come in contact with it, depending instead on their educational system and memory techniques (pathas, vikritis). An oral scriptural tradition and its reverent transmission was believed to be more reliable than writing, because writing could be altered, while a collective memory could not. The Vedas were therefore known as sruti, “that which is heard.” There were several schools (sakhas) and subschools (caranas) that kept alive variations within the tradition.

Scholars have argued that internal examination of the Vedic corpus indicates a time when the Aryans knew of but had not yet migrated to what is now northeastern Pakistan and northwestern India. They called this region Saptasindhu (the seven rivers’ land). They shared an ancestry with their pre-Avestan cousins who later inhabited what is now Iran. The early part of the Vedic or Samhita period (c. 1500-900 b.c.e.) appears to have been a time when the warriors (kshatriyas) were highest in esteem and power and the priests (brahmins) were charismatic prayers and ritualists. The later Vedic period was a time of religious transformation, with several triads of gods rising to and then losing supremacy (the last triad of the period being Agni, Indra, and Surya), Varuna’s rapid decline, the loss of the entheogen (“god-experience”-inducing plant) soma from actual ritual use sometime before the end of the Brahmana and Aranyaka period (c. 900-c. 600 b.c.e.), and the rise of the godling (aditya) Vishnu and a non-Aryan god of the cremation ground, Siva, to supremacy over the Vedic pantheon. (There are major entries on each of the aspects of Vedic religion.)

Within the Samhitas (collections) is a fourth section known as the Atharvaveda that is quite unlike the rest in tone and content. The other three sections were often referred to as the three (trayi) Vedas, excluding or assigning lesser authority to the Atharvaveda. It was filled with magic, both positive and negative, ranging from mantras and potions for fertility to death prayers.
In fact, the notion of knowing brahman, in the sense of power, was what the Atharva priests claimed, as against the rituals and sacrifices of the Vedic brahmins. They knew the power (brahman) and the magic (maya) that could control both gods and sacrifice. Although their practice does not come unaltered into later periods, their influence on Hindu mythology and its practices to acquire magic powers (siddhis) has not been fully accounted for. Their magical practices were called the panca (five) kalpas (sacred precepts).

The period when the Brahmana and Aranyaka literature were created closed with a priesthood that was asserting its primacy over the other castes (varnas), confident that its “science” of sacrificial rituals and Vedic chanting (mantrayana) could control the universe and optimistic that it knew how to prepare Aryans for a life beyond the grave. The Upanishadic literature was not included as part of the Vedic corpus for centuries after its composition. Its outlook challenged the hereditary nature of the Brahmanical priesthood, its sacrificial religion, its knowledge of that One by which everything was known (Brahman), and its notion of the afterlife. Once this challenge was integrated into a new worldview, the fourfold Vedas would become authoritative, orthodox, and orthopraxic for what would eventually become Hinduism. But many would maintain that Vedic authority was a symbol and the Hindu mythology of the Puranas was the reality.
http://www.mahapurana.com/hindu-mythology/vedas-the-scripture/
This is the concluding part of a paper delivered at the JUSTICE/Sweet and Maxwell Human Rights conference on 20 October 2010. The first part was posted on 26 October 2010. The Strasbourg developments in relation to Article 8 and reputation have found their way into domestic law. Under the Human Rights Act 1998 (“HRA”) the court is required to have regard to Strasbourg jurisprudence and, as a public authority, must not act in a way that is incompatible with Convention rights. Although the HRA did not create a new cause of action to enforce Article 8 directly in English law (other than in claims against a public authority which has acted, or intends to act, in a way which is incompatible with it: see HRA ss7 & 8), it had been acknowledged that Article 8 imposes not only a negative, but also a positive, obligation on the state to respect the individual’s private and family life (See, for example, X and Y v Netherlands (1985) 8 EHRR 235). While Article 8 may include a positive obligation on a member state to adopt measures to secure respect for private life between individuals, the state has a wide margin of appreciation as to what is required particularly where there is a balance between competing interests or Convention rights (see, for example, Evans v UK (2008) 46 EHRR 34 at , ; and see ) As a result, Article 8’s influence had led to the development in domestic law of a new cause of action to protect privacy: “misuse of private information” (see generally Duncan & Neill on Defamation (3rd edn, 2009), chapter 25). In that context, it has been held that “the values enshrined in Articles 8 and 10 are now part of the cause of action for breach of confidence” (See Campbell v Mirror Group Newspapers Ltd 2 AC 457 at (Lord Nicholls) and that it is necessary to consider Strasbourg jurisprudence to establish the scope of that domestic cause of action, since those Articles are now “not merely of persuasive or parallel effect” but are “the very content of the domestic tort that the English court has to enforce” (McKennitt v Ash QB 73 at ). The House of Lords in the British Broadcasting Corporation case 1 AC 145 appeared to be in no doubt that Article 8 conferred a right to reputation that must be balanced, in an appropriate case, against the rights conferred by Article 10: see Lord Hope at and and Lord Brown at . The Supreme Court considered the point, dealing for the first time with Karako, in Guardian News & Media 2 WLR 325. Lord Rodger said: “Article 8 and reputation 37 On behalf of the press, Mr Robertson QC did not dispute that article 8 rights fall within the scope of “the rights of others” in article 10(2). But, under reference to the judgment of the European Court of Human Rights in Karakó v Hungary (Application No 39311/05) (unreported), given 28 April 2009, he submitted that article 8 does not confer a right to have your reputation protected from being affected by what other people say. So the only article in play in relation to M’s reputation was article 10. 38 In the Karakó case the applicant was a politician. During an election campaign an opponent had said in a flyer that the applicant was in the habit of putting the interests of his electors second. The applicant accused his opponent of criminal libel, but the prosecutor’s office terminated the investigation on the ground that the flyer concerned the applicant as a candidate rather than as a public official and so its publication was not a matter for a public prosecution. 
Then, acting as a private prosecutor, the applicant submitted an indictment for libel. The district court dismissed the indictment on the ground that the opponent’s statement was a value judgment within the limits of acceptable criticism of a politician. The applicant complained of a violation of his article 8 rights. The European court held that there had been no such violation. 39 As the European court’s judgment in the Karakó case itself shows, in Petrina v Romania (Application No 78060/01) (unreported), given 14 October 2008, the court had confirmed, at para 19, that the right to protection of reputation is a right which, as an element of private life, falls within the scope of article 8 (“le droit à la protection de la réputation est un droit qui relève, en tant qu’élément de la vie privée, de l’article 8 de la Convention”). The court had gone on, at para 29, to survey its previous case law, ending up with the statement in Pfeifer v Austria (2007) 48 EHRR 175, 183, para 35, that “a person’s reputation, even if that person is criticised in the context of a public debate, forms part of his or her personal identity and psychological integrity …”. 40 In the Karakó case the European court did not depart from that earlier jurisprudence. Rather, it accepted, at para 23, that some attacks on a person’s reputation could be of such a seriously offensive nature as to have an inevitable direct effect on the victim’s private life. But the court took the view that, on the facts, the applicant had not shown that the publication in question had constituted such a serious interference with his private life as to undermine his personal integrity. That being so, the applicant’s reputation alone was at stake in the context of the expression which was said to have damaged it. 41 Contrary to what Mr Robertson suggested, however, this conclusion did not mean that the court was proceeding on the basis that the applicant’s claim in respect of his reputation did not fall within the scope of article 8. That would have been inconsistent with the court’s previous case law and would also have made nonsense of the reasoning in paras 24-29 of the judgment. In particular, in paras 24 and 25 the court is concerned with the inter-relationship of articles 8 and 10 in the circumstances. The outcome of that discussion (para 26) is that, even though the applicant is founding on article 8, the court must consider whether the Hungarian authorities properly applied the principles inherent in article 10. The court concludes that they did: para 27. Putting the two strands together, the court goes on to find, in para 28, that the applicant’s claim that his reputation as a politician has been harmed is not sustainable under article 8 and that a limitation of his opponent’s right to freedom of expression under article 10 would have been disproportionate. That leads, finally, to the conclusion that there has been no violation of article 8. 42 In short, in the Karakó case the European court was concerned with the application of articles 8 and 10 in a situation where, in the court’s view, the applicant had not shown that the attack on his reputation had so seriously interfered with his private life as to undermine his personal integrity. In fact, the court does not mention any specific effects on the applicant’s private life. In the present case, however, as already set out at para 21 above, M does explain how he anticipates that his private life would be affected if his identity were revealed. 
Admittedly, he appears at one point to single out the alleged damage to his reputation. Nevertheless, the court is really being invited to consider the impact of publication of his name on his reputation as a member of the community in which he lives and the effect that this would have on his relationship with other members of that community. In that situation the alleged effect on his reputation should be regarded as one of the reasons why, he contends, a report that identified him would seriously affect his private life. On that basis the report would engage article 8(1).” Thus, the highest court in the land has accepted that reputation is protected by Article 8. Neither BBC nor GNM was a defamation case. If Article 8 does include reputation, as well as privacy, then it may well have a significant impact on domestic defamation law. In Greene v Associated Newspapers Limited QB 972 at , although the Court of Appeal was prepared to assume that reputation was part of Article 8 (The decision was early in the series of Strasbourg decisions on the Article 8/reputation point. See also the discussion of the balance between Articles 8 & 10 in Galloway v Telegraph EMLR 11 CA at [78-83].), it refused to depart from the well-established principles in relation to the grant of interim injunctions in defamation cases. What is known as the “rule in Bonnard v Perryman” – 2 Ch 269 – means that an interim injunction will not generally be granted in a defamation case where the defendant intends to prove the truth of what is to be published, or advance some other substantive defence, unless it can clearly be shown that such defence is bound to fail. For now – at least – it remains much easier to obtain an interim injunction in a “privacy” case than in a “reputation” (defamation) case. The difference between the two causes of action can be critical, as John Terry found to his cost: Terry (formerly LNS) v Persons Unknown EMLR 16 (Tugendhat J). At least, so long as that difference continues to survive in its present form. This issue is part of the bigger question of whether domestic law needs to re-balance the rival interests, to take account of the elevation of “reputation” from a “legitimate aim” referred to in Article 10(2) to a Convention right, encompassed within Article 8. Following the House of Lords decision in Campbell v MGN Limited 2 AC 457, in a case where Articles 8 and 10 both require to be considered, the court will follow the approach set out as four clear propositions by Lord Steyn in Re S (A Child) 1 AC 593 HL at : “First, neither article has as such precedence over the other. Secondly, where the values under the two articles are in conflict, an intense focus on the comparative importance of the specific rights being claimed in the individual case is necessary. Thirdly, the justifications for interfering with or restricting each right must be taken into account. Finally, the proportionality test must be applied to each. For convenience I will call this the ultimate balancing test. This is how I will approach the present case.” This is a far cry from the days in which it could be said that freedom of speech was a “trump card which always wins” (Lord Hoffmann in R v Central Independent Television Fam 192 at 202-204). Domestic defamation law has always had to balance reputation and free speech interests. Over the last 20 years, there has been a re-balancing of interests to take account of Article 10 and give greater weight to freedom of expression. 
Must there now be a further re-balancing the other way and, if so, will it make any real difference? The answer may well be “yes” and “yes”. This is illustrated by the recent case of Flood v Times Newspapers Limited, in which the newspaper contended that its report about an investigation into a police officer was protected by the “Reynolds/Jameel” defence, which protects responsible publication on matters of public interest. In Reynolds v Times Newspapers 2 AC 127 – in which that defence was devised – Lord Nicholls set out the essential test and gave illustrative guidelines (at page 205A-C). He concluded his summary of the relevant principles with these words (at page 205F): “Above all, the court should have particular regard to the importance of freedom of expression. The press discharges vital functions as a bloodhound as well as a watchdog. The court should be slow to conclude that a publication was not in the public interest and, therefore, the public had no right to know, especially when the information is in the field of political discussion. Any lingering doubts should be resolved in favour of publication.” The judge in the Flood case held that the last sentence (in bold) above could not stand in the light of the HRA, the Strasbourg cases and the approach set out in Re S (see above). The essential test, in a defamation case, as in the misuse of private information cases, now comes down to: “…whether publication of the material pursues a legitimate aim, and whether the benefits that will be achieved by its publication are proportionate to the harm that may be done by the interference with the right to reputation“. see Flood EWHC 2375 (QB) at , , [148-149]. The Court of Appeal agreed with the judge: EMLR 26 at : “In that connection, although the point was not mentioned in Jameel 1 AC 359, I agree with the Judge (at … paragraph 146) that the last sentence in the passage quoted above .. from Lord Nicholls’s opinion cannot stand following the 1998 Act: it is clear from In re S (A Child) (Identification: Restrictions on Publication) … and ..BBC .. that Articles 8 and 10 have equal weight.” An application for permission to appeal to the Supreme Court has been lodged in Flood, but it is unlikely (even if permission were to be granted) that this point would be the subject of review. Who gets the benefit of the doubt can be enormously important in freedom of expression cases. This is illustrated vividly by the Naomi Campbell case, in which 5 judges thought that the “journalistic package” was warranted by the public interest, but 4 thought that the publication intruded too far into the claimant’s privacy (principally because of the publication of a photograph taken in the street). The claimant won by a 3:2 majority in the House of Lords (there is an outstanding application to the ECtHR by the defendant) (For an account (by me) of the Naomi Campbell litigation, see Cases That Changed Our Lives (Butterworths LexisNexis 2010), chapter 17). It will be increasingly important that judges give a proper ambit to the scope of editorial discretion, particularly in relation to how the media decide to communicate matters of public interest to readers/viewers (with the inclusion of photographs, names or other details). Uncertainty about the outcome of cases increases the chilling effect. It will also be important in any defamation case to consider, on the facts, whether the nature of any attack on the claimant’s reputation is sufficiently serious to intrude into their private life.
In a privacy case, there must be a “certain level of seriousness” before the matter will fall within Article 8(1). In R (Wood) v Cmr of Police of the Metropolis ( 1 WLR 123 CA at [22-23]) it was said that this “safeguard” was necessary to ensure that the core right protected by article 8, “however protean”, should not be “read so widely that its claims become unreal and unreasonable” (see also ; R (Gillan) v Comr of Police of the Metropolis 2 AC 307 HL at ; M v Secretary of State for Work and Pensions 2 AC 91 HL at ). Where a defamation claim falls below that threshold (For an interesting discussion of the “threshold of seriousness” in the definition of “defamatory” see Thornton v Telegraph EMLR 25 (Tugendhat J) at [20-95]), then there should be no question of considering the protection of reputation as a “right” under Article 8. In such a case, there would be no balancing of competing rights and the protection of reputation would be considered only in relation to Article 10(2). Before leaving the question of whether or not reputation is a “right”, it is worth noting the approach of the South African courts. “Human dignity” is one of the founding values of the South African Constitution (clause 1). The Constitution protects dignity (clause 7), privacy (clause 14) and freedom of expression (clause 16). In Khumalo v Holomisa ZACC 12; 2002 (5) SA 401, the court said: “In the context of the actio injuriarum, our common law has separated the causes of action for claims for injuries to reputation (fama) and dignitas. Dignitas concerns the individual’s own sense of self worth, but included in the concept are a variety of personal rights including, for example, privacy. In our new constitutional order, no sharp line can be drawn between these injuries to personality rights. The value of human dignity in our Constitution is not only concerned with an individual’s sense of self-worth, but constitutes an affirmation of the worth of human beings in our society. It includes the intrinsic worth of human beings shared by all people as well as the individual reputation of each person built upon his or her own individual achievements. The value of human dignity in our Constitution therefore values both the personal sense of self-worth as well as the public’s estimation of the worth or value of an individual. It should also be noted that there is a close link between human dignity and privacy in our constitutional order. [a footnote here in the judgment reads: “See National Coalition .. at para 30: “The present case illustrates how, in particular circumstances, the rights of equality and dignity are closely related, as are the rights of dignity and privacy.”] The right to privacy, entrenched in section 14 of the Constitution, recognises that human beings have a right to a sphere of intimacy and autonomy that should be protected from invasion… This right serves to foster human dignity. No sharp lines then can be drawn between reputation, dignitas and privacy in giving effect to the value of human dignity in our Constitution. … The law of defamation seeks to protect the legitimate interest individuals have in their reputation. To this end, therefore, it is one of the aspects of our law which supports the protection of the value of human dignity. When considering the constitutionality of the law of defamation, therefore, we need to ask whether an appropriate balance is struck between the protection of freedom of expression on the one hand, and the value of human dignity on the other.
South Africa has devised a defence to protect media reports on matters of public interest: National Media Ltd v Bogoshi 3 LRC 6178. In relation to Bogoshi, and defamation law more generally, Sachs J observed in NM and Others v Smith and Others ZACC 6; 2007 (5) SA 250 (CC): “Firstly, it seeks to harmonise as much as possible respect for human dignity and freedom of the press, rather than to rank them in terms of precedence. The emphasis is placed on context, balance and proportionality, and not on formal and arid classifications accompanied by mantras that favour either human dignity or press freedom. The more private the matter, the greater the call for caution on the part of the media, while conversely, the more profound the public interest, the more heavily will it weigh in the scales. Secondly, by stressing the need for the media to take reasonable steps to verify the information to be published, it introduces objective standards that can be determined in advance by the profession and then evaluated on a case-by-case basis by the courts. The result is the creation of clearly identifiable and operational norms, and the fostering in the media of a culture of care and responsibility” [a footnote here in the judgment referred to evidence which had been given about the standards of reasonable reporting set by the media. Professor Anton Harber testified that since legal control over the media was prone to stifle its freedom of expression unduly, most democracies had opted for as much self-regulation as possible. He had noted (in relation to the facts of the case) that it was general journalistic practice not to disclose the identity of a person with HIV without their consent]

Whether or not reputation is a “right” – or part of private life – or an aspect of human dignity – it will continue to be protected in domestic law, which is likely to look more and more at what is “proportionate”.

Heather Rogers QC is a barrister at Doughty Street Chambers.
https://inforrm.org/2010/10/29/is-there-a-right-to-reputation-part-2-heather-rogers-qc/
Day-End, Month-End, Year-End Closing Routines command
From the Navigator menu, choose General Ledger > Closing Routines. Use the Closing Routines to advance your ledger book month, ...

At the end of each month, it is recommended that you complete the following: 1. Reconcile all clearing accounts (Cash Clearing, Paid Out Clearing, etc.), a...

The Point of Sale Transaction Report is used to report and reconcile on Cash, Accounts Receivable, or both. Typically, you will use the Cash and AR option, ...

The purpose of reconciling your cash clearing is to verify that you have correctly recorded and deposited your daily point of sale activity. The idea is tha...

Contrary to popular belief, “year-ends” do not happen on the last day of your financial year. You must wait for bank statements and last supplier bills to ...
https://newaccount1608055419986.freshdesk.com/support/solutions/folders/66000417691
Aspects Of Personal Development

There are many aspects of our lives that help define who we are and how we feel. One of the largest areas of our lives is related to the work that we do or our profession. Therefore this aspect tends to form the most important part of our own personal development. There are some simple questions that you need to ask yourself regarding the current job you do. Do you enjoy your work, or do you just slog away to earn the money you need to survive? Does your work leave you with a sense of fulfillment, or does it seem to suck the joy out of your life? You may be surprised to know that a large percentage of people work just to earn money and take no pleasure in what they do. This in itself is not always a bad thing, because money earned can be used to buy some enjoyment, such as funding holidays, hobbies, etc. On the other hand, people who work only for remuneration may not know what they are missing. True fulfillment can really only be achieved when all areas of your life are balanced and complement each other. Keep in mind that most of the hours of every day are spent working. If you are truly unhappy in your job, you may spend this time thinking about other things you could be doing or where else you could be. This means that not only are you making yourself unhappy going to work every day, but you are also not giving your job the focus and attention it may deserve. Finding yourself in this position does not mean that you should just quit and find something new to do. In some cases, for some people, this may be a good option, but losing an income in the current economy may leave you worse off than you are now. There are, however, other actions that you can take to improve your work life. Find what it is about your job that you do like and focus on these areas. The parts of your job that you don't like are not going to go away, but you may spend less time thinking about the bad by focusing on the good. Find out more about moving into a different position within the organization you are working in. This may not be a better position than the one you currently hold, but it could be a happier one. Take a training course or find a mentor to help improve your skills. You will be amazed at how much better you feel about the job you are doing if you are better able to do it. A life coach or personal development manager can also help you with more ways to improve your work life. In any aspect of personal development, it is important to take your happiness and well-being into your own hands. So take control of your work life by making better choices that leave you more fulfilled.

Creating a Personal Development Plan

Creating a personal development plan can transform an individual’s life. It can help a person get in touch with their internal feelings, determine exactly who they want to be, and decide how they want to live their life. A personal development plan is often used by younger individuals who are constantly bombarded by peer pressure. However, adults can benefit greatly from creating their own personal development plan, taking the first step toward transforming their life.

What You Admire

It is imperative to first decide exactly what types of characteristics you admire in other individuals and would like to internalize on your own. These might be the types of characteristics that are easy to measure, such as staying physically fit. It might include taking on a better and healthier diet or living a life full of integrity.
Choosing the right types of characteristics that you find in other individuals you admire should be at the top of the list. Decide which three characteristics you would like to incorporate into your own life to begin the process of transformation. The types of characteristics you should focus on are the ones that are outside your personal comfort zone. However, they should not be so unattainable that you quickly become discouraged.

Setting Successful Goals

To be successful, you will need to set goals. If the characteristics you selected above are tangible, like physical fitness, you will need to understand exactly what the goal looks like. It might include participating in a marathon, simply walking the neighborhood, or changing bad eating habits to healthier ones. If the characteristics include living a life with high integrity, you will need to detail exactly what that looks like to you. It may mean ending bad behaviors, making better habits, or asking for outside help to achieve the goal.

Flesh It Out

Once the goals are firmly in place, it is time to flesh them out, to give them substance. It is time to expand the plan to add meat to the bones. It is impossible to skimp during this part of the process. Expanding the plan might include attaching timelines or benchmarks where you need to reach a specific goal at a specific time. Be realistic, and flesh out the goals on a timeline that can be easily achieved.

Find the Support

It is much easier to attain the goals in your personal development plan with support. It can be a family member, friend, therapist, physician or an exercise buddy who simply wants to take the route with you. With enough support, it is easy to achieve nearly any type of goal as long as it has been set in a realistic time frame. The last portion of the personal development plan should include a way to celebrate your success. This can be done by keeping a journal or checking off a long list of necessary goals. Part of the celebration should also include a look back, so that you can determine exactly how far you have come on the journey. Remember that slip-ups are part of the process, and you will need to forgive yourself if you need to start the process over again.
https://www.wealthylifestyletips.com/2018/12/aspects-of-personal-development.html
Implementation of Redundancy in Stepper Motors

Some of the recent research activities in the area of electric motor drives for critical applications (such as aerospace and nuclear power plants) are focused on various fault-tolerant motor and drive topologies. After discussing different solutions, this paper focuses on a miniature PM stepper motor design which falls into this fault-tolerant category by providing increased redundancy.

Safety-critical systems are taking on increasing importance in the industrial world. Some examples of such systems are aerospace, transportation, medical and military applications, and nuclear power plants. All of these accommodate numerous electric motor drives, to the point where the plants rely heavily upon them. Any failure in these drives may cause catastrophic failures in the plants, which may be very costly in terms of human resources and capital cost, and is clearly undesirable. The techniques behind most of the electric drives on the market today are not adequate for safety-critical applications. Therefore, there is a need to improve the survivability of critical systems, given the increasing dependence on them and the serious consequences of their failure. One of the common tools used in the design of safety-critical systems is redundancy. Ideally, many fault-tolerant systems should mirror all operations; that is, every operation should be performed on two or more duplicate systems, so that if one fails the other can take over. Therefore, redundancy within the system is an essential aspect.

What is a fault tolerant motor?

The specifications of a fault-tolerant motor include:
- Higher redundancy, by using identical motor segments on the same shaft.
- Electrically isolated phases, to prevent phase-to-phase short circuits.
- Magnetically uncoupled windings, to avoid reduction of performance in the case of a failure of the other phases.
- Physically isolated phases, to prevent propagation of a fault into the neighboring phases and to increase the thermal insulation.

What solutions are offered?

Coupling two motors on the same shaft (Figure 1) is what normally comes to mind first. Although its implementation is straightforward, this solution presents several drawbacks which are often underestimated:
- It costs roughly twice as much as the non-fault-tolerant system.
- The driving motor has to overcome the friction torque and cogging of the idle motor, while in addition it induces iron losses in the latter, reducing the overall efficiency of the system.
- It brings in unpredictable resonance frequencies which may severely impact the proper running of the system.
- It does not at all fulfill the requirements of small size and light weight demanded by the aerospace industry.

Duplicating the windings of a traditional two-phase PM stepper motor is also conceivable. The windings could be made either of individual components placed side by side (Figure 2) in order to create two 2-phase motors or, with the same intent, of two windings wound together (Figure 3). Neither, however, secures optimal thermal insulation, and the fault of one phase may propagate to the one next to it. Overall, these designs require significant modifications of the motor construction and do not exactly meet the specifications of a fault-tolerant motor as described above.
A customized solution keeping the 4 windings independent from each other creates two two-phase PM stepper motors with physically and electrically isolated phases, which are the key to achieving a failure-free system (Figure 4). The windings are only partially magnetically coupled, and the redundant configuration leads to a torque reduction of only 30% when compared to the standard motor configuration at equivalent dissipated power. With a proper heat sink and an increased phase current, the same output torque can be reached.

Conclusion

The specific and patented design of some existing miniature motors (down to Ø6 mm) meets, with very little adaptation, the specifications for a fault-tolerant, robust, reliable motor with the degree of redundancy that is crucial in safety-critical applications relying on the failure-free operation of electric motor drives.
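The 30% figure quoted above is consistent with a simple scaling argument (our assumption, not a derivation from the cited design): taking torque proportional to the ampere-turns, $T \propto N I$, and copper loss $P = I^2 R$, splitting the slot between two electrically independent windings leaves the active winding its $N$ turns wound with half the copper cross-section, so its resistance doubles, $R' = 2R$. At equal dissipated power, $I' = \sqrt{P/R'} = I/\sqrt{2}$, hence $T'/T = 1/\sqrt{2} \approx 0.71$, a torque reduction of about 30%.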
https://www.micromo.com/technical-library/stepper-motor-tutorials/stepper-motor-redundancy
Positions in this function are responsible for the management and manipulation of mostly structured data, with a focus on building business intelligence tools, conducting analysis to distinguish patterns and recognize trends, performing normalization operations and assuring data quality. Depending on the specific role and business line, example responsibilities in this function could include creating specifications to bring data into a common structure, creating product specifications and models, developing data solutions to support analyses, performing analysis, interpreting results, developing actionable insights and presenting recommendations for use across the company. Roles in this function could partner with stakeholders to understand data requirements and develop tools and models such as segmentation, dashboards, data visualizations, decision aids and business case analysis to support the organization. Other roles could include producing and managing the delivery of activity and value analytics to external stakeholders and clients. Team members will typically use business intelligence, data visualization, query, analytic and statistical software to build solutions, perform analysis and interpret data. Positions in this function work on predominantly descriptive and regression-based analytics and tend to leverage subject matter expert views in the design of their analytics and algorithms. This function is not intended for employees performing the following work: production of standard or self-service operational reporting, causal inference-led (healthcare analytics) or data pattern recognition (data science) analysis, and/or image or unstructured data analysis using sophisticated theoretical frameworks.

Primary Responsibilities:
- Deliver daily, weekly and monthly performance forecasting and reporting
- Play a lead role in developing marketing reporting and analysis data infrastructure for Government Programs
- Identify new data points to advance analytic capabilities and help turn them into actionable information
- Drive implementation of the Government Programs view of the consumer into reporting and forecasting
- Build strong relationships with Finance, Product and Sales
- Predict emerging customer needs and develop innovative solutions to meet them
- Solve unique and complex problems with broad impact on the business
- Influence senior leadership to adopt new ideas, products, and/or approaches
- Direct cross-functional and/or cross-segment teams

You'll be rewarded and recognized for your performance in an environment that will challenge you and give you clear direction on what it takes to succeed in your role, as well as provide development for other roles you may be interested in.

UnitedHealth Group is the most diversified health care company in the United States and a leader worldwide in helping people live healthier lives and helping to make the health system work better for everyone.
https://diversity.careercast.com/jobs/senior-data-analyst-uhc-m-r-marketing-minnetonka-mn-minnetonka-mn-55345-119260397-d
2×Universal Ligation Mix is a ready-to-use 2× premix containing T4 DNA Ligase and reaction buffer. The T4 DNA Ligase catalyzes the joining of 5'-phosphate and 3'-hydroxyl ends of sticky-ended or blunt-ended double-stranded DNA or RNA via phosphodiester bonds, and can repair single-stranded nicks in double-stranded DNA, RNA, and DNA/RNA hybrids. The optimized premixed reaction buffer makes the reaction more efficient and more convenient to handle. After a 5-minute reaction at 25°C, the reaction system can be used to transform a variety of chemically competent cells.

Storage and transportation
Transport on wet ice; store at -20°C; valid for 12 months.

| Component | G3341-50 | G3341-100 |
| --- | --- | --- |
| 2×Universal Ligation Mix | 250 μL | 2×250 μL |

Ligation system (recommended 10 μL reaction system)

| Component | Volume |
| --- | --- |
| 2×Universal Ligation Mix | 5 μL |
| Vector | X μL |
| DNA segment | Y μL |
| Nuclease-Free Water | add to 10 μL |

Ligation reaction conditions
Sticky-end ligations run for 5-30 minutes at 25°C; blunt-end ligations run for no more than 2 hours at 25°C, or overnight at 4°C.

Transformation of ligation products
1. Remove competent cells (such as E. coli DH5α, E. coli Top10, etc.) from the -80°C freezer and thaw them on ice;
2. Add the reacted sample to the competent cells, gently tap the bottom of the tube with your fingers to mix, and incubate on ice for 30 minutes;
3. Place the tube in a 42°C water bath for 90 s to heat shock, then quickly place it on ice for 2-5 min;
4. Add 900 μL of sterile SOC or LB medium to the tube. After mixing, incubate on a shaker at 220 rpm and 37°C for 1 hour to recover the bacteria (alternatively, place in a 37°C incubator for static culture for 1 h);
5. According to experimental requirements, plate different volumes of the transformed competent cells onto LB solid medium containing the corresponding antibiotics, spread the cells evenly, and, after the liquid is completely absorbed, place the plate upside down in a 37°C incubator and culture overnight.

Identification of positive clones
Pick monoclonal colonies grown on the plate for colony PCR identification; or, after culture, extract the plasmid and identify it by restriction digestion or PCR; or directly sequence and analyze the extracted plasmid.

Precautions
1. It is recommended to set up the reaction system on ice.
2. The vector and target fragments must be gel purified, and their quality and concentration should be checked by electrophoresis. When the concentration is low, they can be used directly to make up the volume without adding water. When the total volume of the vector and insert is greater than 5 μL, the reaction system can be scaled up to 20 μL.
3. It is recommended that the molar ratio of vector to insert be 1:3~1:10.
4. When electroporation is used for transformation, the ligation product needs to be purified by a column method or by ethanol precipitation.
5. 2×Universal Ligation Mix should be taken out only when needed and returned to -20°C immediately after use. After thawing, it can be aliquoted and frozen to reduce the number of repeated freeze-thaw cycles.
6. When ligating a blunt-ended vector to a DNA fragment, the vector must be dephosphorylated (G3400 recommended) to prevent its self-circularization.
7. Please wear a laboratory coat and disposable gloves during operation.
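As a convenience for note 3, the vector:insert molar ratio converts to an insert mass with the standard cloning arithmetic (a generic formula, not taken from this kit's manual):

```python
def insert_mass_ng(vector_ng, vector_bp, insert_bp, ratio=3.0):
    """Mass of insert (ng) giving the desired insert:vector molar
    ratio; for double-stranded DNA, mass is proportional to length:
        insert_ng = vector_ng * (insert_bp / vector_bp) * ratio"""
    return vector_ng * (insert_bp / vector_bp) * ratio

# 50 ng of a 3 kb vector with a 1 kb insert at a 1:3 vector:insert ratio:
print(insert_mass_ng(50, 3000, 1000, ratio=3.0))  # -> 50.0 ng
```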
https://www.servicebio.com/DNA-Ligase-2-x-Universal-Ligation-Mix-with-T4-DNA-Ligase-and-buffer-molecular-biology-reagent-pd49210683.html
We're celebrating 25 years of Taking Part with a joyous season of work created with our local communities.

The Young Vic’s creative engagement department begins a year of celebrations to mark its 25th anniversary with a series of events entitled Taking Part 25.

Shereen Jasmin Phillips, Director of Young Vic Taking Part, says: “I am incredibly proud to present our 25th birthday season of Taking Part work, for, with and by the local community of Lambeth and Southwark. This year we will be connecting our young people with the queer community in New York through our Communities of Resistance project, imagining what the next 25 years will look like through a magical realism lens with our Of The Cut promenade production that will boast a 30-strong intergenerational community ensemble cast! And for a fourth year we will be taking theatre to community centres, hospitals and schools through YV Unpacked: I Wonder If…, which will explore human connection through an array of stories. Taking Part is staunchly committed to continuing joyous and thought-provoking work with our local communities over the next 25 years and beyond.”

Jun – Jul
Following the sell-out success of the Taking Part production Sundown Kiki, young people will explore how art has historically been used as a collective space of political resistance.

25 Jul – 25 Dec
Journeys, a six-part podcast series celebrating local voices in Lambeth and Southwark, uncovers stories with our Neighbourhood Theatre Company, taking a look at the past 25 years and looking ahead to the next.

30 Jul – 6 Aug
Of The Cut, written by Yasmin Joseph (J'Ouvert) and the Company and directed by Philip J Morris (Sessions), brings together the voices and imagination of the Young Vic’s Taking Part community to create a promenade performance piece which blends film, theatre, audio and imagery, using magical realism to imagine what the next 25 years could look like on and around The Cut. A Taking Part production from Young Vic and TEA films.

14 – 21 Oct
Taking Part’s 25th anniversary season will conclude with The Twenty Thrive Exhibition, an immersive, interactive exhibition combining archive material, photography, and storytelling to explore the role the Young Vic has played in its community over the past 25 years. On tour to community venues in Lambeth and Southwark 10 – 21 Oct.

The Maria, 24 – 29 Oct
Fusing dance, music and dialogue to explore the human relationship in its many forms, I Wonder If… is a dazzling new play directed by Daniel Bailey (Red Pitch) and devised by the company.
https://www.youngvic.org/taking-part/taking-part-25
I have written before about courts calling to account lenders who renege on loan modifications after the borrower has made numerous trial plan payments. Courts have ruled against lenders on theories of promissory estoppel and of offer and acceptance creating a contract, notwithstanding the lack of a signed, written modification or of a modification signed by the lender. Usually, when the property is about to be, or already has been, sold at a trustee's sale, the borrower consults a Sacramento real estate attorney about such a situation. In a recent decision the lender was disappointed when the court found that the plaintiff properly alleged numerous claims against it.

In James Rufini v. CitiMortgage, Inc., the Sonoma homeowner sought a loan modification. In June 2009 CitiMortgage approved the loan modification and told him he would receive a permanent modification in October after timely making three trial payments. He continued making the trial payments through December, and in January the lender told him that his permanent loan modification agreement would be ready in three days. Three months later, since he had not received the written agreement, he rented out the house (and lived with his son) to offset expenses while waiting for the modification. The modification was then denied because the home was not "owner-occupied." The lender then refused to accept his mortgage payments at the modified level. A notice of trustee's sale was recorded, and the borrower got a 30-day postponement while the lender was requesting additional information, such as income information. Meanwhile, CitiMortgage transferred the loan to PennyMac. CitiMortgage kept discussing the modification, and the property was foreclosed. The borrower claimed that the lender's contact said he had known all along that the loan had been transferred to PennyMac.

BREACH OF CONTRACT
The borrower sued the lender for breach of contract, claiming it was the agreement to modify the loan that was breached. The court first reviewed the HAMP modification procedure:
1. The participating lender initially determines whether a borrower satisfies certain threshold requirements regarding the amount of the loan balance, monthly payment, and owner occupancy. If the borrower qualifies, it then implements the HAMP modification process in two stages.
2. In the first stage, it provides the borrower with a "Trial Period Plan" (TPP) setting forth the trial payment terms, instructs the borrower to sign and return the TPP and other documents, and requests the first trial payment.
3. In the second stage, if the borrower has made all required trial payments and complied with all of the TPP's other terms, and if the borrower's representations on which the modification is based remain correct, the lender must offer the borrower a permanent loan modification.

The court reviewed the decisions holding that, when the borrower has a TPP and makes the three timely payments, the lender must offer the borrower a permanent loan modification. If the lender does not do so, the borrower may sue for breach of the trial modification plan. The court here agreed; Rufini was suing for breach of the modification plan. He could also allege a claim for breach of the duty of good faith and fair dealing based on the lender's failure to modify the loan.
NEGLIGENT MISREPRESENTATION
The court first set out the elements of negligent misrepresentation: (1) the defendant made a false representation; (2) without reasonable grounds for believing it to be true; (3) with the intent to induce the plaintiff's reliance; (4) justifiable reliance on the representation; and (5) resulting harm.

The borrower claimed that CitiMortgage falsely told him that he was approved for a permanent modification and thereafter carried on the pretense of efforts to finalize it, while planning to foreclose, intending that he rely on the representations. He reasonably relied on them in spending time on modification negotiations and in forgoing other opportunities. The lender argued that lenders owe their borrowers no duty not to misrepresent the truth. HA HA!, the court said: lenders have a common law duty not to make misleading representations of material facts.

BUSINESS & PROFESSIONS CODE §17200
This is the "Unfair Competition Law." The homeowner alleged that the lender committed an unlawful business practice when it denied his loan modification in bad faith "on the grounds that the home was not owner occupied when in fact it was owner occupied," and pretended to engage in loan modification efforts while actually intending to foreclose. The bank argued that this was insufficient because it failed to allege a predicate act in violation of a statute. The court found that the statute's language makes clear that a practice may be unfair even if it is not prohibited by some other law. Next, the bank argued that he could not bring the unfair competition claim because he had to allege that he lost money or property. However, he alleged that the unfair practices deprived him of the opportunity to pursue other means of avoiding foreclosure. Lastly, the bank argued that the unfair competition law applies only to ongoing conduct. But no: that used to be the case, but no longer. The law allows basing a claim on a single instance of unfair conduct.

This appeal was from a demurrer to the complaint, which alleged that the complaint, as written, did not support these causes of action. But these are just allegations, and the plaintiff is a long way from proving them. He still has to show the house was owner occupied, and it does not sound like it was. He also has to convince a judge or jury that the CitiMortgage employee knew the property was going to foreclosure but kept negotiating a modification anyway.
https://www.calrealestatelawyersblog.com/drafting-a-lawsuit-when-the-le/
High efficiency of 5-aminolevulinate-photodynamic treatment using UVA irradiation. Photodynamic therapy (PDT) is being used clinically for the treatment of skin cancers. One concept of delivering the employed photosensitizer directly to target cells is to stimulate cellular synthesis of sensitizers such as porphyrins. ALA (5-aminolevulinate) is applied as a precursor of porphyrins, which then serve as endogenous photosensitizers. Upon irradiation, reactive oxygen species, predominantly singlet oxygen, are generated, leading to cell death. ALA-PDT using red light (550-750 nm) is known to lead to the activation of stress kinases, such as c-Jun N-terminal kinase and p38. These kinases are also activated by UVA (320-400 nm), whose biological effects are mediated in part by singlet oxygen. In the present study, the efficiency of a combination of both treatment strategies, ALA-PDT and UVA, in cytotoxicity and activation of stress kinases was investigated, taking human skin fibroblasts as a model. Compared with the commonly used ALA-PDT with red light (LD50 = 13.5 J/cm²), UVA-ALA-PDT was 40-fold more potent in killing cultured human skin fibroblasts (LD50 = 0.35 J/cm²) and still 10-fold more potent than ALA-PDT with green light (LD50 = 4.5 J/cm²). Its toxicity relied on the formation of singlet oxygen, as was shown employing modulators of singlet oxygen lifetime. In line with these data, strong activation of the stress kinase p38 was obtained in ALA-pretreated cells irradiated with UVA at doses two orders of magnitude lower than necessary for a comparable activation of p38 by UVA in control cells. Taken together, these data suggest UVA-ALA-PDT as a potentially interesting new approach in the photodynamic treatment of skin diseases.
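The fold-potency figures quoted above follow directly from the LD50 ratios: a lower LD50 means less light is needed for the same kill rate. A minimal numerical check, using the values from the abstract:

```python
# Potency ratio = LD50(reference) / LD50(test); a lower LD50 means higher potency
ld50 = {"red": 13.5, "green": 4.5, "uva": 0.35}  # J/cm^2

print(ld50["red"] / ld50["uva"])    # ~38.6, reported as ~40-fold
print(ld50["green"] / ld50["uva"])  # ~12.9, reported as ~10-fold
```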
This page will walk you through a methodical approach to rendering contour lines from an array of spot elevations (Rabenhorst and McDermott, 1989). To get the most from this demonstration, I suggest that you print the illustration in the attached image file. Find a pencil (preferably one with an eraser!) and straightedge, and duplicate the steps illustrated below. A "Try This!" activity will follow this step-by-step introduction, giving you a chance to go solo. Starting at the highest elevation, draw straight lines to the nearest neighboring spot elevations. Once you have connected to all of the points that neighbor the highest point, begin again at the second highest elevation. (You will have to make some subjective decisions as to which points are "neighbors" and which are not.) Taking care not to draw triangles across the stream, continue until the surface is completely triangulated. The result is a triangulated irregular network (TIN). A TIN is a vector representation of a continuous surface that consists entirely of triangular facets. The vertices of the triangles are spot elevations that may have been measured in the field by leveling, or in a photogrammetrist's workshop with a stereoplotter, or by other means. (Spot elevations produced photogrammetrically are called mass points.) A useful characteristic of TINs is that each triangular facet has a single slope degree and direction. With a little imagination and practice, you can visualize the underlying surface from the TIN even without drawing contours. Wonder why I suggest that you not let triangle sides that make up the TIN cross the stream? Well, if you did, the stream would appear to run along the side of a hill, instead of down a valley as it should. In practice, spot elevations would always be measured at several points along the stream, and along ridges as well. Photogrammetrists refer to spot elevations collected along linear features as breaklines (Maune, 2007). I omitted breaklines from this example just to make a point. You may notice that there is more than one correct way to draw the TIN. As you will see, deciding which spot elevations are "near neighbors" and which are not is subjective in some cases. Related to this element of subjectivity is the fact that the fidelity of a contour map depends in large part on the distribution of spot elevations on which it is based. In general, the density of spot elevations should be greater where terrain elevations vary greatly, and sparser where the terrain varies subtly. Similarly, the smaller the contour interval you intend to use, the more spot elevations you need. (There are algorithms for triangulating irregular arrays that produce unique solutions. One approach is called Delaunay Triangulation which, in one of its constrained forms, is useful for representing terrain surfaces. The distinguishing geometric characteristic of a Delaunay triangulation is that the circle passing through the three vertices of each triangle, its circumcircle, contains no other vertex.) Now draw ticks to mark the points at which elevation contours intersect each triangle side. For instance, see the triangle side that connects the spot elevations 2360 and 2480 in the lower left corner of Figure 7.6.3, above? One tick mark is drawn on that triangle side where a contour representing elevation 2400 intersects. Now find the two spot elevations, 2480 and 2750, in the same lower left corner. Note that three tick marks are placed where contours representing elevations 2500, 2600, and 2700 intersect.
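The tick-mark step is just linear interpolation along each triangle edge: a contour of value c crosses the edge between elevations z1 and z2 at fractional distance (c − z1)/(z2 − z1). Here is a small sketch of that idea (the function name and default interval are mine, not from the text):

```python
def edge_crossings(z1, z2, interval=100):
    """Fractional positions (0..1) along an edge from elevation z1 to z2
    where contours at the given interval cross, paired with their values."""
    lo, hi = sorted((z1, z2))
    # first contour level strictly above the lower endpoint
    c = ((lo // interval) + 1) * interval
    out = []
    while c < hi:
        t = (c - z1) / (z2 - z1)  # fraction of the way from z1 toward z2
        out.append((c, t))
        c += interval
    return out

# Edges from the worked example above:
print(edge_crossings(2360, 2480))  # [(2400, 0.333...)]  one tick
print(edge_crossings(2480, 2750))  # ticks at 2500, 2600, 2700
```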
This step should remind you of the equal interval classification scheme you read about in Chapter 3. The right choice of contour interval depends on the goal of the mapping project. In general, contour intervals increase in proportion to the variability of the terrain surface. It should be noted that the assumption that elevations increase or decrease at a constant rate is not always correct, of course. We will consider that issue in more detail later. Finally, draw your contour lines. Working downslope from the highest elevation, thread contours through ticks of equal value. Move to the next highest elevation when the surface seems ambiguous. Keep in mind the following characteristics of contour lines (Rabenhorst and McDermott, 1989): - Contours should always point upstream in valleys - Contours should always point downridge along ridges - Adjacent contours should always be sequential or equivalent - Contours should never split into two - Contours should never cross or loop - Contours should never spiral - Contours should never stop in the middle of a map How does your finished map compare with the one I drew below? Try This! Now try your hand at contouring on your own. The purpose of this practice activity is to give you more experience in contouring terrain surfaces. - First, view an image of an irregular array of 16 spot elevations. - Print the image. - Use the procedure outlined in this chapter to draw contour lines that represent the terrain surface that the spot elevations were sampled from. You may find this to be a moderately challenging task that takes about a half hour to do well. TIP: label the tick marks to make it easier to connect them. - When finished, compare your result to an existing map. Here are a couple of somewhat simpler problems and solutions in case you need a little more practice. Practice Problem #1 Kevin Sabo (personal communication, Winter 2002) remarked that "If you were unfortunate enough to be hand-contouring data in the 1960's and 70's, you may at least have had the aid of a Gerber Variable Scale. After hand contouring in Chapter 7, I sure wished I had my Gerber!"
https://www.e-education.psu.edu/natureofgeoinfo/c7_p6.html
Damages Due To Defective Wiring Becoming Weary
Among the biggest dangers of defective wiring is the possibility of an electrical mishap causing electric shock or electrocution. This can occur in many situations, including loose, broken, exposed, or live wiring, or when electrical equipment has been improperly de-energized or grounded. The risk of electric shock often arises when wiring or cable is defective. It is essential that anyone who handles electrical equipment regularly knows how to identify defective wiring and follows basic safety practices. This article provides some general information about dealing properly with the potential threat of electrical shock.

There are two types of electrical defect: physical defects and hypothetical risks. A physical defect is one in which something is physically wrong with the item, such as a break in the wiring or the separation of a wire from its socket. A hypothetical risk is a risk that does not depend on any physical factor. For instance, many scientists assume that cancer is based largely on genetics and that poor nutrition can increase the risk of developing many kinds of cancer. Although both of these are fallible assumptions, it is still important to recognize the difference between pure risk, which involves only the possibility of loss, and speculative risk, which involves the possibility of either loss or gain.

With any type of injury claim, the first thing you need to do is determine whether the harm caused to you resulted from defective equipment or faulty wiring. If it is determined that you suffered harm as the result of defective equipment or wiring, then you should move on to filing a claim. Even when the cause of your injury is a purely hypothetical risk, you still need to determine what the cost will be in the event that you suffer an injury as a result. Identifying the actual damages you suffered will help you decide whether to file a lawsuit or settle the case out of court. In many cases, the cost of pursuing a lawsuit is far greater than the potential savings of settling. Knowing the cost of filing your claim will therefore give you a better idea of your potential settlement outcome.

If your injury attorney finds that you have suffered serious injuries as a result of defective wiring or similar incidents, he or she may recommend that you seek monetary compensation. Some injuries, such as electrical shock, may require lifelong medical care, while others may require only temporary medical attention. To determine how much money you should seek, consult an attorney experienced in handling these cases to assess your exact personal needs. This will allow you to better understand the extent of your injuries.

Another common problem associated with defective wiring and similar accidents is psychological injury. It is not uncommon for those who have sustained injuries as a result of faulty electrical wiring to be depressed or anxious about their situation. These individuals may also develop psychological issues as a result of their inability to work for long periods of time due to their injuries, and may feel as though they are unable to perform as well as they previously did. If defective electrical wiring caused you a mental disability, it is important to discuss these issues with your personal injury attorney as soon as possible.
It is also important to remember that when filing a lawsuit over defective electrical wiring, you may be required to pay a substantial amount of money upfront. In addition to paying for your doctor's bills and medications, you may also need to cover lost wages and medical expenses related to your injuries. If you do not have the money available upfront to cover these costs, it may be necessary to work with an experienced case management company to negotiate a settlement. Before hiring such a company, however, carefully evaluate any company you are considering to ensure it has the experience and expertise needed to represent your interests in a lawsuit involving defective electrical wiring.

Your personal injury attorney will be able to help you understand your chances of successfully obtaining compensation. If you were working on a job site where defective wiring was present, your attorney will likely work hard to prove that the employer was responsible for your injuries; in some cases, your attorney may even be able to sue the company for failing to maintain its work equipment properly. It may also be worth considering an electrical workers' compensation lawyer if you were injured on the job. In some cases you may not be able to sue the company in its entirety, but you can file a claim for negligence and seek monetary compensation from the liable party. Be aware that various circumstances may prevent you from suing the individual or company responsible for your condition; your personal injury attorney will advise you whether or not you are entitled to pursue such a claim.
https://www.taskin.tv/damages-due-to-defective-wiring-becoming-weary/
Brought to you by Investec Switzerland. In a report published by SECO, a group of economic experts working for Switzerland's federal government anticipates an acceleration in economic growth in Switzerland from 0.8% in 2015 to 1.5% in 2016 and 1.9% in 2017. Alongside this, they expect unemployment to rise from 3.3% in 2015 to an annual average of 3.6% in 2016, before falling again to 3.4% in 2017. However, they also see risks.

The group mainly pins the slowdown in 2015 on the appreciation of the Swiss franc, which exerted drag on exports in the first three quarters. Although the balance of trade in goods delivered a positive contribution to growth in the 3rd quarter, the contribution from the balance of trade in services was negative. Furthermore, key components of the domestic economy lost momentum over recent quarters. This applies in particular to the construction industry, which reported its first decrease after several years of strong growth. Domestic household and government consumption, in particular, provided a positive boost in the 3rd quarter.

Survey-based sentiment indicators, such as the one conducted by the KOF, still show no clear signs of an economic turnaround. The surveys indicate a certain degree of stabilisation since the summer, including in the hardest-hit industries: trade and tourism.

Domestic demand should remain an important pillar of the economy over the entire forecasting horizon. Given persistent negative inflation, domestic households can expect real increases in purchasing power in 2016, which should at least partially flow into consumer spending. Foreign trade is not expected to provide any significant impetus for the current year. For the next two years, the Expert Group anticipates positive contributions to growth from foreign trade in goods and services. There are signs of continued weakness in construction investment in 2016, but no crisis. In addition to the low interest rate environment, sustained population growth should support investment in construction and domestic household consumption.

The negative trend in prices in many sectors is likely to continue for a few quarters, until the effects of the appreciation of the Swiss franc and lower oil prices fade. The group expects consumer price inflation to remain slightly negative in 2016 (-0.1%) and not return to positive territory until 2017 (+0.2%).

Economic risks
The normalisation of U.S. monetary policy still represents a risk for the economic prospects of various emerging markets, with potential spillover to the global economy. In view of their fragile condition, key emerging economies might be affected by considerable turmoil and capital outflows as soon as interest rates in the USA begin to increase. Such an evolution could affect economic growth in various developed countries, and indirectly in Switzerland as well.

The Bank for International Settlements recently released its early warning indicators for stress in domestic banking systems. In China, the credit-to-GDP gap, the difference between the current credit-to-GDP ratio and its long-run trend, rose to 30.1% in the first quarter of 2016, three times the level signalling elevated stress. The figures can be found on page 22 of the BIS report. The uncertainty regarding the future rules on immigration also poses significant risks for the Swiss economy.
Restrictive implementation of the Mass Immigration Initiative, which would sharply reduce migration, could have a detrimental effect on domestic demand as well as on corporate decisions on investment and business location. For more stories like this on Switzerland follow us on Facebook and Twitter.
https://lenews.ch/2016/09/21/swiss-economy-slowly-gains-momentum-but-risks-remain/
INTRODUCTION
============

Non-communicable diseases are the leading causes of death globally, killing more people each year than all other causes combined. Of the 57 million deaths that occurred globally in 2008, 36 million were due to non-communicable diseases, comprising mainly cardiovascular diseases, cancers, diabetes, and chronic lung diseases [1]. The combined burden of these diseases is rising fastest among the lower-income countries, populations, and communities [2]. Contrary to popular opinion, available data demonstrate that nearly 80% of deaths due to non-communicable diseases occur in low- and middle-income countries [3]. In 2011, the World Health Organization [4] estimated that 34% of deaths in Ethiopia were due to non-communicable diseases, with a national cardiovascular disease prevalence of 15%, cancer and chronic obstructive pulmonary disease prevalences of 4% each, and a diabetes mellitus prevalence of 2%.

Cancer is the second largest contributor to non-communicable disease deaths and causes a great deal of suffering worldwide [5]. Cancer is a leading cause of disease worldwide, with an estimated 14.1 million new cancer cases occurring in 2012 [6]. It is now the third leading cause of death worldwide, with 8.2 million deaths in 2012 [6]. More than half of all cancer deaths each year are due to lung, stomach, liver, colorectal and female breast cancers [6]. These cancers accounted for more than 40% of all cases diagnosed worldwide. In men, lung cancer was the most common cancer (16.7% of all new cases in men). Breast cancer was by far the most common cancer diagnosed in women (25.2% of all new cases in women) [6]. By 2030, it is projected that 26 million new cancer cases and 17 million cancer deaths will occur [7]. This represents an increase of 68% compared with 2012 (66% in low and medium human development index [HDI] countries and 56% in high and very high HDI countries) [6,8].

Moreover, the global distribution of cancer and the types of cancer that predominate continue to change, especially in economically developing countries. Low- and middle-income countries accounted for about half (51%) of all cancers worldwide in 1975 [9]. This proportion increased to 55% in 2006 and is projected to reach 61% by 2050 [9]. The global increase in the cancer burden and its disproportionate impact on economically developing countries are being propelled both by demographic changes in the populations at risk and by temporal and geographic shifts in the distribution of major risk factors. Colorectal, lung, female breast and prostate cancers were the main contributors in most regions of the world, explaining 18% to 50% of the total healthy years lost [10]. These cancers are no longer largely confined to Western industrialized countries but are among the most common cancers worldwide. An estimated 169.3 million years of healthy life were lost globally because of cancer in 2008, and approximately 44% of cancer cases and 53% of cancer deaths occur in countries at a low or medium level of the HDI [11]. In sub-Saharan Africa, the cancer burden is predicted to increase by 85% by 2030, due to the increase in life expectancy, changes in diet and lifestyle, and the lower burden of communicable diseases [5].
According to the International Agency for Research on Cancer, about 715,000 new cancer cases and 542,000 cancer deaths occurred in 2008 in Africa. These numbers are projected to nearly double (1.28 million new cancer cases and 970,000 cancer deaths) by 2030 simply due to the aging and growth of the population [12], with the potential to be even higher because of the adoption of behaviors and lifestyles associated with economic development, such as smoking, unhealthy diet, and physical inactivity [13]. Despite this growing burden, cancer continues to receive low public health priority in Africa, largely because of limited resources and other pressing public health problems, including communicable diseases such as acquired immune deficiency syndrome (AIDS)/human immunodeficiency virus (HIV) infection, malaria, and tuberculosis [14]. It may also be due in part to a lack of awareness about the magnitude of the current and future cancer burden among policy makers, healthcare providers, the general public, and international private or public health agencies [14].

Ethiopia is home to a growing population of more than 105 million people, is the second most populous country in Africa, and is expected to become the ninth most populous country in the world by 2050, with an estimated parallel rise in cancer burden [15]. In Ethiopia, cancer is estimated to account for about 5.8% of total national mortality [16]. Although population-based data do not exist in the country except for Addis Ababa, it is estimated that the annual incidence of cancer is around 60,960 cases and the annual mortality over 44,000 [16]. For people under the age of 75 years, the risk of being diagnosed with cancer is 11.3% and the risk of dying from the disease is 9.4%; the five-year prevalence for 2003 to 2008 was 224.2 per 100,000 people [17]. The most prevalent cancers in Ethiopia among the adult population are breast cancer (30.2%), cancer of the cervix (13.4%), and colorectal cancer (5.7%). About two-thirds of reported annual cancer deaths occur among women [16]. Based on 2013 data from the Addis Ababa Cancer Registry, breast cancer accounted for 31.4%, cervical cancer for 14.3% and ovarian cancer for 6.3% of all cancer cases [18]. According to a qualitative study at the country's only oncology center, at Tikur Anbessa Specialized Hospital, limited patient awareness along with a lack of resources contributes to diagnosis of cancers at advanced stages, which leads to poor patient outcomes [19]. However, patterns of cancer, their stages, and risk factors for advanced cancers have not been well studied and documented in Ethiopia, as prior studies have largely focused on communicable diseases such as AIDS/HIV, malaria, and tuberculosis [20]. To fill this substantial gap, this study examines patterns of cancer occurrence, stages of cancer at diagnosis, and risk factors associated with advanced stage cancers among patients at Tikur Anbessa Hospital from 2010 to 2014.

MATERIALS AND METHODS
=====================

1. Study design and setting
---------------------------

A hospital-based retrospective cross-sectional study was conducted based on a medical record review of selected patients at Tikur Anbessa Hospital in Addis Ababa, Ethiopia.
Tikur Anbessa Hospital is Ethiopia's highest tertiary level referral and teaching hospital, and the nation's sole cancer referral center. It is staffed with the nation's most senior specialists and faculty from Addis Ababa University. Tikur Anbessa Hospital is a training center for undergraduate and postgraduate medical students, dentists, nurses, pharmacists, and public health specialists. The hospital serves approximately 370,000 to 400,000 patients a year, and the emergency department sees around 80,000 patients a year. This study was conducted at the pathology department of the school of medicine at Tikur Anbessa Hospital. Before data collection began, an ethical approval letter was obtained from Jimma University (ethical clearance letter number RPGC/3061/2015).

2. Study population
-------------------

The inclusion criteria were all sampled patients who were diagnosed with cancer by biopsy at the Tikur Anbessa Hospital pathology department between January 1st, 2010 and December 15th, 2014. The exclusion criteria were inconsistency of data among the three data sources (the biopsy logbook, the physician biopsy request forms, and the patient cards), data not found in all three data sources, data of a patient already recorded in previous records, and a diagnosis of a non-cancerous or benign lesion.

3. Sampling procedures
----------------------

All samples are given biopsy numbers in the biopsy logbook by calendar year, starting in January and ending in December. The samples were selected by a stratified sampling technique, treating each study year as a stratum and drawing from each stratum a number of samples proportional to its size. Samples within each stratum were then included in the study by simple random sampling using randomly generated numbers.

4. Sample size calculation
--------------------------

The sample size was calculated using the formula for a single proportion with a finite population:

$$n_f = \frac{N \, Z_{\alpha/2}^2 \, P(1-P)}{d^2 (N-1) + Z_{\alpha/2}^2 \, P(1-P)}$$

where n_f is the minimum sample size, which was calculated to be 919. P is taken from a recent report of the Addis Ababa Cancer Registry on cancer patterns, in which the most common cancer (breast) accounted for 33% and the least common (esophageal) for 2% [21]. d is the margin of error, set at 3% since P is less than 0.5. Z_{α/2} is the standard normal variable at the (1 − α) confidence level, with α = 5% for a 95% CI. N is the population size, i.e., the total number of patients who had a biopsy between January 1st, 2010 and December 15th, 2014, which was 35,400.

5. Operational definitions
--------------------------

Time to presentation is the time from the start of the chief complaint up to the first time the patient seeks medical care. Type of cancer by site is the diagnosis of cancer after biopsy, expressed by site of occurrence; the values include breast cancer, cervical cancer, prostate cancer, ovarian cancer, colorectal cancer, hematologic malignancies including leukemia and lymphoma, lung cancer, gastric cancer, esophageal cancer, liver cancer (hepatoma), skin, bone and soft tissue cancer, retinoblastoma, thyroid cancer, endometrial/genital cancers, bladder cancer, brain cancer, ear, nose and throat cancer, renal cancer, and nephroblastoma. Stage of cancer at diagnosis is a measure of disease progression, detailing the degree to which the cancer has advanced.
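As a numerical check, the finite-population formula above reproduces the stated minimum sample size with P = 0.33, d = 0.03, Z = 1.96, and N = 35,400 (a quick sketch; the function and variable names are mine):

```python
def finite_population_sample_size(N, P, d, z=1.96):
    """Single-proportion sample size with finite-population correction."""
    num = N * z**2 * P * (1 - P)
    den = d**2 * (N - 1) + z**2 * P * (1 - P)
    return num / den

n = finite_population_sample_size(N=35_400, P=0.33, d=0.03)
print(round(n))  # 919, matching the reported minimum sample size
```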
The Surveillance, Epidemiology, and End Results (SEER) program has standardized and simplified staging to ensure consistent definitions over time and is used in countries with advanced population-based cancer registry programs. In-situ cancer is an early cancer that is present only in the layers of cells in which it began. Localized cancer is cancer limited to the organ in which it began, without evidence of spread. Regional cancer is cancer that has spread beyond the primary site to nearby organs, lymph nodes (LNs) and tissues. Distant cancer is cancer that has spread from the primary site to distant organs. Unstaged cancer is cancer for which there is not enough information to indicate a stage. Advanced stage cancer is regional and/or distant cancer. The dependent variable is stage of cancer (advanced stage of cancer). The independent variables are age, sex, time of presentation, and type of cancer.

6. Statistical analysis
-----------------------

Descriptive statistical analysis was performed across the different types of cancers after stratifying by sex. The Pearson chi-square test was used for categorical variables. ANOVA and t-tests were computed for continuous variables. Univariate and multivariate binary logistic regression analyses were performed to examine factors associated with advanced stage cancers. The regression models were built with advanced stage cancer as the outcome variable after adjusting for covariates. Covariates were included in the final regression model a priori, based on our conceptual framework. Adjusted ORs with 95% CIs were used to determine the magnitude of associations between advanced stage cancers and various potential risk factors. Two-tailed statistical significance was assessed at α < 0.05. All analyses were conducted in 2019 using the statistical software SAS 9.4 (SAS Institute Inc., Cary, NC, USA).

RESULTS
=======

Table 1 shows the basic characteristics of the study participants. Of the 919 patients in the study, 254 (27.6%) were males and 665 (72.4%) were females. The mean age was 45.2 ± 19.1 years among males and 44.6 ± 15.1 years among females. The most common malignancies among males were bone and soft tissue (16.5%), colorectal (12.2%), and esophageal (9.1%). Among females, the most common cancers were cervical (39.7%), breast (18.3%), and ovarian (7.1%). Among both males and females, retinoblastoma and thyroid cancers had the longest intervals (in months) between the onset of symptoms and presentation (34.0 and 19.3 months, respectively, among males; 23.3 and 23.9 months among females; P < 0.01).

Distribution patterns and stages of cancers at diagnosis are provided in Table 2. At the time of diagnosis, 3.9% and 12.0% of the malignancies had already metastasized distantly among males and females, respectively, while regionally metastasized malignancies accounted for 20.9% in males and 25.6% in females. Half (50.0%) of the cancers among males and 45.0% of the cancers among females were unstageable. Males had fewer advanced (regional and distant metastasis) stage cancers at diagnosis than females (24.8% vs. 37.6%, P < 0.01). Among males, 46.7% of prostate and 29.0% of colorectal cancers were at advanced stages at the time of diagnosis. Among females, 59.0% of breast, 41.7% of cervical, and 17.2% of colorectal cancers were at advanced stages at the time of diagnosis.
Table 3 shows the prevalence of, and risk factors associated with, advanced stage cancers stratified by sex. Among females, the odds of having an advanced stage malignancy were 3.21 times as high (OR = 3.21; 95% CI = 1.69-6.10) in those who presented more than 12 months after the onset of symptoms, compared with those who presented within 6 months. Among males, prostate (OR = 5.22; 95% CI = 1.26-21.60) and breast (OR = 9.73; 95% CI = 2.31-40.92) cancers were more likely to be diagnosed at advanced stages, in comparison to bone and soft tissue cancers. Among females, breast cancer (OR = 1.93; 95% CI = 1.23-3.03) was more likely to be diagnosed at advanced stages, while colorectal (OR = 0.30; 95% CI = 0.11-0.83), gastric (OR = 0.20; 95% CI = 0.04-0.90), esophageal (OR = 0.39; 95% CI = 0.17-0.90), bone and soft tissue (OR = 0.04; 95% CI = 0.01-0.29), thyroid (OR = 0.30; 95% CI = 0.12-0.78), and endometrial and genital (OR = 0.11; 95% CI = 0.02-0.48) cancers were less likely to be diagnosed at advanced stages, in comparison to cervical cancer. The distribution of cancer at diagnosis by age group, stratified by sex, at Tikur Anbessa Hospital in Ethiopia, 2010 to 2014, is illustrated in Table 4. Selected cancers with high incidence rates and related factors are shown in Table 5.

DISCUSSION
==========

The majority of the biopsy-confirmed cancer diagnoses were made in females. We found that bone and soft tissue (16.5%), colorectal (12.2%), and esophageal (9.1%) were the most common cancer diagnoses among males. Among females, the most common cancers were cervical (39.7%), breast (18.3%), and ovarian (7.1%). At the time of diagnosis, 3.9% and 12.0% of the malignancies had already metastasized distantly among males and females, respectively. Advanced stage cancers were more common among females than males at the time of diagnosis. Prostate and breast cancers among males, and breast cancer among females, were more likely to be diagnosed at advanced stages. Significant proportions of cervical and colorectal cancers were also diagnosed at advanced stages. Delayed presentation from the onset of symptoms was associated with more advanced stage cancer diagnoses among females.

Our study found that the majority of patients with cancer were females (72.4%), with a mean age of 44.6 ± 15.1 years. This finding concurs with a recent report of the Addis Ababa cancer registry, in which 67% of the registered cancer cases were women, with the highest incidence rate (38.4%) in the age group of 30 to 49 years [21]. Similarly, a higher burden of cancer among females has been reported in other African countries [6,22]. A possible explanation could be the fact that females are more likely to be in contact with clinicians and to utilize health services, largely due to pregnancy and childbirth [23]. The occurrence of different malignancies among females, especially those related to reproductive organs, could be the reason for this sex difference, and the age group pattern could be due to the repeated exposure of this reproductive age group to different cancer-related risk factors [21]. Particular attention needs to be paid to this issue, since predominantly the productive age groups, mainly females, are being affected, which in turn could have an impact on the economy of the country.
The results of this study showed that the most common cancers among females were cervical (39.7%), breast (18.3%), and ovarian (7.1%). These three top cancers were also reported by the Addis Ababa cancer registry, where the most common cancers in females were cancer of the breast (33%), followed by cancer of the cervix uteri (17%) and ovary (6%) [21]. These findings are also similar to studies on the pattern and trends of cancer in Odisha, India and in South Africa, except that breast cancer in those countries is the most common cancer diagnosis [24,25]. In India, carcinomas of the breast (28.94%), cervix (23.66%), and ovary (16.11%) are the leading cancers among females; in South Africa, breast cancer was the leading cancer, followed by cervical and colorectal cancer [24,25]. This similarity could be explained by the similar social and economic context of these countries, as well as by the lack of awareness of patients about the symptoms of cancer. The significant burden of cervical cancer has been attributed to the high prevalence of human papilloma virus (HPV) infection coupled with a lack of screening services for prevention and early detection of the disease [26].

In the Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) 2012 report, the most commonly diagnosed cancers worldwide are those of the lung (13.0%), breast (11.9%), and colorectum (9.7%) [5]. These differences from our study findings are mainly due to the disparity in standards of living between developed and developing countries: a high prevalence of infection-related cancers, especially cervical cancer, in developing countries, and more lifestyle risk factors related to cancer, mainly smoking, in developed countries. This implies that in Ethiopia the risk could increase significantly in the near future, since there is a high prevalence of infection-related cancers coupled with the adoption of a Western lifestyle in the country [27]. In addition, these findings point to a problem in awareness among the general public regarding cancer, or in the availability and utilization of early screening modalities [26,28].

The study found that the most common malignancies among males were bone and soft tissue (16.5%), colorectal (12.2%), and esophageal (9.1%), with some similarity to the report of the Addis Ababa cancer registry, in which colorectal cancer was the leading cancer among males at 19% [21]. However, the result differs from the study at Yirga Alem Hospital, where the most common cancers by site in men were non-Hodgkin's lymphoma (13.9%), soft-tissue sarcoma (12.7%), and non-melanoma skin cancer (12.2%) [29]. The reason for this disparity could be the difference in study period, since that study was done 20 years ago, when non-Hodgkin's lymphoma was much more common due to the high prevalence of HIV/AIDS at that time [29]. Some of these findings could also be due to the relative ease of obtaining and confirming these types of cancers in resource-limited settings, compared with other cancer types that may require more technical expertise and invasive procedures. We found that esophageal cancer is the third most common cancer among males in Ethiopia.
Although other cancer studies in Ethiopia have not reported similar findings, esophageal cancer is common in several East African countries [30]. Studies have reported that Malawi has the highest incidence rates in the world, while esophageal cancer is the leading cause of cancer mortality among men in Kenya [30]. The underlying risk factors vary greatly, from tobacco smoking and heavy alcohol drinking in Western countries to drinking hot mate in South America. The cause of the relatively high esophageal cancer rates in East Africa, including Ethiopia, is unclear, given the relatively low rates of tobacco smoking and heavy alcohol drinking [30,31]. One possible explanation that needs to be investigated further is the high prevalence of khat chewing among males in Ethiopia [31]. Khat is a stimulant containing the alkaloid cathinone, which has been linked to genetic tissue damage and esophageal cancer [32].

Overall, males had fewer advanced stage cancers at diagnosis than females (24.8% vs. 37.6%, P < 0.01). Among females, 59.0% of the breast cancers were at advanced stages at the time of diagnosis. This contrasts with lower advanced-stage breast cancer rates at diagnosis in developed countries, such as the recently reported 46.0% in the Netherlands [33]. This is likely to be multifactorial, including a lack of awareness about cancer among females, or the tendency to ignore what are thought to be minor symptoms because of low economic status or social stigma. This result is similar to a study published in the Lancet in 2013, in which patients often presented with aggressive features; it concluded that Ethiopian women with breast cancer often ignore lumps and usually seek treatment only when symptoms such as pain and itching occur [15]. Among females, the odds of having an advanced stage malignancy were 3.21 times as high in those who presented more than a year after the first symptom, compared with those who presented within 6 months, and breast cancer was more likely to be diagnosed at an advanced stage. This is comparable with a study on the influence of delay in the presentation and treatment of symptomatic breast cancer on survival, in which 32% of patients had symptoms for 12 or more weeks before their first hospital visit, and 32% of patients with delays of 12 or more weeks had locally advanced or metastatic disease, compared with only 10% of those with delays of less than 12 weeks (P < 0.0001) [34]. A significant proportion (41.7%) of cervical cancers was also diagnosed at advanced stages, which previous studies have attributed to several factors, including a lack of awareness of, and access to, appropriate health services, and the use of traditional remedies for early stages of the disease [32]. The cancer stage at presentation affects not only the treatment outcome and survival of patients, but also the costs of treatment and follow-up. Substantial progress could be made through effective public health education, along with HPV vaccination and HPV-based cervical cancer screening programs [35]. In both males and females, retinoblastoma and thyroid cancers had the longest intervals between the onset of symptoms and presentation.
This contrasts with a study conducted in England on risk factors for delay in the symptomatic presentation of cancer, in which patients with prostate (44%) and rectal (37%) cancers were most likely to delay and patients with breast cancer were least likely to delay (8%) [36]. In our study, we did find that prostate cancer was most likely to be diagnosed at a more advanced stage. By contrast, diagnosis of advanced stage prostate cancer has been declining in developed countries, such as the United Kingdom [37]. This is likely due to differences in cancer patterns and to the awareness of, and access to, appropriate medical care to diagnose symptomatic cancers, such as prostate, thyroid and breast cancers, in developed countries.

The study has several strengths. First, cancer diagnoses were confirmed by biopsy, as opposed to diagnoses made solely clinically or from patient surveys. Second, the study was conducted at Tikur Anbessa Hospital, the country's sole cancer referral center. Finally, a large representative sample was used over a 5-year period. One limitation is that the study was based on patients who were able to have a biopsy done. Many factors could influence this, including whether a patient was able to present to an appropriate medical facility and obtain a timely referral to Tikur Anbessa Hospital. It also depends on whether the patient was able to undergo a biopsy, and on the availability of equipment and technical expertise in a low-resource country. Finally, nearly half of the cancers were unstageable. These limitations could lead to underestimation of certain types of cancers and should be considered when interpreting the study findings; future studies should be designed to identify factors associated with common cancer occurrences in Ethiopia.

In conclusion, we found that cancers with effective screening tests, such as cervical, breast and colorectal cancers, are common in Ethiopia, and significant proportions of these were diagnosed at advanced stages, typically several months after the onset of symptoms. Timely access to preventive care, along with effective educational and screening strategies, is needed in Ethiopia to detect and treat cancer early.

**CONFLICTS OF INTEREST**

No potential conflicts of interest were disclosed.
###### Table 1. Baseline characteristics and distribution of cancer at diagnosis at Tikur Anbessa Hospital in Ethiopia, 2010 to 2014

| Site/type of cancer | Male, n (%) | Male age (yr) | Male TOP (mo) | Female, n (%) | Female age (yr) | Female TOP (mo) |
| --- | --- | --- | --- | --- | --- | --- |
| All | 254 (27.6) | 45.2 ± 19.1 | 9.7 ± 8.1 | 665 (72.4) | 44.6 ± 15.1 | 9.7 ± 7.1 |
| Breast | 14 (5.5) | 52.6 ± 10.8 | 11.6 ± 8.9 | 122 (18.3) | 40.6 ± 12.4 | 9.4 ± 6.9 |
| Cervical | - | - | - | 264 (39.7) | 48.8 ± 11.6 | 8.6 ± 5.9 |
| Prostate | 15 (5.9) | 62.3 ± 17.7 | 7.1 ± 5.4 | - | - | - |
| Ovarian | - | - | - | 47 (7.1) | 43.1 ± 14.4 | 8.2 ± 4.7 |
| Colorectal | 31 (12.2) | 44.9 ± 18.1 | 7.5 ± 4.5 | 29 (4.4) | 50.1 ± 15.3 | 8.4 ± 3.5 |
| Hematologic | 2 (0.8) | 58.0 ± 18.4 | 3.5 ± 0.7 | 3 (0.5) | 39.3 ± 23.8 | 14.3 ± 8.7 |
| Lung | 5 (2.0) | 46.8 ± 14.5 | 8.4 ± 3.0 | 3 (0.5) | 49.2 ± 11.0 | 7.0 ± 6.6 |
| Gastric | 16 (6.3) | 46.1 ± 9.9 | 7.5 ± 3.3 | 16 (2.4) | 49.2 ± 15.7 | 7.8 ± 4.3 |
| Esophageal | 23 (9.1) | 50.6 ± 15.2 | 8.3 ± 3.3 | 37 (5.6) | 51.8 ± 11.1 | 9.4 ± 4.7 |
| Liver | 1 (0.4) | 75.0 | 6.0 | 5 (0.8) | 32.0 ± 21.8 | 4.8 ± 1.1 |
| Skin | 15 (5.9) | 42.7 ± 19.9 | 8.4 ± 9.2 | 14 (2.1) | 49.1 ± 15.1 | 5.9 ± 2.5 |
| Bone and soft tissue | 42 (16.5) | 37.5 ± 19.6 | 10.2 ± 7.6 | 33 (5.0) | 34.6 ± 17.9 | 9.0 ± 7.4 |
| Retinoblastoma | 6 (2.4) | 3.8 ± 1.9 | 34.0 ± 11.8 | 9 (1.4) | 9.0 ± 15.5 | 23.3 ± 12.9 |
| Nephroblastoma | 2 (0.8) | 6.0 | 14.0 ± 14.1 | 2 (0.3) | 5.0 ± 1.4 | 9.0 ± 4.2 |
| Thyroid | 12 (4.7) | 42.1 ± 19.1 | 19.3 ± 15.1 | 31 (4.7) | 38.3 ± 17.3 | 23.9 ± 17.8 |
| Renal | 8 (3.2) | 45.4 ± 8.5 | 7.5 ± 3.6 | 9 (1.4) | 29.2 ± 18.7 | 8.8 ± 4.5 |
| ENT | 18 (7.1) | 48.2 ± 15.4 | 8.4 ± 6.4 | 1 (0.2) | 18.0 | 3.0 |
| Brain | 10 (3.9) | 24.6 ± 16.4 | 7.2 ± 2.9 | 4 (0.6) | 35.0 ± 16.1 | 5.5 ± 2.6 |
| Bladder | 20 (7.9) | 58.2 ± 11.7 | 6.9 ± 2.9 | 10 (1.5) | 50.3 ± 12.9 | 8.2 ± 3.3 |
| Endometrial and genital | 6 (2.4) | 50.0 ± 13.4 | 13.7 ± 6.7 | 22 (3.3) | 48.2 ± 15.1 | 12.9 ± 6.8 |
| Others | 8 (3.2) | 49.4 ± 18.1 | 8.8 ± 3.7 | 4 (0.6) | 29.8 ± 10.3 | 8.3 ± 6.1 |

Values are presented as number (%) or mean ± SD. ENT, ear, nose and throat; TOP, time of presentation from onset of symptoms in months. P < 0.01 using ANOVA.
###### Table 2. Stages of cancer at diagnosis at Tikur Anbessa Hospital in Ethiopia, 2010 to 2014

| Site/type of cancer | Sex | Unstaged (%) | Localized (%) | Regional (%) | Distant (%) | Advanced (regional and distant) (%) |
| --- | --- | --- | --- | --- | --- | --- |
| All | Male | 50.0 | 25.2 | 20.9 | 3.9 | 24.8 |
| All | Female | 45.0 | 17.4 | 25.6 | 12.0 | 37.6 |
| Breast | Male | 35.7 | 0.0 | 50.0 | 14.3 | 64.3 |
| Breast | Female | 36.1 | 4.9 | 32.8 | 26.2 | 59.0 |
| Cervical | Female | 46.2 | 12.1 | 31.1 | 10.6 | 41.7 |
| Prostate | Male | 6.7 | 46.7 | 46.7 | 0.0 | 46.7 |
| Ovarian | Female | 38.3 | 19.2 | 23.4 | 19.2 | 42.6 |
| Colorectal | Male | 54.8 | 16.1 | 22.6 | 6.5 | 29.0 |
| Colorectal | Female | 48.3 | 34.5 | 13.8 | 3.5 | 17.2 |
| Hematologic | Male | 100.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Hematologic | Female | 100.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Lung | Male | 40.0 | 0.0 | 60.0 | 0.0 | 60.0 |
| Lung | Female | 33.3 | 33.3 | 0.0 | 33.3 | 33.3 |
| Gastric | Male | 81.3 | 6.3 | 6.3 | 6.3 | 12.5 |
| Gastric | Female | 50.0 | 37.5 | 12.5 | 0.0 | 12.5 |
| Esophageal | Male | 56.5 | 17.4 | 17.4 | 8.7 | 26.1 |
| Esophageal | Female | 59.5 | 18.9 | 21.6 | 0.0 | 21.6 |
| Liver | Male | 100.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Liver | Female | 0.0 | 40.0 | 40.0 | 20.0 | 60.0 |
| Skin | Male | 40.0 | 53.3 | 6.7 | 0.0 | 6.7 |
| Skin | Female | 14.3 | 42.9 | 21.4 | 21.4 | 42.9 |
| Bone and soft tissue | Male | 64.3 | 14.3 | 21.4 | 0.0 | 21.4 |
| Bone and soft tissue | Female | 75.8 | 21.2 | 0.0 | 3.0 | 3.0 |
| Retinoblastoma | Male | 16.7 | 50.0 | 16.7 | 16.7 | 33.3 |
| Retinoblastoma | Female | 0.0 | 55.6 | 44.0 | 0.0 | 44.4 |
| Nephroblastoma | Male | 0.0 | 50.0 | 50.0 | 0.0 | 50.0 |
| Nephroblastoma | Female | 0.0 | 50.0 | 50.0 | 0.0 | 50.0 |
| Thyroid | Male | 41.7 | 41.7 | 16.7 | 0.0 | 16.7 |
| Thyroid | Female | 25.8 | 45.2 | 25.8 | 3.2 | 29.0 |
| Renal | Male | 37.5 | 37.5 | 25.0 | 0.0 | 25.0 |
| Renal | Female | 22.2 | 55.6 | 22.2 | 0.0 | 22.2 |
| ENT | Male | 66.7 | 11.1 | 16.7 | 5.6 | 22.2 |
| ENT | Female | 0.0 | 0.0 | 100.0 | 0.0 | 100.0 |
| Brain | Male | 10.0 | 80.0 | 10.0 | 0.0 | 10.0 |
| Brain | Female | 75.0 | 25.0 | 0.0 | 0.0 | 0.0 |
| Bladder | Male | 25.0 | 55.0 | 15.0 | 5.0 | 20.0 |
| Bladder | Female | 30.0 | 40.0 | 20.0 | 10.0 | 30.0 |
| Endometrial and genital | Male | 100.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Endometrial and genital | Female | 90.9 | 0.0 | 0.0 | 9.1 | 9.1 |
| Others | Male | 87.5 | 0.0 | 12.5 | 0.0 | 12.5 |
| Others | Female | 100.0 | 0.0 | 0.0 | 0.0 | 0.0 |

ENT, ear, nose and throat. Advanced = regional or distant metastasis. P = 0.04 among males; P < 0.01 among females and for all males versus females, using the chi-square test.
###### Table 3. Prevalence and risk factors associated with advanced stage cancers, stratified by sex, at diagnosis at Tikur Anbessa Hospital in Ethiopia, 2010 to 2014

| Variable | Male, advanced stage (%) | Male, OR (95% CI) | Female, advanced stage (%) | Female, OR (95% CI) |
| --- | --- | --- | --- | --- |
| Age < 30 yr | 28.6 | 1.00 | 33.7 | 1.00 |
| Age 30-59 yr | 25.2 | 0.60 (0.24-1.48) | 39.4 | 0.95 (0.52-1.75) |
| Age ≥ 60 yr | 21.1 | 0.37 (0.13-1.08) | 34.1 | 0.89 (0.44-1.81) |
| TOP < 6 mo | 22.4 | 1.00 | 36.7 | 1.00 |
| TOP 6-12 mo | 29.1 | 1.35 (0.69-2.68) | 34.7 | 1.03 (0.71-1.49) |
| TOP > 12 mo | 16.7 | 0.40 (0.09-1.74) | 52.6 | 3.21 (1.69-6.10) |
| Breast | 64.3 | 9.73 (2.31-40.92) | 59.0 | 1.93 (1.23-3.03) |
| Cervical | N/A | N/A | 41.7 | 1.00 |
| Prostate | 46.7 | 5.22 (1.26-21.60) | N/A | N/A |
| Ovarian | N/A | N/A | 42.6 | 1.07 (0.56-2.03) |
| Colorectal | 29.0 | 1.78 (0.58-5.52) | 17.2 | 0.30 (0.11-0.83) |
| Hematologic | 0.0 | -- | 0.0 | -- |
| Lung | 60.0 | 5.67 (0.79-40.90) | 33.3 | 0.51 (0.04-6.26) |
| Gastric | 12.5 | 0.62 (0.11-3.51) | 12.5 | 0.20 (0.04-0.90) |
| Esophageal | 26.1 | 1.60 (0.45-5.65) | 21.6 | 0.39 (0.17-0.90) |
| Liver | 0.0 | -- | 60.0 | 2.26 (0.36-14.14) |
| Skin | 6.7 | 0.33 (0.04-3.02) | 42.9 | 1.16 (0.39-3.47) |
| Bone and soft tissue | 21.4 | 1.00 | 3.0 | 0.04 (0.01-0.29) |
| Retinoblastoma | 33.3 | 3.56 (0.36-35.36) | 44.4 | 0.52 (0.11-2.43) |
| Nephroblastoma | 50.0 | 4.53 (0.22-8.80) | 50.0 | 1.44 (0.08-24.82) |
| Thyroid | 16.7 | 1.05 (0.18-6.09) | 29.0 | 0.30 (0.12-0.78) |
| Renal | 25.0 | 1.40 (0.22-8.80) | 22.2 | 0.36 (0.07-1.86) |
| ENT | 22.2 | 1.37 (0.34-5.57) | 100.0 | -- |
| Brain | 10.0 | 0.32 (0.04-2.90) | 0.0 | -- |
| Bladder | 20.0 | 1.39 (0.33-5.88) | 30.0 | 0.66 (0.17-2.60) |
| Endometrial and genital | 0.0 | -- | 9.1 | 0.11 (0.02-0.48) |
| Others | 12.5 | 0.58 (0.06-5.63) | 0.0 | -- |

TOP, time of presentation from onset of symptom in months; ENT, ear, nose and throat; N/A, not applicable; --, not enough sample size to compute OR. Advanced stage = regional or distant metastasis.

###### Table 4. Distribution of cancer at diagnosis by sex and age group at Tikur Anbessa Hospital in Ethiopia, 2010 to 2014

| Site/type of cancer | Male < 30 yr (%) | Male 30-59 yr (%) | Male ≥ 60 yr (%) | Female < 30 yr (%) | Female 30-59 yr (%) | Female ≥ 60 yr (%) |
| --- | --- | --- | --- | --- | --- | --- |
| All | 22.1 | 50.0 | 27.9 | 13.8 | 66.8 | 19.4 |
| Breast | 0.0 | 71.4 | 28.6 | 14.8 | 75.4 | 9.8 |
| Cervical | - | - | - | 2.7 | 74.6 | 22.7 |
| Prostate | 13.3 | 13.3 | 73.3 | - | - | - |
| Ovarian | - | - | - | 19.2 | 65.9 | 14.9 |
| Colorectal | 19.4 | 54.8 | 25.8 | 3.5 | 68.9 | 14.9 |
| Hematologic | 0.0 | 50.0 | 50.0 | 33.3 | 33.3 | 33.3 |
| Lung | 20.0 | 60.0 | 20.0 | 0.0 | 66.7 | 33.3 |
| Gastric | 0.0 | 87.5 | 12.5 | 12.5 | 56.3 | 31.2 |
| Esophageal | 8.7 | 56.5 | 34.8 | 0.0 | 64.9 | 35.1 |
| Liver | 0.0 | 0.0 | 100.0 | 40.0 | 60.0 | 0.0 |
| Skin | 26.7 | 53.3 | 20.0 | 7.1 | 64.3 | 28.6 |
| Bone and soft tissue | 45.2 | 40.5 | 14.3 | 45.5 | 39.4 | 15.1 |
| Retinoblastoma | 100.0 | 0.0 | 0.0 | 88.9 | 11.1 | 0.0 |
| Nephroblastoma | 100.0 | 0.0 | 0.0 | 100.0 | 0.0 | 0.0 |
| Thyroid | 33.3 | 41.7 | 25.0 | 38.7 | 51.6 | 9.7 |
| Renal | 0.0 | 87.5 | 12.5 | 44.4 | 55.6 | 0.0 |
| ENT | 11.1 | 61.1 | 27.8 | 100.0 | 0.0 | 0.0 |
| Brain | 70.0 | 30.0 | 0.0 | 50.0 | 50.0 | 0.0 |
| Bladder | 0.0 | 40.0 | 60.0 | 10.0 | 60.0 | 30.0 |
| Endometrial and genital | 0.0 | 66.7 | 33.3 | 18.2 | 50.0 | 31.8 |
| Others | 12.5 | 50.0 | 37.5 | 50.0 | 50.0 | 0.0 |

Values are presented as percent only. ENT, ear, nose and throat. P < 0.01.
###### Table 5. Selected high-incidence cancers (bone and soft tissue and colorectal cancer in males; cervical and breast cancer in females) and related factors at diagnosis at Tikur Anbessa Hospital in Ethiopia, 2010 to 2014

Male, bone and soft tissue cancer
- TOP (mo): < 6, 33.3%; 6-12, 52.4%; > 12, 14.3%
- Chief presenting complaints: difficulty walking, 52.4%; soft tissue swelling, 11.9%; difficulty swallowing, 2.4%; others, 33.3%

Male, colorectal cancer
- TOP (mo): < 6, 54.8%; 6-12, 41.9%; > 12, 3.2%
- Chief presenting complaints: constipation, 45.2%; abdominal pain, 12.9%; diarrhea, 12.9%; others, 29.0%

Female, cervical cancer
- TOP (mo): < 6, 42.4%; 6-12, 49.6%; > 12, 12.3%
- Chief presenting complaints: vaginal bleeding, 54.9%; post-coital bleeding, 17.8%; abdominal pain, 5.3%; others, 22.0%

Female, breast cancer
- TOP (mo): < 6, 39.3%; 6-12, 48.4%; > 12, 12.3%
- Chief presenting complaints: breast lump and swelling, 45.1%; ulcerative breast lesion, 37.7%; pain, 6.6%; others, 10.6%

Values are presented as percent only. TOP, time of presentation from onset of symptom in months. P < 0.01.
Why do parents play an important role?

The proper role of the parent is to provide encouragement, support, and access to activities that enable the child to master key developmental tasks. A child's learning and socialization are most influenced by their family, since the family is the child's primary social group. Happy parents raise happy children.

What are the roles of responsible parents?

10 Things Responsible Parents Do (and 5 They Don't)

- They teach more with actions (and examples) and less with words. …
- They encourage more and criticize less. …
- They spend quality time with their children. …
- They act as responsible individuals themselves. …
- They encourage dialogues with the kids. …
- They stay connected as a couple.

What is the most important role in the family?

The primary function of the family is to ensure the continuation of society, both biologically through procreation and socially through socialization. From the point of view of the parents, the family's primary purpose is procreation: the family functions to produce and socialize children.

What is the role of parents and the community in the life of a child?

Parents and community members can adopt a variety of roles and relationships with schools. Three of the most critical roles they can assume are:

- becoming primary educational resources for their children;
- becoming supporters and/or advocates for children through site-based school restructuring efforts; and

What role do parents play in a child's brain development?

Parents and other caregivers can support healthy brain growth by speaking to, playing with, and caring for their child. Children learn best when parents take turns when talking and playing, and build on their child's skills and interests.

What are four basic responsibilities of parents?

Parental Responsibilities

- Provide an environment that is SAFE. …
- Provide your child with BASIC NEEDS. …
- Provide your child with SELF-ESTEEM NEEDS. …
- Teach your child MORALS and VALUES. …
- Develop MUTUAL RESPECT with your child. …
- Provide DISCIPLINE which is effective and appropriate. …
- Involve yourself in your child's EDUCATION.

What are good qualities of a parent?

Traits of Good Parents

- Guide and Support, Not Push and Demand.
- Let Kids Be Independent.
- Remember, Kids Are Always Watching.
- Never Be Mean, Spiteful, or Unkind.
- Show Your Kids You Love Them.
- Apologize for Your Mistakes.
- Discipline Effectively.
- See Your Child for Who They Are.
https://connectedkansaskids.com/maternity/how-parents-play-an-important-role-in-childrens-life.html
The chapter surveys the state-of-the-art deep learning-based threat intelligence for attack detection. The frontiers in deep learning, namely Meta-Learning and Federated Learning, along with their challenges, have also been included in the chapter. We have proposed, trained, and tested a deep CNN-LSTM architecture for CAV threat intelligence, and assessed and compared the performance of the proposed model against other deep learning algorithms such as DNN, CNN, and LSTM. Our results indicate the superiority of the proposed model, although DNN and 1D-CNN also achieved more than 99% accuracy, precision, recall, F1-score, and AUC on the CAV-KDD dataset. The good performance of the deep CNN-LSTM comes at the cost of increased model complexity and cumbersome hyperparameter tuning. Still, there are open challenges to deep learning adoption in the CAV cybersecurity paradigm, owing to the lack of properly developed protocols and policies, poorly defined privileges between stakeholders, costly training, adversarial threats to the model, and poor generalizability of the model on out-of-distribution data.
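For readers who want a concrete picture of such an architecture, below is a minimal sketch of a CNN-LSTM intrusion-detection classifier written with TensorFlow/Keras. It is an illustration only, not the authors' implementation: the framework choice, the layer sizes, the 41-feature input (a KDD-style assumption), and the binary label set are all assumptions, since the abstract specifies none of them.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical dimensions: the CAV-KDD preprocessing is not described in the
# abstract, so a KDD-style 41-feature record and binary labels are assumed.
NUM_FEATURES = 41
NUM_CLASSES = 2  # benign vs. attack

def build_cnn_lstm(num_features: int, num_classes: int) -> tf.keras.Model:
    """CNN front end for local feature extraction, LSTM for sequence modeling."""
    model = models.Sequential([
        # Treat each record as a length-`num_features` sequence with 1 channel.
        layers.Conv1D(64, kernel_size=3, padding="same", activation="relu",
                      input_shape=(num_features, 1)),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(128, kernel_size=3, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.LSTM(64),       # summarizes the pooled feature maps sequentially
        layers.Dropout(0.3),   # mild regularization
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm(NUM_FEATURES, NUM_CLASSES)
model.summary()
# Training would then be: model.fit(X_train, y_train, validation_split=0.1, ...)
# with precision, recall, F1-score, and AUC computed on a held-out test split.
```

The pairing reflects the usual motivation for such hybrids: the convolutions extract local feature patterns cheaply, while the recurrent layer captures dependencies across them, at the price of the extra depth and hyperparameter tuning the abstract notes.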
https://spywarenews.com/index.php/2021/09/23/a-deep-learning-perspective-on-connected-automated-vehicle-cav-cybersecurity-and-threat-intelligence-arxiv2109-10763v1-cs-cr/security-world-news/admin/?doing_wp_cron=1635127729.0046479701995849609375
Best Books About Retirement Planning
The rise of smartphones and WiFi has made it easy for people to access information about financial matters. On the other hand, books remain an excellent resource for learning about personal finance. Besides personal finance, some books cover various personal finance...

3 Tips for Paying Off Credit Card Debt
Credit cards are an easy way to fall into debt, especially for those who don't stick to a budget and limit their purchases. At first, a little debt may not seem like a problem, but when it begins to pile up and the monthly payments come due, it can become...

How Mental Health Can Impact Your Finances
Most individuals overlook mental health when it comes to personal finances. However, your mental health can have a massive impact on your financial stability. Here are a few ways that mental health can affect your finances: Excessive Spending If you suffer from...

4 Strategies for a Smooth Transition from Two Sources of Income to One
Life happens. Sometimes a family has to transition from two sources of income down to one. This shift could be the right decision for the family at the time. Perhaps a family decides to homeschool their children. One person could lose their job. In any event, making...

How to Set Your Financial Goals in 2022
Do you know what you want your future to look like? Do you want to buy a big home with a yard? Do you want to pay off your student loans? Are you ready to get out of credit card debt? To accomplish any of these, you will need to set financial goals. Goals need to be...

Creating a Budget For the New Year
Many people take the opportunity to create a new budget as part of their new year's resolution. A budget can help you keep your spending in check and ensure that you're living within your means. There are several different ways to create a budget, and the best way to...

Financial Preparedness for the Year 2022
To have a chance at achieving financial freedom, one needs to invest now for a brighter future. This includes setting a budget with specific amounts allocated to investments, savings, and expenses. For instance, in the current year, 2021, there are many ways...

Recession-Proofing Your Retirement Plan 2021
Recessions are an inevitable part of economic cycles. Sometimes they're expected far in advance, but other times they show up unexpectedly. Their level and duration are also somewhat variable. What often happens is that these so-called 'market corrections' wind up...

Budgeting During Retirement
Retirement can be one of the most enjoyable times of your life, allowing time for many activities you may otherwise not have been able to pursue. However, even during retirement, it's important to create and stick to a budget in order to ensure that you will be able...

Preparing for the Unexpected in Retirement
Retirees may find themselves experiencing unforeseen and difficult circumstances at one point in time or another. Certain situations can leave them wondering what they could have done to better prepare themselves to handle unexpected occurrences, especially when it...
https://mattwalkerkansas.com/blog/
Global Trends 2021: Aftershocks and continuity

Most people across 25 countries now agree it is more important that businesses fight climate change than pay the right amount of tax. Seven in ten globally now say they tend to buy brands that reflect their personal values and that business leaders have a responsibility to speak out on social issues. Around the world, agreement on the urgency of dealing with climate change continues to rise, but many other social attitudes hold steady, despite COVID-19.

The Ipsos Global Trends Study 2021 is the latest instalment of the wide-ranging Ipsos survey series that seeks to understand how global values are shifting. This year's update polls the public in 25 countries around the world, ranging from developed countries such as the US, UK and Italy to emerging markets in Asia such as China and Thailand, as well as covering important new markets like Kenya and Nigeria for the first time.

The survey reveals a world where public attitudes and values have changed less than might be expected under pressure from the pandemic. The changes we do see in the data tend to be driven by long-running trends in public opinion that pre-date COVID-19.

In this 30-minute podcast, incoming Ipsos Global CEO Ben Page discusses the findings from the latest iteration of our Global Trends Study, looking back at how the pandemic has changed our attitudes, and looking forward to what this means for the coming year and beyond.

Climate change and the environment

Concern about the climate was identified as the strongest global value in the Global Trends 2019 survey, and its position has strengthened over the pandemic. Across the 25 markets, almost two thirds (63%) say it is more important to them that companies do as much as they can to reduce harm to the environment than it is that companies pay the right amount of tax, while just a quarter say tax is more important than climate (27%). Emerging markets are most likely to see dealing with climate change as the imperative: more than eight in ten Colombians (82%), as well as a similar proportion of Chinese and Brazilian citizens (78%), say companies prioritising the environment is more important to them. By contrast, the balance is closest in Britain and Denmark, where four in ten say companies paying the correct amount of tax is more important, although in these countries too half see the environment as the priority.

Brand purpose

The importance of brands aligning with personal values has accelerated over the pandemic: this year seven in ten across the 25 markets agree that they tend to buy brands that reflect their personal values (70%). This link is strongest in Nigeria (91%), China (86%), Kenya and the Philippines (both 85%), while Mexicans and Danes are the least likely to agree (51% each). Since 2013, the importance of brands being associated with values has grown across a number of key markets, with 17 percentage point increases in agreement in Britain and France, and a rise of 16 points in the US. Similarly, there is high interest in business leaders getting involved in social issues, with more than eight in ten of the public in Nigeria, the Philippines, Singapore, India, Kenya and South Africa agreeing that business leaders have a responsibility to speak out on social and political issues facing their country. Even in France and the US, where agreement is lowest, just over half agree that this is desirable (51%).
However, there remain strong tensions in the public's interest in businesses showing wider social purpose, with 45% of the sample across all countries agreeing that "I don't care if a brand is ethically or socially responsible, I just want them to make good products", including 47% of Americans and four in ten Germans and Britons (both 41%).

Faith in science

Rising belief in science is another long-term trend. Six in ten of the 25-market global sample agree that eventually all medical conditions will be curable (60%), a figure which has been rising since the start of the Global Trends series in 2013 and which continued to rise through the pandemic as vaccines were developed in record time. Optimism is again higher among those in emerging markets, with nine in ten Indonesians (90%) and over eight in ten of those from Thailand and the Philippines (84% and 82%) agreeing that science will conquer all diseases. The French are the least convinced this is the case; almost half disagree (47%) compared with four in ten who agree (39%). However, even here there is a longer-term increase in positivity: in 2013, just a quarter agreed that all medical conditions would eventually become curable.

Attitudes to data and technology

Public opinion remains firmly against social media firms: 84% of the public across all countries agree that social media companies have too much power. While this figure has risen somewhat in almost all countries between 2019 and 2021, the most notable increase has been in China, where the proportion who think social media firms have too much power has risen from 67% to 83%.

However, attitudes towards data have not become more anxious over this time. Instead, the survey has recorded rising apathy and even openness to sharing personal data. For instance, the same proportion who are concerned about the power of social media (84%) say it is inevitable we will all lose some privacy in the future because of the power of new technology. Agreement with this statement has risen in a number of countries since 2013, including a 12-point rise in France and increases of close to ten points in China (+9), Italy (+9), Canada (+8) and Britain (+7).

Ben Page, incoming Chief Executive of Ipsos, said:

Globally we can see concern about climate change and demands for business to step up in general continue to grow, but also that the pandemic has not fundamentally changed human values and priorities. Existing trends like rising apathy about data privacy and trust in science are confirmed. The new normal looks more like the old normal than one might have expected last year.
https://www.ipsos.com/en/global-trends-2021-aftershocks-and-continuity
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

During evolution, plants have developed mechanisms to cope with and adapt to different types of stress, including microbial infection. Once the stress is sensed, signaling pathways are activated, leading to the induced expression of genes with different roles in defense. Mosses (Bryophytes) are non-vascular plants that diverged from flowering plants more than 450 million years ago, allowing comparative studies of the evolution of defense-related genes and defensive metabolites produced after microbial infection. The ancestral position among land plants, the sequenced genome and the feasibility of generating targeted knock-out mutants by homologous recombination have made the moss Physcomitrella patens an attractive model to perform functional studies of plant genes involved in stress responses. This paper reviews the current knowledge of inducible defense mechanisms in P. patens and compares them to those activated in flowering plants after pathogen assault, including the reinforcement of the cell wall, ROS production, programmed cell death, activation of defense genes and synthesis of secondary metabolites and defense hormones. The knowledge generated in P. patens, together with comparative studies in flowering plants, will help to identify key components in plant defense responses and to design novel strategies to enhance resistance to biotic stress.

Plants are in permanent contact with a variety of microbial pathogens, such as fungi, oomycetes, bacteria and viruses. To ward off these pathogens, plants must recognize the invaders and activate fast and effective defense mechanisms that arrest the pathogen. Perception of the pathogen is central to the activation of a successful plant defense response. Plant cells are capable of sensing evolutionarily conserved microbial molecular signatures, collectively named pathogen-associated molecular patterns (PAMPs) or microbe-associated molecular patterns (MAMPs), through plant pattern recognition receptors (PRRs) [1–3]. MAMPs are molecules that are essential for microbial fitness and survival and are conserved between different species, making them an efficient way for the plant to sense the presence of pathogens. Perception of PAMPs by PRRs activates an immune response, referred to as PAMP-triggered immunity (PTI), which provides protection against non-host pathogens and limits disease caused by virulent pathogens [4]. Pathogens adapted to their host plants can deliver virulence effector proteins into plant cells, which target key PTI components and inhibit plant defense [5–9]. In turn, plants have evolved resistance (R) proteins to detect, directly or indirectly, the effector proteins and trigger effector-triggered immunity (ETI), which is highly specific and often accompanied by the hypersensitive response (HR) and systemic acquired resistance (SAR). An additional surveillance system for the presence of pathogens is the release or production of endogenous damage-associated molecular patterns (DAMPs), including plant cell wall and cutin fragments that are released by the enzymatic action of pathogens and also trigger immune responses [3,10,11].
Thus, plant immunity can be divided into two phases: PTI triggered by PAMPs and ETI triggered by effectors, the difference being that the immune responses activated in ETI are faster and amplified compared to those in PTI [4,12]. The ETI and PTI pathways result in the activation of an overlapping set of downstream immune responses, suggesting that there is a continuum between PTI and ETI [13]. These downstream defense responses include the activation of multiple signaling pathways and transcription of specific genes that limit pathogen proliferation and/or disease symptom expression. In addition, antimicrobial compounds are produced, reactive oxygen species (ROS) accumulate, cell wall defense mechanisms are activated and defense hormones, such as salicylic acid (SA), ethylene and jasmonic acid (JA), accumulate [4,14–17]. During the last few years, some progress has been made on the defense mechanisms activated in mosses (Bryophytes) during pathogen assault. The moss Physcomitrella patens (P. patens) is an interesting model plant for functional studies of genes involved in stress responses, because its genome has been sequenced, targeted knock-out mutants can be generated by homologous recombination and it has a dominant haploid phase during its life cycle [18–20]. Mosses are non-vascular plants that diverged from flowering plants more than 450 million years ago [21]. P. patens, together with the sequenced vascular spikemoss Selaginella moellendorffii [22], provides an evolutionary link between green algae and angiosperms, allowing comparative studies of the evolution of plant defense mechanisms and gene function. In nature, mosses are infected with microbial pathogens, resulting in chlorosis and necrosis of plant tissues [23–25]. Necrotrophic pathogens are capable of infecting and colonizing P. patens tissues, leading to the activation of defense responses [26–32]. Most likely, P. patens uses pathogen-recognition mechanisms similar to those of flowering plants, since chitin (a PAMP) [31] and probably cell wall fragments generated by the action of cell wall degrading enzymes from bacterial pathogens (DAMPs) [26] are sensed by P. patens cells, and typical PRR and R gene homologues are present in its genome [33–35]. In addition, many of the cellular and molecular defense reactions activated in P. patens are similar to those reported in flowering plants. The present paper reviews the current knowledge of defense responses activated in P. patens and compares them to those activated in flowering plants after pathogen assault.

2. Broad Host Range Pathogens Infect both Mosses and Flowering Plants

Broad host range pathogens are capable of infecting a variety of plant species, including flowering plants and mosses. These successful pathogens have adapted and developed effective invasion strategies, causing disease by producing different compounds, including enzymes and toxins, that interfere with metabolic targets common to many plant species. In this review, we focus on the broad host range fungus Botrytis cinerea, the bacterium Pectobacterium carotovorum subsp. carotovorum and the oomycetes Pythium irregulare and Pythium debaryanum. These are necrotrophic pathogens that actively kill host tissue prior to or during colonization and thrive on the contents of dead or dying cells [36]. B. cinerea is a necrotrophic fungal pathogen that attacks over 200 different plant species [37] and penetrates plant tissues by producing toxins and multiple cell wall degrading enzymes (CWDEs), including pectinolytic enzymes and cutinases, that kill the host cells, causing grey mould disease in many crop plants [38]. B. cinerea is primarily a pathogen of dicotyledonous plants, but some monocot species, including onions and lilies, are also infected [39,40]. B. cinerea also infects P. patens plants, producing maceration of the tissues and browning of stems and juvenile protonemal filaments [26,28]. P.c. carotovorum (ex Erwinia carotovora subsp. carotovora) causes soft rot in a wide range of plant species, including vegetables, potato and Arabidopsis [41]. P.c. carotovorum is often described as a brute-force pathogen, because its virulence strategy relies on plant CWDEs, including cellulases, proteases and pectinases, which disrupt host cell integrity and promote tissue maceration [42,43]. Cell-free culture filtrate (CF) containing CWDEs from P.c. carotovorum produces symptoms (Figure 1) and defense gene expression similar to those caused by P.c. carotovorum infection, demonstrating that CWDEs are the main virulence factors [43–48]. In addition, these CWDEs release cell wall fragments, including oligogalacturonides, that act as DAMPs, activating an immune response in plant cells evidenced by the activation of defense-related genes and phytoalexin accumulation [44,49–51]. Recently, it was shown that two strains of P.c. carotovorum, SCC1 (harboring the harpin-encoding hrpN gene, whose product is an elicitor of the hypersensitive response (HR) [52]) and the HrpN-negative strain SCC3193 [53], infect and cause maceration in leaves of P. patens [26]. Green fluorescent protein (GFP)-labeled P.c. carotovorum was detected in the apoplast, as well as within invaded P. patens leaf cells (Figure 2). Treatments with CFs of these strains also caused symptom development in moss tissues, evidenced by tissue maceration and browning, which was more severe with the HrpN-positive strain, suggesting that harpin may contribute to P.c. carotovorum virulence [26]. Pythium species are soil-borne vascular pathogens, which infect plants through the root tissues and, under humid conditions, cause pre-/post-emergence damping-off and root and stem rots in important crop species. Pythium infects young host tissues, and maceration is caused by both toxins and cell wall degrading enzymes, such as pectinases, hemicellulases, cellulases and proteinases [54,55]. P. irregulare and P. debaryanum infect P. patens, producing tissue maceration and browning of young protonemal tissues, stems and leaves [29]. In nature, Pythium ultimum infects mosses, causing the formation of areas of dead moss tissue [24]. In all these moss-pathogen interactions, multiple defense reactions are activated in plant cells, although they are not sufficient to stop infection, and after a few days, moss tissues are degraded, leading to plant decay.

3. Activation of Cell Wall Associated Defense Responses

Pathogens are capable of penetrating the plant cell wall and gaining access to cellular nutrients. Plant cells have developed pre-invasive structural defenses, including the cuticle and modifications of the cell wall, that serve as barriers to the advance of potential pathogens [38,56]. Modification of the plant cell wall is an important defense mechanism operating in the defense response of flowering plants against necrotrophs [57,58].
Reinforcement of the cell wall involves accumulation of phenolic compounds, ROS and callose deposition at attempted penetration sites, making the cell wall less vulnerable to degradation by CWDEs. Callose is a high-molecular-weight β-(1,3)-glucan polymer that is usually associated, together with phenolic compounds, polysaccharides and antimicrobial proteins, with cell wall appositions, called papillae, which are proposed to be effective barriers induced at the sites of pathogen attack [59,60]. Callose depositions are formed during early stages of pathogen invasion to inhibit pathogen penetration and are sites of accumulation of antimicrobial secondary metabolites [61]. Callose deposition plays a role in the defense response of Arabidopsis thaliana against P. irregulare, since the callose synthase mutant pmr4 is more susceptible to this oomycete than wild-type plants [62]. Phenolic compounds are also incorporated into cell walls of Pythium-infected tissues of flowering plants [63]. Similarly, the P. patens defense response against P. irregulare and P. debaryanum involves the accumulation of phenolic compounds, which were observed in the entire cell wall of infected cells (Figure 3) [29]. In contrast to P. irregulare-infected Arabidopsis plants [62], callose-containing wall appositions were usually not detected in Pythium-infected moss tissues [29]. However, callose depositions were observed when an old Pythium inoculum was used and colonization was not extensive, showing that these cell wall appositions can be formed at attempted infection sites, halting the progress of the invading pathogen [29]. Modification of the plant cell wall by the incorporation of phenolic compounds is also an important defense mechanism in the response of flowering plants against B. cinerea [57,58]. Increased activity of type III cell wall peroxidases, which probably influence the degree of crosslinking, resulted in enhanced resistance to B. cinerea [64]. Upon B. cinerea infection, P. patens incorporates phenolic compounds into the cell wall and increases expression of dirigent (DIR)-encoding gene(s) [28]. DIR proteins are thought to mediate the coupling of monolignol plant phenols to yield lignans and lignins [65], and it has been suggested that they participate in the defense response against pathogens [66,67]. Consistently, enzymes involved in monolignol biosynthesis, including putative cinnamoyl-CoA reductases, increase in Arabidopsis plants inoculated with B. cinerea [68]. The genome of P. patens contains orthologs of all the core lignin biosynthetic enzymes with the exception of ferulate 5-hydroxylase (F5H), which converts G (guaiacyl) monolignol to S (syringyl) monolignol [69]. The occurrence of lignins in bryophytes is still controversial; instead, mosses may have wall-bound phenolics that resemble lignin [70,71]. The lack of genuine lignin, together with the absence of S monolignols in P. patens, could contribute to the high susceptibility observed in Pythium- and B. cinerea-infected moss tissues [28,29]. Recently, Lloyd and coworkers suggested that syringyl-type lignols in particular are important for successful defense of flowering plants against B. cinerea [72].

4. ROS Accumulation and Programmed Cell Death in Pathogen-Infected and Elicitor-Treated Plant Tissues

The production of ROS is one of the earliest plant cell responses following pathogen recognition and is involved in cell wall strengthening via cross-linking of glycoproteins, defense signaling and induction of the hypersensitive response [73]. Plant cells produce ROS after B. cinerea attack, and this ROS production assists fungal colonization, since treatments with antioxidants suppress fungal infection [57]. The aggressiveness of different B. cinerea isolates correlates with the amount of H2O2 and hydroxyl radicals present in leaf tissues during infection [74]. In addition to the increased ROS production generated by the host plant as part of its defense mechanism, B. cinerea itself produces ROS, including hydrogen peroxide, which accumulates in germinating conidia during the early steps of tissue infection [75,76]. Inactivation of the major B. cinerea H2O2-generating superoxide dismutase (SOD) retarded the development of disease lesions, indicating that this enzyme is a virulence factor leading to the accumulation of phytotoxic levels of hydrogen peroxide in plant tissues [77]. Thus, ROS production is an important component of B. cinerea virulence, and increased levels of ROS in plant cells contribute to host cell death and favor fungal infection [78]. ROS production also increased in moss tissues after B. cinerea, P. irregulare and P. debaryanum infection (Figure 3) [28,29]. Single cells respond rapidly to contact with B. cinerea hyphae by generating ROS, suggesting that, as in vascular plants [78,79], the oxidative burst is probably induced before and during B. cinerea invasion. P.c. carotovorum elicitor treatment also increases ROS production in P. patens tissues (Ponce de León et al., unpublished results), similarly to flowering plants [80]. In addition, the fungal elicitors chitin and chitosan caused an oxidative burst in P. patens cells [30,32]. The importance of ROS production as a defense mechanism against microbial pathogens in mosses was demonstrated with the P. patens class III peroxidase knock-out mutant Prx34, which showed enhanced susceptibility to fungal pathogens compared to wild-type P. patens plants [30]. This mutant is unable to generate an oxidative burst after elicitor treatment. While a saprophytic fungal isolate of the genus Irpex and a pathogenic isolate of Fusarium sp. caused only mild symptom development in wild-type plants, hyphal growth was abundant and symptoms were severe in Prx34 knock-out plants, leading to moss decay [30]. Class III peroxidases from flowering plants are known to have antifungal activity [81], and recently it was shown that the secreted effector Pep1 from the fungus Ustilago maydis directly interacts with a class III peroxidase from maize, suppressing the plant defense response by interfering with ROS production [82]. The functional relevance of the Pep1-peroxidase (POX12) interaction was demonstrated with POX12-silenced plants, which were infected by the pep1 deletion mutant, indicating that inhibition of this peroxidase by Pep1 is crucial for U. maydis infection [82]. In addition, PpTSPO1 moss knock-out mutants, which are impaired in mitochondrial protoporphyrin IX uptake and produce elevated levels of intracellular ROS [83], exhibited increased susceptibility to fungal necrotrophic pathogens, including Irpex sp. and Fusarium avenaceum, suggesting that PpTSPO1 controls redox homeostasis, which is necessary for efficient resistance against pathogens [32].
Cell death plays different roles in the plant response to biotrophs and necrotrophs. The hypersensitive response (HR) is a type of programmed cell death (PCD) with features of two recently described types of cell death, vacuolar cell death and necrotic cell death [84]. HR cell death contributes to resistance to biotrophic pathogens by confining the pathogen and limiting its growth [4]. Biotrophic pathogens actively suppress the HR by using effectors. Pseudomonas syringae and Xanthomonas campestris deliver 15 to 30 effectors into host cells using type III secretion systems to suppress PTI and ETI, including the HR [85]. In contrast, necrotrophic pathogens actively stimulate the HR, which enhances tissue colonization and host susceptibility. Plant mutants with enhanced cell death have increased resistance to biotrophic pathogens, but higher susceptibility to necrotrophic fungi [86,87]. B. cinerea produces nonspecific phytotoxic metabolites, which contribute to cell death on different plant hosts [76]. As part of its invasion strategy, B. cinerea promotes PCD in plant cells [78], and studies in flowering plants suggest that B. cinerea needs the HR to achieve full pathogenicity [78,88]. Arabidopsis mutants with an accelerated cell death response are more susceptible to B. cinerea, while mutants with reduced or delayed cell death are generally more resistant [89]. P. patens also activates an HR-like response after B. cinerea colonization, evidenced by protoplast shrinkage, accumulation of ROS and autofluorescent compounds, chloroplast breakdown and TUNEL (terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling)-positive nuclei of infected cells [26,28]. Pathogen-infected P. patens tissues also showed other characteristics of PCD, including nucleus condensation and DNA fragmentation, the presence of nuclease activities and the formation of cytoplasmic vacuoles [31]. Treatments with elicitors, such as CFs of P.c. carotovorum and chitosan, also provoked cell death in P. patens tissues [26,31]. Harpin proteins from Pectobacterium sp. [90,91], Xanthomonas axonopodis [92] or Pseudomonas syringae [93] elicit the HR in flowering plants. Consistently, moss cells treated with the CF of the P.c. carotovorum harpin-positive strain SCC1 showed hallmarks of PCD, including protoplast shrinkage, accumulation of autofluorescent compounds and chloroplast breakdown, while none of these features were detectable in CF treatments with the P.c. carotovorum harpin-negative strain SCC3193 [26]. Chitosan induces ROS production and cell death with hallmarks of PCD in young protonemal tissues and gametophores [31]. Interestingly, genes involved in plant PCD, such as those encoding proteases, deoxyribonucleases and ribonucleases and the antiapoptotic Bax Inhibitor-1 (BI-1), are induced after pathogen or elicitor treatment of P. patens [31]. The most convincing evidence that genetically programmed cell death occurs in moss cells in response to some pathogens comes from studies showing that transgenic P. patens plants overexpressing BI-1 are more resistant to necrotrophic fungal pathogens [31].

5. Induced Expression of Defense-Related Genes and Synthesis of Metabolites

Perception of a pathogen by a plant triggers rapid defense responses via multiple signaling pathways that lead to the induced expression of genes with different roles in defense.
These include genes encoding functionally diverse pathogenesis-related (PR) proteins, transcription factors and enzymes involved in the production of metabolites (e.g., phenylpropanoids) and hormones [15,94,95]. Transcriptional reprogramming occurs rapidly after pathogen infection, and in the case of the Arabidopsis-B. cinerea interaction, a high-resolution temporal analysis demonstrated that approximately one-third of the Arabidopsis genome is differentially expressed during the initial stages of infection [96]. As expected, P. patens also senses the presence of pathogens and elicitors and responds rapidly by activating defense gene expression. B. cinerea, P. irregulare and P. debaryanum induce the expression of PAL (phenylalanine ammonia-lyase), CHS (chalcone synthase) and LOX1 (lipoxygenase) in P. patens tissues [26,28,29]. PAL is a key enzyme in the synthesis of phenylpropanoids, including lignin monomers, phytoalexin antibiotics and SA, while CHS is the first enzyme in the synthesis of flavonoids [95]. LOXs are enzymes involved in the synthesis of oxygenated fatty acids (oxylipins), including JA and aldehydes, which have important functions in plant defense against microbial infection and insects [97]. Elicitors of P.c. carotovorum also induce PpPAL, PpCHS, PpLOX1 and the pathogenesis-related gene PpPR-1 [26]. ROS-responsive genes encoding alternative oxidase (PpAOX), NADPH-oxidase (PpNOX) and LOX (PpLOX7) are induced by chitosan [32], while B. cinerea and P.c. carotovorum elicitors induce the expression of P. patens genes encoding glutathione S-transferases and ascorbate peroxidases (Ponce de León et al., unpublished data). Mosses are known to contain a whole range of secondary metabolites that are not present in flowering plants. The P. patens genome was duplicated approximately 30 and 60 million years ago, and metabolic genes seem to have been retained in excess following duplication, probably contributing, in part, to the high versatility of moss metabolism [98]. Some of these metabolites, such as flavonoids, have played important roles in the adaptation of plants to land, helping them cope with a variety of stresses, including ultraviolet-B (UV-B) radiation, desiccation stress and co-evolving herbivores and pathogens. For example, P. patens has more members in its PAL and CHS multigene families than flowering plants [99,100], and some specific genes could contribute to host defense. Consistently, several genes of the phenylpropanoid pathway leading to flavonoid synthesis, including 4-coumarate:coenzyme A ligase, several CHS and chalcone isomerase, are induced in P. patens tissues after P.c. carotovorum elicitor treatments (Navarrete and Ponce de León et al., unpublished results). Moreover, recent studies showed that P. patens accumulates quercetin derivatives in response to UV-B radiation [99]. These flavonoids could also be involved in moss defense responses, since quercetin induces a resistance mechanism in Arabidopsis tissues in response to Pseudomonas syringae pv. tomato DC3000 infection, evidenced by an oxidative burst, callose deposition and induced expression of PR-1 and PAL [101]. In addition, the Pythium- and B. cinerea-inducible PpLOX1 [26,28] can use arachidonic acid as a substrate, leading to the production of oxylipins that are not present in flowering plants [102–104] and could contribute to the P. patens defense response.
PpLOX1 and PpLOX2 can produce 12-hydroperoxy eicosatetraenoic acid (12-HPETE) from arachidonate, which in turn serves as a substrate for a hydroperoxide lyase (HPL) [102,105] or for PpLOX1 and PpLOX2 themselves, which possess hydroperoxide-cleaving activity [102,103], leading to the production of different C8- and C9-oxylipins. P. patens HPL can also use 9-hydroperoxides of C18 fatty acids as substrates, producing (2E)-nonenal and C8-volatiles [105]. The aldehyde (2E)-nonenal could contribute to the defense of P. patens, since it has antimicrobial activity against certain pathogens, including Pseudomonas syringae pv. tomato and Phytophthora infestans [106]. Chitosan induces the production of secondary metabolites in P. patens, such as cyclic diterpenes, and increases transcript levels of genes encoding key biosynthetic enzymes of this metabolic pathway [31,107]. Inducible ent-kaurane-related diterpenoids play important roles in protecting vascular plants against microbial pathogens, as is the case for the causal agent of rice blast disease, Magnaporthe grisea [108], and for Rhizopus microsporus and Colletotrichum graminicola, which cause stalk rot in maize [109].

6. Defense Hormones

Plant hormones, including SA, JA, ethylene, abscisic acid (ABA) and auxins, are involved in the defense response of flowering plants against pathogens, and the role played by these hormones depends on the particular host-pathogen interaction [110]. In general, SA is effective in mediating plant resistance against biotrophs, whereas JA and ethylene are effective in mediating resistance against necrotrophs [111–114]. The interplay between these defense hormones, both agonistic and antagonistic, determines the outcome of the interaction and minimizes fitness costs, generating a flexible signaling network that allows fine-tuning of the inducible defense mechanisms [110,115,116]. P. patens is capable of producing ABA, auxin and cytokinin [117–119], and during the last few years, most studies on moss hormones have focused on ABA-dependent abiotic stress responses and the regulation of developmental processes by auxin and cytokinin [120–124]. To date, only a few studies have addressed moss hormones in plant-pathogen interactions. The role of ABA in defense responses depends on the infection stage, the type of tissue infected and the specific host-pathogen interaction [125]. Evidence indicates that ABA plays a role in the resistance of flowering plants through processes including stomatal closure, defense gene expression and ROS production/scavenging [57,125–128]. In flowering plants, ABA antagonizes resistance to B. cinerea, since ABA-deficient mutants are more resistant to infection [58,62,129]. Consistently, increased ABA levels contribute to the development of grey mould in tomato [57,125]. B. cinerea-infected P. patens plants showed a small increase in ABA content when mycelium growth was extensive, suggesting that ABA could be produced by B. cinerea itself [130] to promote susceptibility by interfering with defense signaling, such as the SA pathway, as has been reported previously for flowering plants [131,132]. Bryophytes produce ethylene [133,134], and the P. patens genome encodes proteins homologous to ethylene signaling components [18,135]. There are seven putative ethylene receptor proteins in P. patens [135], as well as genes encoding EIN3, EIL and ERF-type components, although the existence of a CTR1 component of ethylene signaling is less clear [136]. A mutation of the presumed ethylene binding site of PpETR7 inhibits the P. patens ethylene response, indicating that P. patens perceives ethylene through PpETR7 [136]. Ethylene induces defense mechanisms in flowering plants, including the production of phytoalexins and PR proteins, the induction of the phenylpropanoid pathway and cell wall modifications [137]. Resistance against B. cinerea is thought to be influenced by ethylene [138–140]. B. cinerea itself produces ethylene and can thereby interfere with plant defense signaling [141]. Ethylene production increases in Arabidopsis after B. cinerea infection [142], and pretreatment of tomato plants with ethylene results in increased resistance against B. cinerea, evidenced by decreased disease symptoms and fungal biomass [137]. In addition, ethylene-influenced phenylpropanoid metabolism, which leads to the accumulation of hydroxycinnamates and monolignols at the plant cell wall, is linked to ethylene-mediated resistance against B. cinerea [72]. Although the effect of ethylene on the P. patens defense system has not been studied directly, the ethylene precursor 1-aminocyclopropane-1-carboxylic acid (ACC) induces the expression of some defense genes in P. patens (Ponce de León et al., unpublished results), suggesting that, as in flowering plants, ethylene participates in the moss defense response. The use of the candidate ethylene receptor mutant Ppetr7-1 will contribute to understanding the role played by ethylene in the defense of P. patens against pathogen infection. Until very recently, it was unknown whether bryophytes produce SA and JA. The P. patens genome has 14 putative genes encoding PALs [99] and several putative homologues of isochorismate synthases, supporting the synthesis of SA in this moss. In addition, P. patens possesses at least seven LOX genes [104], two allene oxide synthase (AOS) genes [143,144], three allene oxide cyclase (AOC) genes [145,146] and several putative 12-oxo-phytodienoic acid (OPDA) reductase genes [147,148], which encode enzymes leading to the production of JA. To date, enzymatic activity has been confirmed for the LOXs, AOSs and AOC [104,143–146], although activity of OPR3, the only enzyme capable of converting cis-(+)-OPDA to JA, is still missing [147]. Like flowering plants, P. patens responds to B. cinerea and P. irregulare infection by increasing endogenous levels of the JA precursor OPDA [28,29,62,149]. Transcript levels of genes encoding enzymes involved in OPDA biosynthesis, including LOX and AOS, are induced in B. cinerea-infected tissues [28]. OPDA reductase transcript levels also increase in P. patens tissues in response to B. cinerea inoculation [28]. However, JA could not be detected in healthy, pathogen-infected, elicitor-treated or wounded P. patens tissues, suggesting that oxylipins are not further metabolized to JA [28,145,150]. Thus, cis-(+)-OPDA might function as a signaling molecule in P. patens instead of JA. Studies with the Arabidopsis opr3 mutant have shown that OPDA is active as a defense signal against pathogens and regulates defense gene expression [150–152]. Interestingly, moss tissues respond to the presence of OPDA and JA by decreasing rhizoid length and moss colony size [28], similarly to the reduced growth of seedlings and roots observed in OPDA- and methyl jasmonate (MeJA)-treated Arabidopsis [153–156]. Moreover, JA, MeJA and OPDA induced the expression of PAL in P. patens, showing that the presence of these oxylipins is sensed by this moss and signal transduction events are activated, leading to increased levels of defense-related transcripts [29]. The P. patens genome has six putative genes encoding the JA-isoleucine receptor COI (coronatine insensitive) and six encoding the repressor JAZ (jasmonate ZIM-domain) [157]. P. patens COI-like receptors could bind other oxylipins instead of JA-isoleucine, including cis-(+)-OPDA and/or cis-(+)-OPDA-isoleucine. Thus, the JA signaling pathway could have evolved after the divergence of bryophytes and vascular plants. In addition, the similarities between the auxin receptor (TIR1) and COI1 suggest that COI1 could have evolved from a TIR1 ancestor by gene duplication, leading to perception of JA-isoleucine through successive mutations [157]. Salicylic acid levels increase in response to B. cinerea infection in flowering plants [158,159] and in P. patens [28]. As in flowering plants, SA seems to play an important role in the defense of P. patens against microbial pathogens. SA treatment of moss tissues induces the expression of the defense gene PAL [28], and SA application induced defense mechanisms and increased resistance to P.c. carotovorum in P. patens colonies [160]. SA-mediated resistance could be due to the activation of similar defense mechanisms in mosses and flowering plants, since exogenous SA application to tobacco plants also increases resistance against P.c. carotovorum [161]. In flowering plants, SA plays a key role in the activation of defense mechanisms associated with the HR and participates in a feedback amplification loop, both upstream and downstream of cell death [162,163]. The generation of SA-deficient NahG transgenic moss plants will help to elucidate SA involvement in moss defense, including the HR-like response.

7. Conclusions

During land colonization, plants gradually evolved defense strategies to cope with radiation, desiccation stress and airborne pathogens by means of newly acquired specialized metabolic pathways, such as phenylpropanoid metabolism. Recently, significant progress has been made in sequencing the genomes of plants that occupy interesting positions within the evolutionary history of plants, including the non-vascular moss P. patens and the vascular spikemoss S. moellendorffii [18,22]. P. patens occupies a key position halfway between green algae and flowering plants, allowing evolutionary and comparative studies of defense mechanisms across the green plant lineage. Interestingly, it was recently shown that P. patens has acquired genes related directly or indirectly to defense mechanisms by means of horizontal gene transfer from fungi and viruses [164]. The possible uptake of foreign DNA from fungi associated with early land plants could have facilitated the transition to a hostile land environment [164,165]. P. patens responds to pathogen infection or elicitor treatment by inducing defense-related gene expression and producing metabolites and hormones that could play different roles in defense. Several defense mechanisms are shared between P. patens and flowering plants, and the functional conservation of some signaling pathways probably indicates common ancestral defense strategies [28–30,32,136]. While the JA signaling pathway may have evolved after the divergence of bryophytes and vascular plants, ethylene, ABA and SA likely have their origins in the early stages of land colonization. The use of P. patens mutants in key components of these signaling pathways will help to determine the role played by these hormones in moss defense. P. patens also offers the possibility of identifying novel metabolites, some of which are not present in flowering plants, including arachidonic acid-derived oxylipins that could play a role in defense responses. In addition, experimentation with P. patens could help to unravel defense pathways and gene functions in plants through the generation of knock-out mutants and single point mutations of genes involved in disease resistance, and to identify clear mutant phenotypes, thanks to the presence of a dominant gametophytic haploid phase [19]. Large-scale analyses of transcripts from pathogen-infected or elicitor-treated moss plants, together with functional genomic and comparative studies in flowering plants, will help to identify key components in the plant defense response and to design strategies to enhance plant resistance to biotic stress.
---
abstract: 'This paper discusses the problem of whether it is possible to annihilate elements of local cohomology modules by elements of arbitrarily small order under a fixed valuation. We first discuss the general problem and its relationship to the Direct Summand Conjecture, and next present two concrete examples where annihilators with small order are shown to exist. We then prove a more general theorem, where the existence of such annihilators is established in some cases using results on abelian varieties and the Albanese map.'
address:
- 'Department of Mathematics, University of Utah, 155 South 1400 East, Salt Lake City, UT 84112, USA'
- 'Department of Mathematics, University of Utah, 155 South 1400 East, Salt Lake City, UT 84112, USA'
- 'School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400005, India'
author:
- Paul Roberts
- 'Anurag K. Singh'
- 'V. Srinivas'
title: Annihilators of Local Cohomology in Characteristic Zero
---

Almost vanishing of local cohomology
====================================

The concept of almost vanishing that we use here comes out of recent work on *Almost Ring Theory* by Gabber and Ramero [@GR]. This theory was developed to give a firm foundation to the results of Faltings on *Almost étale extensions* [@Faltings], and these ideas have their origins in a classic work of Tate on *$p$-divisible groups* [@Tate]. The use of the general theory, for our purposes, is comparatively straightforward, but it illustrates the main questions in looking at certain homological conjectures, as discussed later in the section. The approach is heavily influenced by Heitmann's proof of the Direct Summand Conjecture for rings of dimension three [@Heitmann-dim3].

Let $A$ be an integral domain, and let $v$ be a valuation on $A$ with values in the abelian group of rational numbers; more precisely, $v$ is a function from $A$ to ${{\mathbb Q}}\cup\{\infty\}$ such that

1. $v(a)=\infty$ if and only if $a=0$,
2. $v(ab)=v(a)+v(b)$ for all $a,b\in A$, and
3. $v(a+b)\ge\min\{v(a),v(b)\}$ for all $a,b\in A$.

We will also assume that $v(a)\ge 0$ for all elements $a\in A$.

\[def:almostzero\] An $A$-module $M$ is *almost zero* if for every $m\in M$ and every real number $\epsilon>0$, there exists an element $a$ in $A$ with $v(a)<\epsilon$ and $am=0$.

When it is necessary to specify the valuation, we say that *$M$ is almost zero with respect to the valuation $v$*.

We note some properties of almost zero modules:

1. For an exact sequence $$0\to M'\to M\to M''\to 0\,,$$ the module $M$ is almost zero if and only if each of $M'$ and $M''$ is almost zero.
2. If $\{M_i\}$ is a directed system consisting of almost zero modules, then its direct limit $\dlim_iM_i$ is almost zero.

In [@GR] Gabber and Ramero define a module to be almost zero if it is annihilated by a fixed ideal ${{\mathfrak m}}$ of $A$ with ${{\mathfrak m}}={{\mathfrak m}}^2$. This set of modules also satisfies conditions (1) and (2), though in many cases their condition is stronger than the one in Definition \[def:almostzero\].

The *absolute integral closure* $R^+$ of a domain $R$ is the integral closure of $R$ in an algebraic closure of its fraction field. An important situation for us will be where $(R,{{\mathfrak m}})$ is a complete local ring. In this case, fix a valuation $v\colon R\to{{\mathbb Z}}\cup\{\infty\}$ which is positive on ${{\mathfrak m}}$. By Izumi's Theorem [@Izumi], two such valuations are bounded by constant multiples of each other.
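As a simple illustration of Definition \[def:almostzero\] (our own example, not part of the source argument): suppose $c\in A$ satisfies $0<v(c)<\infty$ and $A$ contains a compatible system of roots $c^{1/n}$ for all $n\ge1$. Then any $A$-module $M$ with $c^{1/n}M=0$ for all $n$ is almost zero, since $$v\big(c^{1/n}\big)=\tfrac{1}{n}\,v(c)\longrightarrow 0\quad\text{as }n\to\infty\,.$$ This is exactly the form in which annihilators of small order arise below, e.g., annihilation by $p^{1/n}$ or by $c^{1/n}$ for $c$ in the maximal ideal.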
Since $R^+$ is an integral extension, $v$ extends to a valuation $v\colon R^+\to{{\mathbb Q}}\cup\{\infty\}$. Let $A$ be a subring of $R^+$ containing $R$; we often take $A$ to be $R^+$. Note that $v$ is positive on the maximal ideal of $A$. The ring $A$ need not be Noetherian, and by a *system of parameters* for $A$, we shall mean a system of parameters for some Noetherian subring of $A$ that contains $R$. The main question we consider is whether the local cohomology modules $H^i_{{\mathfrak m}}(A)$ are almost zero for $i<\dim A$. Let $x_1,\dots,x_d$ be a system of parameters for $R$. Then the local cohomology module $H^i_{{\mathfrak m}}(A)$ is the $i$-th cohomology module of the Čech complex $$0\to A\to\oplus A_{x_i}\to\oplus A_{x_ix_j}\to\cdots\to A_{x_1\cdots x_d}\to 0\,.$$ The question whether $H^i_{{\mathfrak m}}(A)$ is almost zero for $i=0,\dots,d-1$ is closely related to the question whether the $x_i$ come close to forming a regular sequence in the following sense.

A sequence of elements $x_1,\dots,x_d\in A$ is an *almost regular sequence* if for each $i=1,\dots,d$, the module $$((x_1,\dots,x_{i-1}):_Ax_i)/(x_1,\dots,x_{i-1})$$ is almost zero. If every system of parameters for $A$ is an almost regular sequence, we say that $A$ is *almost Cohen-Macaulay*. The usual inductive argument as in [@Serre Theorem IV.2.3] shows that if $A$ is almost Cohen-Macaulay, then the modules $H^i_{{\mathfrak m}}(A)$ are almost zero for $i<\dim A$. However, we do not know whether the converse holds in general.

As motivation for the definitions introduced above, we discuss how these are related to the homological conjectures. Let $x_1,\dots,x_d$ be a system of parameters for a local ring $R$. Hochster's *Monomial Conjecture* states that $$x_1^t\cdots x_d^t\notin\big(x_1^{t+1},\dots,x_d^{t+1}\big)R\qquad\text{ for all } t\ge 0.$$ This is known to be true for local rings containing a field, and Heitmann [@Heitmann-dim3] proved it for local rings of mixed characteristic of dimension up to three. It remains open for mixed characteristic rings of higher dimension, where it is equivalent to several other conjectures such as the Direct Summand Conjecture (which states that regular local rings are direct summands of their module-finite extension rings), the Canonical Element Conjecture, and the Improved New Intersection Conjecture; for some of the related work, we mention [@EG; @Hochster-CBMS; @Hochster-ds; @PS] and [@Roberts-intersection]. The connection between the Monomial Conjecture and the almost Cohen-Macaulay property is evident from the following proposition.

Let $R$ be a local domain with an integral extension which is almost Cohen-Macaulay. Then the Monomial Conjecture holds for $R$, i.e., for each system of parameters $x_1,\dots,x_d$ of $R$, we have $$x_1^t\cdots x_d^t\notin\big(x_1^{t+1},\dots,x_d^{t+1}\big)R\qquad\text{ for all }t\ge 0\,.$$

Let $A$ be an integral extension of $R$ which is almost Cohen-Macaulay with respect to a valuation $v$ which is positive on the maximal ideal of $R$. Then $v(x_i)>0$ for each $i=1,\dots,d$; let $\epsilon$ be the minimum of these positive rational numbers. If $x_1^t\cdots x_d^t\in(x_1^{t+1},\dots, x_d^{t+1})R$ for some integer $t$, then $$x_1^t\cdots x_d^t = a_1x_1^{t+1}+\cdots+a_dx_d^{t+1}$$ for elements $a_i$ of $A$. (The $a_i$ can be chosen in $R$, though we will only consider them as elements of $A$.)
Rearranging terms in the above equation, we have $$x_1^t(x_2^t\cdots x_d^t-a_1x_1)\in\big(x_2^{t+1},\dots,x_d^{t+1}\big)A\,.$$ Since $A$ is almost Cohen-Macaulay, the elements $x_1^t,x_2^{t+1},\dots,x_d^{t+1}$ form an almost regular sequence. Hence there exists $c_1\in A$ with $v(c_1)<\epsilon/d$ and $$c_1(x_2^t\cdots x_d^t-a_1x_1)\in\big(x_2^{t+1},\dots,x_d^{t+1}\big)A\,.$$ This implies that $c_1x_2^t\cdots x_d^t\in(x_1,x_2^{t+1},\dots,x_d^{t+1})A$. We now repeat the process for $x_2$, i.e., we have $$c_1x_2^t\cdots x_d^t=b_1x_1+b_2x_2^{t+1}+\cdots+b_dx_d^{t+1}$$ with $b_i\in A$, so $$x_2^t(c_1x_3^t\cdots x_d^t-b_2x_2)\in\big(x_1,x_3^{t+1},\dots,x_d^{t+1}\big)A\,.$$ By an argument similar to the one above, there is an element $c_2\in A$ with $v(c_2)<\epsilon/d$ and $$c_1c_2x_3^t\cdots x_d^t\in\big(x_1,x_2,x_3^{t+1},\dots,x_d^{t+1}\big)A\,.$$ Repeating this procedure $d-2$ more times, we obtain elements $c_1,c_2,\dots,c_d$ in $A$ with $v(c_i)<\epsilon/d$ and $$c_1c_2\cdots c_d=u_1x_1+\cdots+u_dx_d$$ for some $u_i\in A$. But then $$v(c_1\cdots c_d)=v(c_1)+\cdots+v(c_d)<d(\epsilon/d)=\epsilon$$ whereas, since $v(u)\ge 0$ for all $u\in A$, we also have $$v(u_1x_1+\cdots+u_dx_d)\ge\min\{v(u_ix_i)\}\ge\min\{v(u_i)+v(x_i)\} \ge\min\{v(x_i)\}=\epsilon\,,$$ which is a contradiction.

To put the results of the remainder of the paper in context, we recall the situation in positive characteristic. Let $R$ be a complete local domain containing a field of characteristic $p>0$. We let $R_\infty$ denote the perfect closure of $R$, that is, $R_\infty$ is the ring obtained by adjoining to $R$ the $p^n$-th roots of all its elements.

\[prop:Rinfinity\] Let $(R,{{\mathfrak m}})$ be a complete local domain containing a field of prime characteristic. Then $R_\infty$ is almost Cohen-Macaulay with respect to any valuation which is positive on ${{\mathfrak m}}$.

Let $v$ be such a valuation, and let $x_1,\dots,x_d$ be a system of parameters for $R_\infty$. Suppose that $$\label{eqn:Rinfinity} ax_i=b_1x_1+\cdots+b_{i-1}x_{i-1}$$ for $a,b_j\in R_\infty$. Let $R'$ be the Noetherian subring of $R_\infty$ generated over $R$ by $a$, $b_1,\dots,b_{i-1}$, and $x_1,\dots,x_d$. By Cohen's structure theorem, $R'$ is a finite extension of a power series ring $S=K[[x_1,\dots,x_d]]$, where $K$ is a coefficient field. Let $m$ be the largest integer such that $R'$ contains a free $S$-module of rank $m$. In this case, the cokernel of the inclusion $$S^m\subseteq R'$$ is a torsion $S$-module, so there exists a nonzero element $c\in S$ such that $cR'\subseteq S^m$. Taking $p^n$-th powers in equation \[eqn:Rinfinity\] gives us $$a^{p^n}x_i^{p^n}\in\big(x_1^{p^n},\dots,x_{i-1}^{p^n}\big)R'\qquad\text{ for all }n\ge0\,.$$ Multiplying the above by $c$ and using $cR'\subseteq S^m$, we get $$ca^{p^n}x_i^{p^n}\in\big(x_1^{p^n},\dots,x_{i-1}^{p^n}\big)S^m\,.$$ Since $x_1^{p^n},\dots,x_i^{p^n}$ is a regular sequence on the free module $S^m$, it follows that $$ca^{p^n}\in\big(x_1^{p^n},\dots,x_{i-1}^{p^n}\big)S^m\subseteq\big(x_1^{p^n},\dots,x_{i-1}^{p^n}\big)R'\,.$$ Taking $p^n$-th roots in an equation for $ca^{p^n}\in(x_1^{p^n},\dots,x_{i-1}^{p^n})R'$ gives us $$c^{1/p^n}a\in\big(x_1,\dots,x_{i-1}\big)R_\infty\qquad\text{ for all }n\ge0\,.$$ Since the limit of $v(c^{1/p^n})$ is zero as $n\to\infty$, it follows that $R_\infty$ is almost Cohen-Macaulay.

In [@HHbig] Hochster and Huneke proved the much deeper fact that for an excellent local domain $R$ of positive characteristic, the ring $R^+$ is Cohen-Macaulay; see also [@HL].
We remark that the subring $R_\infty$ may not be Cohen-Macaulay in general: if $R$ is an $F$-pure ring which is not Cohen-Macaulay, then, since $R\hookrightarrow R_\infty$ is pure, $R_\infty$ is not Cohen-Macaulay either. If $R$ is a local domain containing a field of characteristic zero, then $R^+$ is typically not a big Cohen-Macaulay algebra. For example, let $R$ be a normal ring of characteristic zero which is not Cohen-Macaulay. Then the field trace map shows that $R$ splits from finite integral extensions. Consequently a nontrivial relation on a system of parameters for $R$ remains nontrivial in finite extensions, and hence in $R^+$. Specifically, for a normal ring $(R,{{\mathfrak m}})$ of characteristic zero, the map $$H^i_{{\mathfrak m}}(R)\to H^i_{{\mathfrak m}}(R^+)$$ is injective for all $i$. This leads to the following question. \[q1\] Let $(R,{{\mathfrak m}})$ be a complete local domain. For $i<\dim R$, is the image of the natural map $$H^i_{{\mathfrak m}}(R)\to H^i_{{\mathfrak m}}(R^+)$$ almost zero? The answer is affirmative if the ring $R$ contains a field of positive characteristic: this follows from Proposition \[prop:Rinfinity\], or from either of the stronger statements [@HHbig Theorem 1.1] or [@HL Theorem 2.1]. If $R$ is a three-dimensional ring of mixed characteristic $p$, Heitmann [@Heitmann-dim3] proved that the image of $H^2_{{\mathfrak m}}(R)$ in $H^2_{{\mathfrak m}}(R^+)$ is killed by $p^{1/n}$ for all integers $n\ge1$; more recently, he proved the stronger statement [@Heitmann2005 Theorem 2.9] that $H^2_{{\mathfrak m}}(R^+)$ is annihilated by $c^{1/n}$ for all $c\in{{\mathfrak m}}$ and $n\ge1$. Hence the answer to Question \[q1\] is also affirmative for mixed characteristic rings of dimension less than or equal to three.

Examples
========

In this section, we present two nontrivial examples where local cohomology modules of characteristic zero rings are annihilated by elements of arbitrarily small positive order. The examples are ${{\mathbb N}}$-graded, and in such cases it is natural to use the valuation arising from the grading: $v(r)$ is the least integer $n$ such that the $n$-th degree component of $r$ is nonzero. \[prop:normalization\] Let $K$ be a field of characteristic zero, and consider the hypersurface $S=K[x,y,z,w]/(xy-zw)$. For distinct elements $\alpha_i$ of $K$, let $\eta$ be a square root of $$\prod_{i=1}^4(x-\alpha_iz)\,.$$ Then the integral closure of $S[\eta]$ in its field of fractions is the ring $$R=S\left[\eta,\frac{w}{x}\eta,\frac{w^2}{x^2}\eta\right]\,.$$ The element $(w^2/x^2)\eta$ is integral over $S[\eta]$ since $$\left(\frac{w^2}{x^2}\eta\right)^2=\prod_{i=1}^4(w-\alpha_iy)\,.$$ A similar computation shows that $(w/x)\eta$ is integral over $S[\eta]$, and it remains to prove that the integral closure of $S[\eta]$ is generated by these elements. An element of the fraction field of $S[\eta]$ can be written as $a+b\eta$, with $a$ and $b$ from the fraction field of $S$. Now $a+b\eta$ is integral over $S$ if and only if its trace and norm are elements of $S$. Since $2$ is a unit in $S$, this is equivalent to $a\in S$ and $b^2\eta^2\in S$. Thus the integral closure of $S[\eta]$ is $S\oplus I\eta$, where $I$ is the fractional ideal consisting of elements $b$ with $b^2\eta^2\in S$. Since $S$ is a normal domain, $b^2\eta^2$ belongs to $S$ if and only if $v_{{\mathfrak p}}(b^2\eta^2)\ge0$ for all valuations $v_{{\mathfrak p}}$ corresponding to height one prime ideals ${{\mathfrak p}}$ of $S$.
Note that $v_{{\mathfrak p}}(\eta^2)>0$ precisely for the primes ${{\mathfrak p}}_0=(x,z)$ and ${{\mathfrak p}}_i=(x-\alpha_iz,w-\alpha_iy)$ for $1\le i\le4$. Since $v_{{{\mathfrak p}}_0}(\eta^2)=4$ and $v_{{{\mathfrak p}}_i}(\eta^2)=1$ for $1\le i\le4$, the condition for $b$ to be an element of $I$ is that $$v_{{{\mathfrak p}}_0}(b)\ge-2\qquad\text{ and }\qquad v_{{\mathfrak p}}(b)\ge0\text{ for all }{{\mathfrak p}}\neq{{\mathfrak p}}_0\,.$$ This implies that $v_{{\mathfrak p}}(bx^2)\ge0$ for all height one primes ${{\mathfrak p}}$, i.e., that $bx^2\in S$. Write $b=s/x^2$ with $s\in S$. Then $v_{(x,w)}(b)\ge0$ implies that $s$ must be in the ideal $(x,w)^2$. Hence $I$ is generated over $S$ by $1$, $w/x$, and $w^2/x^2$. \[ex1\] We continue in the notation of Proposition \[prop:normalization\], i.e., $R$ is the normalization of $S[\eta]$. The ring $R$ is normal by construction, and has dimension three. Since a normal ring of dimension at least two has depth at least two, it follows that $H^0_{{\mathfrak m}}(R)=0=H^1_{{\mathfrak m}}(R)$, where ${{\mathfrak m}}$ is the homogeneous maximal ideal of $R$. We show that there are elements of $R^+$ of arbitrarily small positive order annihilating the image of $H^2_{{\mathfrak m}}(R)$ in $H^2_{{\mathfrak m}}(R^+)$. Note that $x$, $y$, $z+w$ form a homogeneous system of parameters for the hypersurface $S$, and hence also for $R$. In the ring $R$, we have a relation on these elements given by the equation $$\frac{w}{x}\eta\cdot(z+w)=\eta\cdot y+\frac{w^2}{x^2}\eta\cdot x\,.$$ This is a nontrivial relation in the sense that $(w/x)\eta$ does not belong to the ideal generated by $x$ and $y$, so the ring $R$ is not Cohen-Macaulay. Consider the element of $H^2_{{\mathfrak m}}(R)$ given by this relation; it turns out that $H^2_{{\mathfrak m}}(R)$ is a one-dimensional $K$-vector space generated by this element; see Remark \[rem:Segre\]. Let $v$ be the valuation defined by the grading on $R$, i.e., $v$ takes value $1$ on $x$, $y$, $z$, and $w$, and $v(\eta)=2$. We construct elements $x_n$ in finite extensions $R_n$ of $R$ with $v(x_n)=1/2^n$ and $x_n(w/x)\eta\in(x,y)R_n$; it then follows that each $x_n$ annihilates the image of the map $H^2_{{\mathfrak m}}(R)\to H^2_{{\mathfrak m}}(R^+)$. Let $R_1$ be the extension ring of $R$ obtained by adjoining $\sqrt{x-\alpha_iz}$ for $1\le i\le 4$ and normalizing. We claim that the element $x_1=\sqrt{x-\alpha_1z}$ multiplies $(w/x)\eta$ into the ideal $(x,y)R_1$. To see this, note that $$\begin{gathered} x_1\frac{w}{x}\eta=x_1\frac{w}{x}\prod_{i=1}^4\sqrt{x-\alpha_iz} =(x-\alpha_1z)\frac{w}{x}\prod_{i=2}^4\sqrt{x-\alpha_iz}\\ =x\left(\frac{w}{x}\prod_{i=2}^4\sqrt{x-\alpha_iz}\right) -y\left(\alpha_1\prod_{i=2}^4\sqrt{x-\alpha_iz}\right)\,.\end{gathered}$$ The element $x_1$ has $v(x_1)=1/2$. To find an annihilator $x_2$ with $v(x_2)=1/4$, we first write $$x-\alpha_3z=\beta(x-\alpha_1z)-\gamma(x-\alpha_2z)$$ for suitable $\beta,\gamma\in K$, and then factor as a difference of squares to obtain $$\begin{gathered} x-\alpha_3z\\ =\left(\sqrt{\beta(x-\alpha_1z)}+\sqrt{\gamma(x-\alpha_2z)}\right) \left(\sqrt{\beta(x-\alpha_1z)}-\sqrt{\gamma(x-\alpha_2z)}\right)\,.\end{gathered}$$ We let $$x_2=\sqrt{\sqrt{\beta(x-\alpha_1z)}+\sqrt{\gamma(x-\alpha_2z)}}\,,$$ which is an element with $v(x_2)=1/4$.
Now $$x_2\sqrt{x-\alpha_3z}=\lambda\left(\sqrt{\beta(x-\alpha_1z)}+\sqrt{\gamma(x-\alpha_2z)}\right)$$ where $$\lambda=\sqrt{\sqrt{\beta(x-\alpha_1z)}-\sqrt{\gamma(x-\alpha_2z)}}\,,$$ and so $$\begin{gathered} x_2\eta=\lambda(x-\alpha_1z)\sqrt{\beta(x-\alpha_2z)(x-\alpha_4z)}\\ +\lambda(x-\alpha_2z)\sqrt{\gamma(x-\alpha_1z)(x-\alpha_4z)}\,.\end{gathered}$$ Using this, we get $$\begin{aligned} x_2\frac{w}{x}\eta &=x\left(\lambda\frac{w}{x}\sqrt{\beta(x-\alpha_2z)(x-\alpha_4z)}\right) -y\left(\lambda\alpha_1\sqrt{\beta(x-\alpha_2z)(x-\alpha_4z)}\right)\\ &\quad+x\left(\lambda\frac{w}{x}\sqrt{\gamma(x-\alpha_1z)(x-\alpha_4z)}\right) -y\left(\lambda\alpha_2\sqrt{\gamma(x-\alpha_1z)(x-\alpha_4z)}\right)\end{aligned}$$ and consequently $x_2(w/x)\eta\in(x,y)R_2$, where $R_2$ is the finite extension of $R$ obtained by adjoining the various roots occurring in the previous equation and normalizing. We describe briefly the process of constructing $x_n$ for $n\ge 3$. The first step is to write $\sqrt{x-\alpha_4z}$ in terms of $\sqrt{x-\alpha_1z}$ and $\sqrt{x-\alpha_2z}$ as we did for $\sqrt{x-\alpha_3z}$ above. This enables us to write $\sqrt{x-\alpha_3z}\sqrt{x-\alpha_4z}$ as a product of four square roots, each of which is a linear combination of $\sqrt{x-\alpha_1z}$ and $\sqrt{x-\alpha_2z}$. We can now repeat the process used to construct $x_2$, essentially replacing $x$ by $\sqrt{x-\alpha_3z}$ and $z$ by $\sqrt{x-\alpha_4z}$. Finally, we can repeat this process indefinitely, obtaining elements $x_n$ with $v(x_n)=1/2^n$ which annihilate the given element of local cohomology. \[rem:Segre\] The ring $R$ in the previous example can be obtained as a Segre product of rings of lower dimension, and we briefly discuss this point of view. Let $A$ and $B$ be ${{\mathbb N}}$-graded normal rings which are finitely generated over a field $A_0=B_0=K$. Their *Segre product* is the ring $$R=A\#B=\bigoplus_{n\ge0}A_n\otimes_KB_n\,,$$ which inherits a natural grading where $R_n=A_n\otimes_KB_n$. If $K$ is algebraically closed, then the tensor product $A\otimes_KB$ is a normal ring, and hence so is its direct summand $A\#B$. If $M$ and $N$ are ${{\mathbb Z}}$-graded modules over $A$ and $B$ respectively, then their Segre product is the $R$-module $$M\#N=\bigoplus_{n\in{{\mathbb Z}}}M_n\otimes_KN_n\,.$$ Using ${{\mathfrak m}}$ to denote the homogeneous maximal ideal of $R$, the local cohomology modules $H_{{\mathfrak m}}^k(R)$ can be computed using the Künneth formula due to Goto and Watanabe [@GW Theorem 4.1.5]: $$\begin{gathered} H^k_{{\mathfrak m}}(R)=\left(A\#H^k_{{{\mathfrak m}}_B}(B)\right)\ \oplus\ \left(H^k_{{{\mathfrak m}}_A}(A)\#B\right)\\ \oplus\bigoplus_{i+j=k+1}\left(H^i_{{{\mathfrak m}}_A}(A)\#H^j_{{{\mathfrak m}}_B}(B)\right)\,.\end{gathered}$$ It follows that if $A$ and $B$ have positive dimension, then $$\dim(A\#B)=\dim A+\dim B-1\,.$$ We claim that the ring $R$ in Example \[ex1\] is isomorphic to the Segre product $A\#B$, where $$A=K[a,b,c]/\big(c^2-\prod_{i=1}^4(a-\alpha_ib)\big)$$ is a hypersurface with $\deg a=\deg b=1$ and $\deg c=2$, and $B=K[s,t]$ is a standard graded polynomial ring. The map $$\begin{aligned} x&\mapsto as\,,&y&\mapsto bt\,,&z&\mapsto bs\,,&w&\mapsto at\,,\\ \eta&\mapsto cs^2\,,&(w/x)\eta&\mapsto cst\,,&(w/x)^2\eta&\mapsto ct^2\end{aligned}$$ extends to a $K$-algebra homomorphism $\phi\colon R\to A\#B$. This is a surjective homomorphism of integral domains of equal dimension, so it must be an isomorphism.
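Though it is not needed for the argument, one can check directly that $\phi$ respects the defining relations: $$xy-zw\mapsto(as)(bt)-(bs)(at)=0\,,$$ while $$\eta^2=\prod_{i=1}^4(x-\alpha_iz)\mapsto\prod_{i=1}^4(as-\alpha_ibs)=s^4\prod_{i=1}^4(a-\alpha_ib)=\big(cs^2\big)^2\,,$$ consistent with $\eta\mapsto cs^2$.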
Since $A$ and $B$ are Cohen-Macaulay rings of dimension $2$, the Künneth formula for $H^2_{{\mathfrak m}}(R)$ reduces to $$H^2_{{\mathfrak m}}(R)=\left(A\#H^2_{{{\mathfrak m}}_B}(B)\right)\ \oplus\ \left(H^2_{{{\mathfrak m}}_A}(A)\#B\right)\,.$$ The module $H^2_{{{\mathfrak m}}_B}(B)$ vanishes in nonnegative degrees, which implies that $A\#H^2_{{{\mathfrak m}}_B}(B)=0$. The component of $H^2_{{{\mathfrak m}}_A}(A)$ in nonnegative degree is the one-dimensional vector space spanned by the degree $0$ element $$\left[\frac{c}{ab}\right]\in H^2_{{{\mathfrak m}}_A}(A)\,.$$ Hence $H^2_{{\mathfrak m}}(R)$ is the one-dimensional vector space spanned by $[c/ab]\otimes 1$. The search for elements $x_n\in R^+$ of small degree annihilating the image of $H^2_{{\mathfrak m}}(R)$ in $H^2_{{\mathfrak m}}(R^+)$ is essentially the search for homogeneous elements of $A^+$, of small degree, multiplying $c$ into the ideal $(a,b)A^+$. \[ex2\] Let $K$ be an algebraically closed field of characteristic zero, $\theta\in K$ a primitive cube root of unity, and set $$A=K[x,y,z]/\big(\t x^3+\t^2 y^3+z^3\big)\,.$$ Let $R$ be the Segre product of $A$ and the polynomial ring $K[s,t]$. Then $R$ is a normal ring of dimension $3$, and the elements $sx$, $ty$, $sy+tx$ form a homogeneous system of parameters for $R$. Using the Künneth formula as in Remark \[rem:Segre\], the local cohomology module $H^2_{{\mathfrak m}}(R)$ is a one-dimensional vector space spanned by an element corresponding to the relation $$sztz(sy+tx)=(sz)^2ty+(tz)^2sx\,.$$ To annihilate this relation by an element of $R^+$ of positive degree $\epsilon\in{{\mathbb Q}}$, it suffices to find an element $u\in A^+$ of degree $\epsilon$ such that $$uz^2\in(x,y)A^+\,;$$ indeed if $uz^2=vx+wy$ for homogeneous $v,w\in A^+$ of degree $1+\epsilon$, then $$(s^{\epsilon}u)(sztz)=(s^{\epsilon}tv)(sx)+(s^{1+\epsilon}w)(ty)\,,$$ and $s^{\epsilon}tv$ and $s^{1+\epsilon}w$ are easily seen to be integral over $R$. We have now reduced our problem to working over the hypersurface $A$, where we are looking for elements $u\in A^+$ of small degree which annihilate $$\left[\frac{z^2}{xy}\right]\in H^2_{{{\mathfrak m}}_A}(A^+)\,.$$ Let $A_1$ be the extension of $A$ obtained by adjoining $x_1$, $y_1$, $z_1$, where $$x_1^3=\t^{1/3}x+\t^{2/3}y\,,\qquad y_1^3=\t^{1/3}x+\t^{5/3}y\,,\qquad z_1^3 =\t^{1/3}x+\t^{8/3}y\,.$$ Note that $x$ and $y$ can be written as $K$-linear combinations of $x_1^3$ and $y_1^3$. Moreover, $$\begin{gathered} (x_1y_1z_1)^3=\big(\t^{1/3}x+\t^{2/3}y\big)\big(\t^{1/3}x+\t^{5/3}y\big)\big(\t^{1/3}x+\t^{8/3}y\big)\\ =\t x^3+\t^2y^3=-z^3\,,\end{gathered}$$ so $z$ belongs to the $K$-algebra generated by $x_1$, $y_1$, and $z_1$. Now $$\begin{gathered} \t x_1^3+\t^2 y_1^3+z_1^3=\t\big(\t^{1/3}x+\t^{2/3}y\big)+\t^2\big(\t^{1/3}x+\t^{5/3}y\big)+\big(\t^{1/3}x+\t^{8/3}y\big)\\ =\big(\t^{4/3}+\t^{7/3}+\t^{1/3}\big)x+\big(\t^{5/3}+\t^{11/3}+\t^{8/3}\big)y=0\,,\end{gathered}$$ which implies that $$A_1=K[x_1,y_1,z_1]/\big(\t x_1^3+\t^2 y_1^3+z_1^3\big)$$ is a ring isomorphic to $A$. Thus $A\subset A_1$ gives a finite embedding of $A$ into itself under which the generators of degree $1$ go to elements of degree $3$; or, in terms of the original degree, the new generators of the homogeneous maximal ideal have degree $1/3$. Since $[H^2_{{{\mathfrak m}}_A}(A)]_0$ is annihilated by all elements of positive degree, the image of $[H^2_{{{\mathfrak m}}_A}(A)]_0$ in $H^2_{{{\mathfrak m}}_A}(A_1)$ is annihilated by elements of degree $1/3$.
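The vanishing of the two coefficient sums in the last display is simply the identity $1+\t+\t^2=0$ for a primitive cube root of unity: explicitly, $$\t^{4/3}+\t^{7/3}+\t^{1/3}=\t^{1/3}\big(\t+\t^2+1\big)=0\qquad\text{ and }\qquad \t^{5/3}+\t^{11/3}+\t^{8/3}=\t^{5/3}\big(1+\t^2+\t\big)=0\,.$$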
Iterating this construction, we conclude that there are elements of arbitrarily small positive degree annihilating the image of $[H^2_{{{\mathfrak m}}_A}(A)]_0$ in $H^2_{{{\mathfrak m}}_A}(A^+)$. Quite explicitly, we have a tower of extensions $$A=A_0\subset A_1\subset A_2\subset\dots\quad\text{ where }\quad A_n=K[x_n,y_n,z_n]/\big(\t x_n^3+\t^2 y_n^3+z_n^3\big)\,.$$ The maps $H^2_{{\mathfrak m}}(A_n)\to H^2_{{\mathfrak m}}(A_{n+1})$ preserve degrees, so $[H^2_{{{\mathfrak m}}_A}(A)]_0$ maps to the socle of $H^2_{{\mathfrak m}}(A_n)$, which is killed by all elements of $A_n$ of positive degree, e.g., by the elements $x_n,y_n,z_n$ which have degree $1/3^n$. In [@Heitmann2005 Theorem 2.9] Heitmann proves that if $(R,{{\mathfrak m}})$ is a mixed characteristic excellent local domain of dimension three, then the image of $H^2_{{\mathfrak m}}(R)$ in $H^2_{{\mathfrak m}}(R^+)$ is annihilated by arbitrarily small powers of every non-unit. The corresponding statement is false for three-dimensional domains of characteristic zero: for the ring $R$ of Example \[ex2\], we claim that $\sqrt{sz}$ does not annihilate the image of $H^2_{{\mathfrak m}}(R)\to H^2_{{\mathfrak m}}(R^+)$. Because of the splitting provided by field trace, it suffices to verify that $$\sqrt{sz}\left(sztz\right)\notin(sx,ty)T\,,$$ where $T$ is any normal subring of $R^+$ containing $R[\sqrt{sz}]$. Take $T$ to be the Segre product of $\tilde{A}=A[\sqrt{x},\sqrt{y},\sqrt{z}]$ and $\tilde{B}=B[\sqrt{s},\sqrt{t}]$. Note that $\tilde{B}$ is a polynomial ring in $\sqrt{s}$ and $\sqrt{t}$, and that $\tilde{A}$ is the hypersurface $$K[\sqrt{x},\sqrt{y},\sqrt{z}]/\left(\t(\sqrt{x})^6+\t^2(\sqrt{y})^6+(\sqrt{z})^6\right)\,.$$ It is enough to check that $\sqrt{sz}\left(sztz\right)\notin(sx,ty)(\tilde{A}\otimes_K\tilde{B})$, and, after specializing $\sqrt{s}\mapsto 1$ and $\sqrt{t}\mapsto 1$, that $$(\sqrt{z})^5\notin\left((\sqrt{x})^2,(\sqrt{y})^2\right)\tilde{A}\,,$$ which is immediately seen to be true. The same argument shows that $\sqrt{sx}$, $\sqrt{sy}$, etc. do not annihilate the image of $H^2_{{\mathfrak m}}(R)\to H^2_{{\mathfrak m}}(R^+)$. The situation is quite similar for Example \[ex1\].

Annihilators using the Albanese map
===================================

For an ${{\mathbb N}}$-graded domain $R$ which is finitely generated over a field $R_0$, let $R^{+\GR}$ be the ${{\mathbb Q}}_{\ge0}$-graded ring generated by those elements of $R^+$ which can be assigned a degree such that they satisfy a homogeneous equation of integral dependence over $R$. If $R_0$ is a field of prime characteristic, Hochster and Huneke [@HHbig Theorem 5.15] proved that the induced map $$H^i_{{\mathfrak m}}(R)\to H^i_{{\mathfrak m}}(R^{+\GR})$$ is zero for all $i<\dim R$. Translating to projective varieties, one immediately has the following vanishing theorem: [@HHbig Theorem 1.2] Let $X$ be an irreducible closed subvariety of ${{\mathbb P}}^n_K$, where $K$ is a field of positive characteristic. Then for all integers $i$ with $0<i<\dim X$, and all integers $t$, there exists a projective variety $Y$ over a finite extension of $K$ with a finite surjective morphism $f\colon Y\to X$, such that the induced map $$H^i(X,{{\mathcal O}}_X(t))\to H^i(Y,f^*{{\mathcal O}}_X(t))$$ is zero. Over fields of characteristic zero, the corresponding statements are false because of the splitting provided by field trace. However, the following graded analogue of Question \[q1\] remains open.
\[q:gr1\] Let $R$ be an ${{\mathbb N}}$-graded domain, finitely generated over a field $R_0$ of characteristic zero. For $i<\dim R$, is every element of the image of $$H^i_{{\mathfrak m}}(R)\to H^i_{{\mathfrak m}}(R^{+\GR})$$ killed by elements of $R^{+\GR}$ of arbitrarily small positive degree? This question, when considered for Segre products, leads to the following: \[q:gr2\] Let $R$ be an ${{\mathbb N}}$-graded domain of dimension $d$, finitely generated over a field $R_0$ of characteristic zero. Is the image of $$\left[H^d_{{\mathfrak m}}(R)\right]_{\ge0}\to H^d_{{\mathfrak m}}(R^{+\GR})$$ killed by elements of $R^{+\GR}$ of arbitrarily small positive degree? In Examples \[ex1\] and \[ex2\], we obtained affirmative answers to Question \[q:gr1\] by explicitly constructing the annihilators. In this section, we obtain an affirmative answer for the image of $[H^2_{{\mathfrak m}}(R)]_0$ and also settle Question \[q:gr2\] for rings of dimension two. We first recall some basic facts about the relationship between graded rings and very ample divisors. If $R$ is a standard graded ring, the associated projective scheme $X={\operatorname{Proj}}R$ has a very ample line bundle ${{\mathcal O}}_X(1)$ with sections defined by elements of degree one, which generate the line bundle. Conversely, a very ample line bundle defines a standard graded ring and an embedding of $X$ into projective space. The strategy is to find, for an arbitrarily large positive integer $n$, a finite surjective map from an integral projective scheme $Y$ to $X$, together with an ample line bundle ${{\mathcal L}}$ on $Y$, such that ${{\mathcal L}}^{\otimes n}$ is the pullback of ${{\mathcal O}}_X(1)$ and such that a section of ${{\mathcal L}}$ annihilates the pullback of the given element of cohomology. This will essentially be accomplished by mapping $X$ to its Albanese variety, and pulling back by the multiplication by $N$ map for large integers $N$. The precise result that we prove is as follows. \[thm:main\] Let $R$ be an ${{\mathbb N}}$-graded domain which is finitely generated over a field $R_0$ of characteristic $0$. Let $X={\operatorname{Proj}}R$ and let $\eta$ be an element of $H^1(X,{{\mathcal O}}_X)$. Then, for every $\epsilon>0$, there exists a finite extension $R\subseteq S$ of ${{\mathbb Q}}$-graded domains such that the image of $\eta$ under the induced map $$H^1(X,{{\mathcal O}}_X)\to H^1(Y,{{\mathcal O}}_Y)\qquad\text{ where }Y={\operatorname{Proj}}S$$ is annihilated by an element of $S$ of degree less than $\epsilon$. We remark that since $H^1(X,{{\mathcal O}}_X)$ corresponds to the component of $H_{{\mathfrak m}}^2(R)$ of degree zero, this theorem only implies that we can annihilate elements of $H^2_{{\mathfrak m}}(R)$ of degree zero by elements of small degree. If $H^2_{{\mathfrak m}}(R)$ is generated by its degree zero elements (and this happens in several interesting examples), we can deduce the result for all elements of $H^2_{{\mathfrak m}}(R)$. Replacing $R$ by its normalization, it suffices to work throughout with normal rings. We also reduce to the case where $R$ is a standard ${{\mathbb N}}$-graded ring as follows. Using [@EGA2 Lemme 2.1.6], $R$ has a Veronese subring $R^{(t)}$ which is generated by elements of equal degree.
The local cohomology of $R^{(t)}$ supported at its homogeneous maximal ideal ${{\mathfrak m}}$ can be obtained by [@GW Theorem 3.1.1] which states that $$H^i_{{\mathfrak m}}(R^{(t)})=\bigoplus_{n\in{{\mathbb Z}}}[H^i_{{\mathfrak m}}(R)]_{nt}\,.$$ In particular, we have $$[H^2_{{\mathfrak m}}(R^{(t)})]_0=[H^2_{{\mathfrak m}}(R)]_0=H^1(X,{{\mathcal O}}_X)\,.$$ If elements of this cohomology group can be annihilated in graded finite extensions of $R^{(t)}$, then the same can be achieved in extensions of $R$. We next treat the special case where ${\operatorname{Proj}}R$ is itself an abelian variety, which we denote $A$. For each integer $N$, let $[N_A]\colon A\to A$ be the morphism corresponding to multiplication by $N$. Assume further that the very ample sheaf ${{\mathcal O}}_A(1)$ defining the graded ring $R$ satisfies the condition that $$\label{eq:neg1} [(-1)_A]^*({{\mathcal O}}_A(1))={{\mathcal O}}_A(1)\,.$$ Note that, if ${\mathcal L}$ is any very ample line bundle on $A$, the new very ample line bundle $${{\mathcal O}}_A(1)={\mathcal L}\otimes [(-1)_A]^*{\mathcal L}$$ satisfies this further assumption. We recall two facts about abelian varieties from Mumford [@Mumford]:

1. $H^1(A,{{\mathcal O}}_A(1))=0$. By “The Vanishing Theorem” [@Mumford page 150], given a line bundle ${{\mathcal L}}$, there is a unique integer $i$ such that $H^i(A,{{\mathcal L}})$ is nonzero. Since ${{\mathcal O}}_A(1)$ is very ample, this integer must be $0$.

2. $[N_A]^*({{\mathcal O}}_A(1))={{\mathcal O}}_A(N^2)$. This follows from [@Mumford Corollary II.6.3] since we are assuming \[eq:neg1\].

The theorem in this case follows from these two properties: the morphism $[N_A]$ induces a map $$R=\bigoplus_n\Gamma(A,{{\mathcal O}}_A(n))\to\bigoplus_n\Gamma(A,[N_A]^*{{\mathcal O}}_A(n))\,,$$ and, by the second property above, $$\Gamma(A,[N_A]^*{{\mathcal O}}_A(n))=\Gamma(A,{{\mathcal O}}_A(N^2n))\,.$$ Thus we have a map of graded rings from $R$ to itself that takes an element of degree $1$ to an element of degree $N^2$. Denote the new copy of $R$ by $S$, and regrade $S$ with a ${{\mathbb Q}}$-grading such that the map $R\to S$ preserves degrees. This implies that $S$ has elements $s$ of degree $1/N^2$ under the new grading. Such an element $s$ must annihilate the image of $\eta\in H^1(A,{{\mathcal O}}_A)$, since the product $s\cdot\eta$ lies in $H^1(A,{{\mathcal O}}_A(1))=0$. Hence for each positive integer $N$, we have found a finite extension of $R$ with an element of degree $1/N^2$ that annihilates the image of $\eta$. The remainder of the proof is devoted to reducing to the previous case. Let $R$ be a graded domain such that $X={\operatorname{Proj}}R$ is normal. Let $A$ be the *strict Albanese variety* of $X$ as defined in [@Chevalley]. It is the dual abelian variety to the Picard variety of $X$ (in the sense of Chevalley-Grothendieck), which parametrizes line bundles algebraically equivalent to $0$. Let $\phi\colon X\to A$ be the corresponding Albanese morphism. Then (since the ground field has characteristic 0) $\phi$ induces an isomorphism $$H^1(A,{{\mathcal O}}_A)\cong H^1(X,{{\mathcal O}}_X)\,,$$ see Chevalley [@Chevalley]. Since $A$ is an abelian variety, it has a very ample invertible sheaf ${{\mathcal O}}_A(1)$, see for example [@Mumford pp. 60–62].
After replacing ${{\mathcal O}}_A(1)$ by ${{\mathcal O}}_A(1)\otimes [(-1)_A]^*{{\mathcal O}}_A(1)$ if necessary, we may assume that $$[(-1)_A]^*({{\mathcal O}}_A(1))\cong {{\mathcal O}}_A(1)\,.$$ We let ${{\mathcal O}}_X(1)$ denote the very ample invertible sheaf defined by the grading on $R$. Let $\pi_1\colon Y_1\to X$ be the pullback of multiplication by $N$ on $A$, and let $\phi_1\colon Y_1\to A$ be the map induced by $\phi$, so that we have the fiber product diagram below. $$\xymatrix{Y_1\ar[r]^{\phi_1}\ar[d]_{\pi_1}& A\ar[d]^{[N_A]}\\ X\ar[r]^{\phi} & A}$$ Let ${{\mathcal M}}_1=\phi_1^*({{\mathcal O}}_A(1))$. Then $$\pi_1^*(\phi^*({{\mathcal O}}_A(1)))=\phi_1^*([N_A]^*({{\mathcal O}}_A(1)))=\phi_1^*({{\mathcal O}}_A(N^2))={{\mathcal M}}_1^{\otimes N^2}.$$ Now let $m$ be an integer such that $\phi^*({{\mathcal O}}_A(-1))\otimes {{\mathcal O}}_X(m)$ is globally generated; such an $m$ exists since ${{\mathcal O}}_X(1)$ is ample, and we fix one such $m$. Since the sheaf $\phi^*({{\mathcal O}}_A(-1))\otimes {{\mathcal O}}_X(m)$ is globally generated, there exists a map $\psi\colon X\to{{\mathbb P}}^n$ such that $$\psi^*({{\mathcal O}}_{{{\mathbb P}}^n}(1))=\phi^*({{\mathcal O}}_A(-1))\otimes {{\mathcal O}}_X(m)\,.$$ Let $\alpha\colon{{\mathbb P}}^n\to{{\mathbb P}}^n$ be a finite map such that $\alpha^*({{\mathcal O}}_{{{\mathbb P}}^n}(1))={{\mathcal O}}_{{{\mathbb P}}^n}(N)$; for example, we can take $\alpha$ to be the map defined by the ring homomorphism on a polynomial ring that sends the variables to their $N$-th powers. Let $Y_2$ be the fiber product of $\psi$ and $\alpha$, which gives us a diagram $$\xymatrix{Y_2 \ar[r]^{\phi_2}\ar[d]_{\pi_2}& {{\mathbb P}}^n\ar[d]^{\alpha} \\ X \ar[r]^{\psi} & {{\mathbb P}}^n\,.}$$ Let ${{\mathcal M}}_2=\phi_2^*({{\mathcal O}}_{{{\mathbb P}}^n}(1))$. We then have $$\begin{gathered} \pi_2^*(\phi^*({{\mathcal O}}_A(-1))\otimes{{\mathcal O}}_X(m))\cong\pi_2^*(\psi^*({{\mathcal O}}_{{{\mathbb P}}^n}(1)))=\phi_2^*(\alpha^*({{\mathcal O}}_{{{\mathbb P}}^n}(1)))\\ \cong \phi_2^*({{\mathcal O}}_{{{\mathbb P}}^n}(N))\cong {{\mathcal M}}_2^{\otimes N}\,.\end{gathered}$$ Note that the above morphisms $\pi_i\colon Y_i\to X$, $i=1,2$, are *finite and surjective*. Let $Y$ be a component of the normalization of the reduced fiber product of $\pi_1\colon Y_1\to X$ and $\pi_2\colon Y_2\to X$, with the property that $Y\to X$ is surjective. Since $Y_1\times_XY_2\to X$ is finite and surjective, some irreducible component of this fiber product maps onto $X$ via a finite morphism, and we may take $Y$ to be the normalization of any such component. We then have an induced finite surjective map $\pi\colon Y\to X$, and induced maps $\mu_1\colon Y\to Y_1$ and $\mu_2\colon Y\to Y_2$, giving a commutative diagram $$\xymatrix{Y \ar[r]^{\mu_1}\ar[d]_{\mu_2}\ar[dr]^{\pi}& Y_1\ar[d]^{\pi_1} \\ Y_2 \ar[r]^{\pi_2} & X\,.}$$ By construction, we have $$\begin{gathered} \pi^*{{\mathcal O}}_X(m)=\pi^*(\phi^*{{\mathcal O}}_A(1)\otimes\phi^*{{\mathcal O}}_A(-1)\otimes{{\mathcal O}}_X(m))\\ =\mu_1^*\pi_1^*(\phi^*({{\mathcal O}}_A(1)))\otimes \mu_2^*\pi_2^*(\phi^*({{\mathcal O}}_A(-1))\otimes{{\mathcal O}}_X(m))\\ =\mu_1^*({{\mathcal M}}_1^{\otimes N^2})\otimes \mu_2^*({{\mathcal M}}_2^{\otimes N}) = {{\mathcal M}}^{\otimes N}\,,\end{gathered}$$ where ${{\mathcal M}}=\mu_1^*{{\mathcal M}}_1^{\otimes N}\otimes \mu_2^*{{\mathcal M}}_2.$ Now ${{\mathcal M}}_1= \phi_1^*({{\mathcal O}}_A(1))$ is generated by global sections of the form $\phi_1^*(u)$ with $u\in H^0(A,{{\mathcal O}}_A(1))$.
Choose such an element $u$ whose image in $H^0(Y,\mu_1^*{{\mathcal M}}_1)$ is nonzero. Let $v$ be a nonzero element of $H^0(Y,\mu_2^*{{\mathcal M}}_2)$, and let $s$ be the image of $\mu_1^*\phi_1^*(u^N)\otimes v$ in $H^0(Y,{{\mathcal M}})$. Then $s$ is a nonzero section of ${{\mathcal M}}$, and since $\pi^*({{\mathcal O}}_X(m))={{\mathcal M}}^{\otimes N}$, the degree of $s$ in the grading induced from that on $R$ is $m/N$. We claim that the composition $$\CD H^1(X,{{\mathcal O}}_X)@>\pi^*>>H^1(Y,{{\mathcal O}}_Y)@>\cdot s>>H^1(Y,{{\mathcal M}}) \endCD$$ vanishes. For this, it suffices to show that the composition $$\CD H^1(X,{{\mathcal O}}_X)@>\pi^*>>H^1(Y,{{\mathcal O}}_Y)@>\mu_1^*\phi_1^*(u)>>H^1(Y,\mu_1^*{{\mathcal M}}_1) \endCD$$ vanishes, which in turn reduces to showing that $$\CD H^1(X,{{\mathcal O}}_X)@>\pi_1^*>>H^1(Y_1,{{\mathcal O}}_{Y_1})@>\phi_1^*(u)>>H^1(Y_1,{{\mathcal M}}_1) \endCD$$ vanishes. Since $\phi^*\colon H^1(A,{{\mathcal O}}_A)\to H^1(X,{{\mathcal O}}_X)$ is an isomorphism, we further reduce to showing that $$\CD H^1(A,{{\mathcal O}}_A)@>[N_A]^*>>H^1(A,{{\mathcal O}}_A)@>\cdot u>>H^1(A,{{\mathcal O}}_A(1)) \endCD$$ vanishes. But this is true since $H^1(A,{{\mathcal O}}_A(1))=0$. Since we can make $m/N$ arbitrarily small by choosing $N$ large, this completes the proof. As a corollary, we see that the answer to Question \[q:gr2\] is affirmative for rings of dimension two: \[cor:dim2\] Let $R$ be an ${{\mathbb N}}$-graded domain of dimension $2$, which is finitely generated over a field $R_0$ of characteristic zero. Then the image of $$\left[H^2_{{\mathfrak m}}(R)\right]_{\ge0}\to H^2_{{\mathfrak m}}(R^{+\GR})$$ is killed by elements of $R^{+\GR}$ of arbitrarily small positive degree. By adjoining roots of elements if necessary, we may assume that $R$ has a system of parameters $x,y$ consisting of linear forms. Theorem \[thm:main\] implies that the image of $[H^2_{{\mathfrak m}}(R)]_0$ is killed by elements of $R^{+\GR}$ of arbitrarily small positive degree, so it suffices to prove that $[H^2_{{\mathfrak m}}(R)]_{\ge0}$ is the $R$-module generated by $[H^2_{{\mathfrak m}}(R)]_0$. Since $x$ and $y$ are linear forms, we have $[R_{xy}]_{n+1}=[R_{xy}]_n\cdot R_1$ for all integers $n$. Computing $H^2_{{\mathfrak m}}(R)$ via the Čech complex on $x,y$, we find that $$\left[H^2_{{\mathfrak m}}(R)\right]_{n+1}=\left[H^2_{{\mathfrak m}}(R)\right]_n\cdot R_1 \qquad\text{ for all }n\in{{\mathbb Z}}\,.\qedhere$$

Closure operations
==================

The issues discussed here are closely related to closure operations considered by Hochster and Huneke, and by Heitmann. The *plus closure* of an ideal ${{\mathfrak a}}$ of a domain $R$ is defined as ${{\mathfrak a}}^+={{\mathfrak a}}R^+\cap R$. It has desirable properties for rings of prime characteristic, e.g., it bounds colon ideals on systems of parameters: if $x_1,\dots,x_d$ is a system of parameters for an excellent local domain $R$ containing a field of prime characteristic, then $$(x_1,\dots,x_{i-1}):_Rx_i\subseteq (x_1,\dots,x_{i-1})^+\qquad\text{ for all }i\,.$$ In general, plus closure does not have this colon-capturing property for rings of characteristic zero or of mixed characteristic. Several alternative closure operations are defined by Heitmann in [@Heitmann2001], including the *extended plus closure*. Building on these ideas, he settled the Direct Summand Conjecture for mixed characteristic rings of dimension three [@Heitmann-dim3].
In [@Heitmann2005 Theorem 1.3] Heitmann proved that extended plus closure has the colon-capturing property for arbitrary sets of three parameters in excellent domains of mixed characteristic. Let $(R,{{\mathfrak m}})$ be a complete local domain and fix, as usual, a valuation $v\colon R\to{{\mathbb Z}}\cup\{\infty\}$ which is positive on ${{\mathfrak m}}$, and extend it to $v\colon R^+\to{{\mathbb Q}}\cup\{\infty\}$. In [@HHJPAA] Hochster and Huneke define the *dagger closure* ${{\mathfrak a}}^\dagger$ of an ideal ${{\mathfrak a}}$ as the ideal consisting of all elements $x\in R$ for which there exist elements $u\in R^+$, of arbitrarily small positive order, with $ux\in{{\mathfrak a}}R^+$. In [@HHJPAA Theorem 3.1] it is proved that the dagger closure ${{\mathfrak a}}^\dagger$ agrees with the tight closure ${{\mathfrak a}}^*$ for ideals of complete local domains of prime characteristic; see also [@HHJAMS §6]. While tight closure is defined in characteristic zero by reduction to prime characteristic, the definition of dagger closure is characteristic-free. \[q:dagger\] Does the dagger closure operation have the colon-capturing property, i.e., if $x_1,\dots,x_d$ is a system of parameters for a complete local domain $R$, is it true that $$(x_1,\dots,x_{i-1}):_Rx_i\subseteq(x_1,\dots,x_{i-1})^\dagger\,?$$ According to Hochster and Huneke [@HHJPAA page 244], “it is important to raise (and answer) this question.” If Question \[q:dagger\] has an affirmative answer, then so does Question \[q1\]. Consider the hypersurface $K[[x,y,z]]/(x^3+y^3+z^3)$. If $K$ has prime characteristic, a straightforward calculation, performed in many an introductory lecture on tight closure theory, shows that $z^2\in(x,y)^*$. If $K$ has characteristic zero, the “reduction modulo $p$” nature of the definition of tight closure [@HHJAMS §3] immediately yields $z^2\in (x,y)^*$ once again. In contrast, the computation that $z^2\in (x,y)^\dagger$ is quite delicate and, aside from the linear change of variables, is the computation we performed in Example \[ex2\]. While concrete descriptions of the multipliers of small order are available in this and some other examples, dagger closure remains quite mysterious even in simple examples such as diagonal hypersurfaces: Let $K$ be a field of characteristic zero, and let $$R=K[[x_0,\dots,x_d]]/(x_0^n+\cdots+x_d^n), \qquad\text{where $n>d$}.$$ Does $x_0^d$ belong to the dagger closure of the ideal $(x_1,\dots,x_d)$? A routine computation of tight closure shows that $x_0^d\in(x_1,\dots,x_d)^*$. In the case $d=2$, we have $x_0^2\in(x_1,x_2)^\dagger$ by Corollary \[cor:dim2\].

C. Chevalley, *Sur la théorie de la variété de [P]{}icard*, Amer. J. Math. **82** (1960), 435–490. E. G. Evans and P. Griffith, *The syzygy problem*, Ann. of Math. (2) **114** (1981), 323–333. G. Faltings, *Almost étale extensions*, in: Cohomologies $p$-adiques et applications arithmétiques II, Astérisque **279** (2002), 185–270. O. Gabber and L. Ramero, *Almost ring theory*, Lecture Notes in Mathematics **1800**, Springer-Verlag, Berlin, 2003. S. Goto and K.-i. Watanabe, *On graded rings. I*, J. Math. Soc. Japan **30** (1978), 179–213. A. Grothendieck, *Éléments de géométrie algébrique. II. Étude globale élémentaire de quelques classes de morphismes*, Inst. Hautes Études Sci. Publ. Math. **8** (1961), 5–222. R. C. Heitmann, *Extensions of plus closure*, J. Algebra **238** (2001), 801–826. R. C. Heitmann, *The direct summand conjecture in dimension three*, Ann. of Math. (2) **156** (2002), 695–712. R. C. Heitmann, *Extended plus closure and colon-capturing*, J. Algebra **293** (2005), 407–426.
M. Hochster, *Topics in the homological theory of modules over commutative rings*, CBMS Reg. Conf. Ser. Math. **24**, AMS, Providence, RI, 1975. M. Hochster, *Canonical elements in local cohomology modules and the direct summand conjecture*, J. Algebra **84** (1983), 503–553. M. Hochster and C. Huneke, *Tight closure, invariant theory, and the [B]{}riançon-[S]{}koda theorem*, J. Amer. Math. Soc. **3** (1990), 31–116. M. Hochster and C. Huneke, *Tight closure and elements of small order in integral extensions*, J. Pure Appl. Algebra **71** (1991), 233–247. M. Hochster and C. Huneke, *Infinite integral extensions and big [C]{}ohen-[M]{}acaulay algebras*, Ann. of Math. (2) **135** (1992), 53–89. C. Huneke and G. Lyubeznik, *Absolute integral closure in positive characteristic*, Adv. Math. **210** (2007), 498–504. S. Izumi, *A measure of integrity for local analytic algebras*, Publ. Res. Inst. Math. Sci. **21** (1985), 719–735. D. Mumford, *Abelian varieties*, Tata Institute of Fundamental Research Studies in Mathematics **5**, Oxford University Press, London, 1970. C. Peskine and L. Szpiro, *Dimension projective finie et cohomologie locale*, Inst. Hautes Études Sci. Publ. Math. **42** (1973), 47–119. P. Roberts, *Le théorème d’intersection*, C. R. Acad. Sci. Paris Sér. I Math. **304** (1987), 177–180. J.-P. Serre, *Local algebra*, Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2000. J. T. Tate, *$p$-divisible groups*, in: Proc. Conf. Local Fields (Driebergen, 1966), pp. 158–183, Springer, Berlin, 1967.

[^1]: P.R. and A.K.S. were supported in part by grants from the National Science Foundation. V.S. was supported by a Swarnajayanthi Fellowship of the DST.
With the government raising the bar ever-higher on standards within the education system, a skills gap is becoming ever-more evident in UK classrooms. For example, secondary school teachers are expected to include cross-curricular links to literacy within their subject material, and primary school teachers have to teach a more demanding maths and English curriculum as well as modern foreign languages and computer coding skills. As there seems to be no additional funding to bridge this skills gap or recruit more staff, how can schools manage the expectation for teachers to work across multiple subjects?

Lack of Subject Knowledge

While it may sound excessive to expect teachers, especially those who have been working in the profession for some time, to suddenly have an expert working knowledge of subjects which are outside the scope of their experience, the reality is that teachers up and down the country are regularly standing in front of classes to teach subjects in which they lack knowledge and skills. Computer coding, for example, is just one area in which there appears to be an enormous skills shortage in schools, and one survey (from as long ago as 2015) reported that almost a third of UK educators feel that they lack the ability to teach the subject effectively. In some other subjects such as primary-level French, teachers who have never studied the language are now expected to teach it, and in many cases are literally one lesson ahead of their pupils.

Lack of Training

Of course, lack of subject knowledge is not the only area in which today’s teachers are missing the skills that they need. Many educators complain that short PGCE training courses and recruitment initiatives such as Teach First have left newly qualified teachers with a lack of behaviour management strategies and key assessment skills. As the problem becomes more pronounced, heads and leadership teams are facing the problem of how to tackle the situation: do they train existing staff, or do they simply hire new staff who can plug the gap?

Training Existing Teachers

Some people may believe that hiring in staff who can supply the required skills is the answer, but this is not always the best solution when it comes to education. A Skillsoft survey of business leaders showed that over half of all managers would choose to recruit externally rather than offer their existing staff the training they would need to enhance their skills, yet in education this approach is often counterproductive. For many areas in which teachers lack skills, some high quality CPD training would be quite sufficient to resolve the problem. For example, in areas such as behaviour management, assessment strategies and effective planning and differentiation, training courses can provide the necessary knowledge paired with the inspirational ideas that can make a big difference when translated to the classroom. However, there are other areas in which recruiting new staff makes sense.

Recruiting to Plug the Skills Gap

When teachers lack subject knowledge, it is harder to address the problem through training. After all, no amount of training courses can replace years of studying a language, for example. When teaching a subject requires in-depth knowledge in order to impart it effectively to a class, it makes sense to recruit a professional who specialises in the field to ensure that the pupils’ education does not suffer.
Headteachers must now decide where the gaps lie in their own schools, and find the best way of addressing those skills shortages: helping existing staff to maximise their abilities while engaging high-quality new professionals where specialist knowledge is needed.
https://blog.edclass.com/recruit-or-train-which-is-best-when-you-have-a-skills-gap-in-your-school/
Over the past three decades, Israel has experienced a large labor migration, introducing non-Jewish migrants into Israeli society for the first time. Given the nature of the political regime, a non-liberal democracy defined as an ethnic Jewish state, this has had very meaningful consequences not only for the economy of the state, but for its identity as well. We argue in this paper that the gradual changes in the nature of the political regime have far-reaching consequences not merely for the political horizon of the new labor migrants, but for the entire community of citizens, and most specifically for already marginalized citizen minorities. It is the purpose of this paper to disaggregate the debate regarding the relationship between migration and citizenship by examining the impact of migration on polities or regimes whose citizenship structures are non-liberal and often highly ethnicized, and which hence contain significant marginalized veteran minorities. Empirically, we focus on the case of Israel, and the impact of its recent labor migration on the patterns of citizenship belonging and membership towards its most significant civic minority, the Arab Palestinians.
https://cris.bgu.ac.il/en/publications/labor-migration-citizenship-and-minorities-in-non-liberal-democra
Increasing administrative requirements waste researcher time and taxpayer money

Excessive regulations are consuming scientists’ time and wasting taxpayer dollars, says a report released today by the National Science Board (NSB), the policymaking body of the National Science Foundation and advisor to Congress and the President. “Regulation and oversight of research are needed to ensure accountability, transparency and safety,” said Arthur Bienenstock, chair of the NSB task force that examined the issue. “But excessive and ineffective requirements take scientists away from the bench unnecessarily and divert taxpayer dollars from research to superfluous grant administration. This is a real problem, particularly in the current budget climate.” Thousands of federally funded scientists responded to NSB’s request to identify requirements they believe unnecessarily increase their administrative workload. The responses raised concerns related to financial management, grant proposal preparation, reporting, personnel management, and institutional review boards and animal care and use committees (IRBs and IACUCs). Scientists and institutions pinpointed regulations they believe are ineffective or inappropriately applied to research, and audit and compliance activities that take away research time and result in university over-regulation. “Escalating compliance requirements and inconsistent audit practices directly impact scientists and the time they have to perform research and train students and staff,” said Kelvin Droegemeier, NSB vice chairman and a member of the task force. The report, Reducing Investigators’ Administrative Workload for Federally Funded Research, recommends limiting proposal requirements to those essential to evaluate merit; keeping reporting focused on outcomes; and automating payroll certification for effort reporting. The NSB further recommends an evaluation of animal research, conflict of interest, and safety and security requirements, and encourages universities to review their IRB and IACUC processes to achieve rapid approval of protocols. The report cites a continued lack of consistency in requirements within and between federal agencies and recommends the creation of a permanent high-level, inter-agency, inter-sector committee. The committee would address the recommendations in the NSB and other reports; identify and prioritize, with stakeholder engagement, additional opportunities to streamline and harmonize regulations; and help standardize the implementation of new requirements affecting investigators and institutions. “Streamlining research regulations and making requirements more consistent across federal agencies is in the best interest of scientists and taxpayers,” said Bienenstock.
https://innovationtoronto.com/2014/05/excessive-regulations-turning-scientists-bureaucrats/
Uganda is currently hosting more than 1.2 million refugees, 60% of whom are children (World Bank, 2017). An average of 2,000 people has arrived daily since July 2016, and the pressure on public services and local resources is immense, with the authorities and locals visibly struggling to keep up. The social problems facing these already marginalized children, young adults and women include lack of access to basic education, vocational skills, economic empowerment, and sexual and reproductive health and rights... Children and women are the most vulnerable to exploitation and sexual violence, and their refugee status deprives them of education and economic empowerment opportunities. Only able to attend local school until the age of 12, they leave with limited job prospects, and consequently, many are tempted away from their community to seek low-paid jobs. In addition, lack of access to key 21st-century skills that would contribute to their career and human development is part of the social exclusion facing these already marginalized youths. In our findings on serious issues affecting refugee children, young adults and women in Kampala, ninety-nine percent (99%) of the 120 people interviewed reported that idleness was the main driver of their adversity, unhappiness and suffering. They said: “We are already facing tremendous difficulties including discrimination, family disapproval, social isolation, sexual violence, unwanted pregnancies... These threats cause poor self-esteem and feelings of shame and lead us to more emotional distress, suicide attempts, substance use, and risky sexual behavior. We do not know who we are and what we are. We are already dead!” Other findings showed that adolescent girls aged 14–16 years accounted for 40% of new unwanted pregnancies and sexually transmitted infections among all 120 persons interviewed. Data suggest this number is large enough to warrant special attention and targeted intervention, such as this computer coding project for children and women’s fun and economic empowerment. Your donation will be used exclusively to develop the proposed computer coding project for children and women’s fun and economic empowerment, equipping these already marginalized people with the training and support they need to become innovative social entrepreneurs of the future and blossom into confident citizens. In keeping with these objectives, we intend to empower, by the end of the year 2019, at least 300 young people (60% girls vs. 40% boys) aged 7–25 to learn how to use computer science to create fun and interactive stories, video games, mobile apps, drawings, cartoons, websites and more digital products themed on various topics including gender-based violence, children and climate change, peace and conflict resolution, sexual and reproductive health and rights, and many other life issues. Our methodology and products are genuinely educational and stimulate conversations on these sensitive life issues. We find it crucial that the most marginalized children and adolescent girls have proper access to correct information and learn to communicate about respect for, and non-violation of, their human rights, and about sexuality (even before they become sexually active themselves).
As such, the computer coding project for refugee children and women’s fun and economic empowerment contributes towards the social inclusion of these children and women, as it will lay a promising professional foundation for them to become proficient in the targeted computer science skills. This project directly supports the Sustainable Development Goals: SDG 4, to ensure inclusive and equitable quality education and promote lifelong learning opportunities for all; SDG 5, to achieve gender equality and empower all women and girls; and SDG 10, to reduce inequality within and among countries.
https://www.refugee-friends-care.org/child-computer-science/
Today the Government has introduced the Subsidy Control Bill and published the Government’s response to the consultation, “Subsidy control: designing a new approach for the UK”. The UK government has seized the opportunity presented by our exit from the European Union to develop a new, bespoke regime for subsidy control within the UK. This new regime has been designed to reflect our strategic interests, strengthen our Union and help to drive economic growth and prosperity across the whole of the UK. The new regime will be flexible, agile, and tailored to support business growth and innovation, as well as help to maintain a competitive free market economy and protect competition and investment in the UK. Between 3 February and 31 March 2021, the Government held a public consultation on the UK’s future subsidy control proposals. The Government has used responses to the consultation to inform the design of a bespoke and dynamic framework, which will:

- Empower local authorities, public bodies, and central and devolved governments to design subsidies that deliver strong benefits for the UK taxpayer.

- Enable public authorities to deliver subsidies that are tailored and bespoke for local needs to support the UK’s economic recovery and deliver UK Government priorities such as levelling up, achieving net zero and increasing UK R&D investment.

- Provide certainty and confidence to businesses investing in the UK, by protecting against subsidies that risk causing distortive or harmful economic impacts, including to the UK domestic market.

- Contribute to meeting the UK’s international commitments on subsidy control, including its international commitments at the World Trade Organisation, in Free Trade Agreements and the Northern Ireland Protocol.

The foundation of this new domestic subsidy control regime is a clear, proportionate, and transparent set of principles, underpinned by guidance, that will ensure public authorities fully understand their legal obligations and embed strong value for money and competition principles. The Government will create streamlined routes to demonstrate compliance for categories of subsidies which are at low risk of causing market distortions, which promote our strategic policy objectives, and which the Government judges to be compliant with the principles of the regime. This will ensure that these authorities are able to deliver these subsidies with minimum bureaucracy and maximum certainty. In order to protect UK competition and investment, and to identify where it is proportionate for public authorities to give greater scrutiny to their subsidies, we will create two specific categories for the small number of subsidies that require more extensive analysis of their compliance with the principles: Subsidies of Interest and Subsidies of Particular Interest. Criteria for these subsidies will be set out in secondary legislation in due course. We anticipate there will be a very small number of subsidies in each of these categories. The Bill also establishes an independent body, a UK Subsidy Advice Unit in the Competition and Markets Authority (CMA). The Subsidy Advice Unit will have a role in monitoring and overseeing how the regime is working as a whole, as well as conducting a mandatory, non-binding review of public authorities’ assessments for Subsidies of Interest and Subsidies of Particular Interest. Enforcement will be through the Competition Appeal Tribunal, which will hear judicial reviews against subsidy decisions.
The Government has designed a subsidy control scheme that promotes a dynamic market economy throughout the UK and that minimises distortions within the UK. To ensure that this system works for all parts of the UK, the government has worked closely with the devolved administrations throughout this process, including meeting the statutory duty to share the consultation response document ahead of publication and to consider the devolved administrations’ representations. The measures in the Subsidy Control Bill strike the right balance between maximising our new-found flexibilities, having left the EU, and providing a consistent framework for all UK public authorities. The Bill will ensure that the UK maintains a competitive, free market economy – which is fundamental to our national prosperity – while protecting the interests of the British taxpayer. I will lay the Government Response to the Subsidy Control consultation before Parliament and will place a copy of the Impact Assessment in the Libraries of the House.
https://questions-statements.parliament.uk/written-statements/detail/2021-06-30/hcws134
After enduring enough abuse from some people on this site for not including the source in my last article, I guess they have succeeded. Here it is, with the source all you codeheads have wanted. The sample has no executable, so the source would have to be compiled. VC++ 6.0 automatically registers the control. Below is a list and description of the interfaces and how they may be utilized.

The ruler itself consists of one CWnd-derived class, CScale. The implementation is about 2000 lines of code, so I won't be explaining everything here. The function names are pretty intuitive, so you shouldn't have a problem running through them.

void SetRulerInfo(short nLowerLimit, short nUpperLimit, short nScalingFactor, BOOL bHorz, BOOL b3DLook, BOOL bAutoResize);

This is the only method I thought needed explaining; as you can see, the rest are just getter and setter methods, with the exception of the message senders (which I'll get to soon). This method is used to set the properties of the ruler at run time in one atomic operation. The parameters are explained as follows:

short nLowerLimit: As the name suggests, sets the lower bound of the scale (left/top for a horizontal and vertical ruler respectively). The lower bound is a pixel location in client coordinates.

short nUpperLimit: This is the partner of the lower bound.

short nScalingFactor: The scaling factor determines what interval is used to draw major and minor tickers. In the sample, 5 is used. See demo for illustration.

BOOL bHorz: TRUE to create a horizontal ruler (default). FALSE for vertical.

BOOL b3DLook: TRUE for 3D borders (default). FALSE for flat. See demo.

BOOL bAutoResize: This feature allows the ruler's scale to be resized at runtime without calling the setter methods. When on, resize handles appear at the side of the ruler.

These are just setter and getter methods for the above properties:

void SetLowerLimit(short nLowerLimit);
void SetUpperLimit(short nUpperLimit);
void SetScalingFactor(short nScalingFactor);
void SetLook(BOOL bLook3D);
void SetAlignment(BOOL bHorz);
void SetAutoResize(BOOL bAutoResize);
short GetLowerLimit();
short GetUpperLimit();
short GetScalingFactor();
BOOL IsHorzAligned();
BOOL Has3DBorders();

Mouse event firers:

void StartTracking(short nFlag, OLE_XPOS_PIXELS nX, OLE_YPOS_PIXELS nY);
void StopTracking(short nFlags, OLE_XPOS_PIXELS nX, OLE_YPOS_PIXELS nY);
void Track(short nFlags, OLE_XPOS_PIXELS nX, OLE_YPOS_PIXELS nY);

The above events are fired as a result of the mouse down, mouse up, and mouse move events respectively. nX and nY are the points in screen coordinates and should be converted into client coordinates before use. The nFlag parameter indicates which scaler is being used: 0 for regular arrow movement, 1 for the left bar and 2 for the right bar. See demo for illustration.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
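For readers who want a quick starting point, here is a minimal usage sketch. The dialog class, member name and header file name below are hypothetical; only SetRulerInfo and the setter methods come from the interface listed above, and the sketch assumes the CScale window has already been created (for example by subclassing a placeholder control).

// Hypothetical host dialog; assumes m_wndRuler is already created.
#include <afxwin.h>
#include "Scale.h"   // assumed header name for the CScale class

class CRulerDlg : public CDialog
{
public:
    CScale m_wndRuler;   // the ruler control

    virtual BOOL OnInitDialog()
    {
        CDialog::OnInitDialog();

        // One atomic call: scale from pixel 10 to 410 (client coordinates),
        // tickers every 5 units, horizontal, 3D look, resize handles on.
        m_wndRuler.SetRulerInfo(10, 410, 5, TRUE, TRUE, TRUE);

        // Properties can also be adjusted individually at run time:
        m_wndRuler.SetScalingFactor(10);  // coarser ticker interval
        m_wndRuler.SetLook(FALSE);        // switch to the flat look

        return TRUE;
    }
};

How the control window itself gets created (a dynamic Create versus subclassing a dialog-template placeholder) is not covered by the interface list above, so that part is left out of the sketch.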
https://codeproject.freetls.fastly.net/Articles/2248/A-general-purpose-ruler-control
BACKGROUND AND PURPOSE: Widespread brain structural changes are seen following extended spaceflight missions. The purpose of this study was to investigate whether these structural changes are associated with alterations in motor or cognitive function. MATERIALS AND METHODS: Brain MR imaging scans of National Aeronautics and Space Administration astronauts were retrospectively analyzed to quantify pre- to postflight changes in brain structure. Local structural changes were assessed using the Jacobian determinant. Structural changes were compared with clinical findings and cognitive and motor function. RESULTS: Long-duration spaceflights aboard the International Space Station, but not short-duration Space Shuttle flights, resulted in a significant increase in total ventricular volume (10.7% versus 0%, P < .001, n = 12 versus n = 7). Total ventricular volume change was significantly associated with mission duration (r = 0.72, P = .001, n = 19) but negatively associated with age (r = −0.48, P = .048, n = 19). Long-duration spaceflights resulted in significant crowding of brain parenchyma at the vertex. Pre- to postflight structural changes of the left caudate correlated significantly with poor postural control, and those of the right primary motor area/midcingulate correlated significantly with completion time on a complex motor task. Change in volume of 3 white matter regions (bilateral optic radiations, splenium of the corpus callosum) significantly correlated with altered reaction times on a cognitive performance task. In a post hoc finding, astronauts who developed spaceflight-associated neuro-ocular syndrome demonstrated smaller changes in total ventricular volume than those who did not (6.5% versus 12.8%, n = 4 versus n = 8). CONCLUSIONS: While cautious interpretation is appropriate given the small sample size and number of comparisons, these findings suggest that brain structural changes are associated with changes in cognitive and motor test scores and with the development of spaceflight-associated neuro-ocular syndrome.
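A brief editorial note on the morphometric measure named above (background explanation, not part of the abstract): in deformation-based morphometry, a deformation field φ maps each preflight voxel position x to its postflight position. Local volume change is read off the Jacobian determinant,

det J(x) = det(∂φ(x)/∂x),

where det J(x) > 1 indicates local expansion and det J(x) < 1 indicates local contraction, which is how ventricular enlargement and parenchymal crowding can be localized voxel by voxel.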
http://www.ajnr.org/content/40/11/1878.abstract
Interindividual variability in Swiss male mice: relationship between social factors, aggression, and anxiety. In the present study we carried out a series of experiments in Swiss albino male mice to investigate a) the effects of previous social experience on levels of anxiety in the elevated plus-maze (EPM) and b) whether the response of males in the EPM differs in relation to social status. In Experiment 1 we tested in the EPM male mice that had received different social experience. Results showed that individual housing generally increased measures of anxiety in the EPM compared with the group-housing condition. Moreover, aggressive males, screened during dyadic encounters in a neutral cage, displayed the highest levels of anxiety relative to the other experimental conditions. In Experiment 2 male mice remained group-housed and were observed in order to record their social status. Results showed that animals rated as socially dominant displayed higher levels of EPM anxiety relative to subordinates. From an ethological perspective our findings may be interpreted in terms of coping strategies, with aggressive/dominant animals typified by higher levels of risk assessment and open-arm avoidance than defensive/subordinate animals.
LETTERS: Tegart's view on electoral reform misguided The Merritt Herald welcomes your letters, on any subject, addressed to the editor. Letters must be signed and include the writer's name, address and phone number for verification purposes. Letters may be edited for length, taste and clarity. Please keep letters to 300 words or less. Email letters to: [email protected]. Current system does not represent British Columbians Contrary to the partisan negative propaganda of our local Liberal MLA Jackie Tegart, our current First Past the Post (FPTP) electoral system does not fairly represent the constituencies of B.C. In fact, rarely in Canada, including B.C., has a majority of the voters elected a majority government. In the last two federal elections, majority governments have been elected with 39 per cent of the vote, leaving the real majority of 61 per cent of the voters without representation. In fact, most progressive democracies throughout the world function considerably better with some sort of proportional representation governing body than do the minority of democracies using FPTP. Ironically, our local MLA uses the small country of Belgium, on the brink of dividing into two countries, as her model. The reality is that in true proportional representation, every vote counts. This forces governments to collaborate and work cooperatively, rather than wasting time and resources bickering over unqualified partisan values. Unfortunately, it is very difficult to articulate how a proportional representation system would elect our MLAs. Personally, I'm advocating for a system that elects its riding MLAs with the traditional FPTP, then has all parties add members to the legislature to bring them up to their percentage of the popular vote. This is sometimes referred to as Mixed Member Proportional (MMP). With this system, no matter which party you vote for, you will be represented, as long as that party obtains a minimum threshold (probably 3 to 5 per cent) of the vote. So if you're fed up with being represented by a majority government with a minority of the vote, write to your MLA and demand better, or better yet, vote yes in the 2018 referendum. Art Green Hope, B.C. Liberals enjoyed governing without majority support Editor, Jackie Tegart claims our present electoral system is democratic and is based on the principle that everyone's voice is equal. The 2013 election results illustrate just how wrong this assertion is. The Liberals received a minority of the vote (44 per cent) but took 58 per cent of the seats. Under our present system that minority popular vote gave them a majority of the seats in the house and thereby all the power in government. I totally fail to see how on earth this system can be held forth as democratic. Ms. Tegart criticizes the present government for calling for a simple majority on the coming proportional representation (PR) referendum rather than 60 per cent. Given that the last time the Liberals formed government, they did so with a popular vote well under a majority, I am once again challenged to follow the logic. It is worth noting that the "failed" referendum of 2005 actually received 58 per cent of the votes. That wasn't enough for the Gordon Campbell Liberals. The next criticism of proportional representation Ms. Tegart trots out is that it can lead to smaller parties and coalition governments. Part of democracy is that all political views should be represented according to their support in elections.
Smaller parties with enough support should be represented in government. PR does not "encourage coalitions," but it can lead to minority governments. Minority governments are not in themselves bad; they can lead to a government where only the best ideas receive support. The false majorities of first past the post systems lead us to governments like those of Christy Clark and Stephen Harper, where leaders run the show with no regard whatsoever for democratic principles. All democratically elected governments in the world, other than those of the U.S., the U.K. and Canada, use some form of proportional representation. While those governments might have a problem once in a while actually getting their house in order, those problems pale in comparison to first past the post countries. Just consider Donald Trump, who got elected with a minority of the votes cast. I would argue that Liberal opposition to PR is not based on any embrace of democratic principles. As evidenced by Christy Clark's desperate efforts to cling to power after the last election, the Liberal principle is to hold power at any cost. Through well-funded research and polling, the Liberals know where to focus their extensive campaign advertising efforts. "Swing ridings" receive most of the attention at the cost of other ridings. Ms. Tegart's claim that PR would be bad for rural B.C. is simply more of the divisive fear mongering pulled out of the Liberal playbook. A PR electoral system would give all B.C. voters an equal say in forming the government whether they live in rural or urban ridings. Tim Larsen Merritt, B.C. Reform could lead to better representation Editor, I would like to comment on the recent article by Ms. Tegart to the effect that proportional representation would be bad for rural B.C. I do not believe that to be the case, because there would still be just as many MLAs for local voters as under the existing system, but they would better reflect the political diversity of the region. Just as the NDP won't win every vote in the Vancouver area, the Liberals won't win every vote in the Interior. Many voters opted for the Liberals in the Lower Mainland, while by the same token many voters preferred the NDP in the Cariboo region. Under proportional representation, nobody would lose representation; rather, each region would have a strong, multi-partisan voice, and would always have some MLAs who are in government and some in opposition. I believe that first-past-the-post is the system that is broken. Under first-past-the-post, fewer than 50 per cent of the votes cast can elect, and usually have elected, more than 50 per cent of the seats and gained 100 per cent of the power. How can that represent voter preferences? David R. Pearce Victoria, B.C. We can do better than winner-take-all politics Editor, It's been my experience in life that establishing consensus takes time, but that it is a healthier way of doing things than having the big kid make all the rules unilaterally. It seems, in Belgium, that this is the approach they have chosen rather than having to deal in other ways with the serious linguistic divisions in that country. I respect their choice and understand that it is working well for them, all things considered. In B.C., in the last provincial election, we ended up with the sort of result that would be normal under a proportional system, in which no party obtains a majority of seats based on 40 per cent of the vote.
This did not lead to gridlock, but rather to a more collaborative form of government involving two parties that together secured 57 per cent of the popular vote. If that is what proportional representation is all about, I say, "Bring it on!" I would encourage MLA Tegart to do the same. We can do better than our current winner-take-all system of politics.
The Steve Reich composition Drumming (1971) is considered a monument in the canon of contemporary music. It is a signifier of the movement of Minimalism, launched by La Monte Young (born October 14, 1935), who forms a triumvirate with Reich (born October 3, 1936) and Philip Glass (born January 31, 1937). The style, in this case pure percussion, entails seemingly endless repetition of phrases and rhythms with very subtle shifts and nuances. On first exposure there is a sameness that is soporific and trance inducing. For the initiated it is an invitation to reverie as the senses pulsate with the thread that flows and wafts us along with an assault of associations. Depending on the receptivity of the individual, the experience is either exotic and meditative or enervating. Once the hour-long performance of Drumming commences, in this case with twelve musicians and nine dancers, it is a matter of Huis clos. As in Sartre's 1944 existential play, there is No Exit from a relentless and hypnotic aural and visual experience. It is important to note that there is no escape or moment of pause or release from a phenomenon that is a relentless, perhaps tyrannical, sensual assault. There is the ever shifting pulse and flow of the percussion echoed by the engaged dance. While constantly varied, it never lets up for a moment of sorbet to cleanse the palate between sections, which are performed edge to edge with no breaks. Decades ago, my first exposure to the minimalist style came through a live performance of Glass and his ensemble at Harvard's Sanders Theatre. Even as a jazz critic used to the avant-garde of Ornette Coleman, John Coltrane, Albert Ayler, Anthony Braxton, Cecil Taylor and others, the music initially confounded me. It was most accessible to approach it as an extension of jazz. During lunch meetings with Glass and Reich, arranged by a promoter of their music, I became disabused of that approach and came to find a footing with which to embrace the music. Once I surrendered and stopped imposing my notions of what it should be, and how to respond, I came to truly love the experience. The music often takes one to other places without the need for chemical assistance or involved initiation and training. It has a shorthand way of catapulting you to that other time and place so far far away or layered deep inside us. A recording of the Reich piece Drumming was a part of my studio music collection. An art studio can be a solitary place. Often the selection of music has a bearing on releasing the flow of the creative act. This can entail hours of the most intensive concentration. In this context Drumming was always a welcome studio companion. So the music and experience were readily familiar, and I much anticipated this evening at Pillow. It was the first time that I heard the piece live. In comments before the performance, Ella Baff, the artistic director of Pillow, informed us that the 1999 dance set to Reich's music had become a classic of modern dance. By popular demand it has been reintroduced into the repertoire of the company, which had not performed at Pillow in a decade. The curtain parted revealing an open stage with uniform medium lighting designed by Axel Morgenthaler. There appeared to be no lighting cues throughout the dance. This was an aspect of a presentation of the whole, the ensemble of musicians and dancers, with little or no emphasis on individuals.
There were segments that involved single dancers, partners (including same-sex), and groups of four (male and female), but no sense of any hierarchy of stars or of spectacular, audience-thrilling individual efforts. The musicians occupied a compact side of the stage, navigating their way around groupings of eight bongo drums, three marimbas, and three glockenspiels. There were also two voices and a whistler/piccolo player. There were four parts or movements in the seamless composition. Before the percussion began, a single dancer walked/paced about the stage, gradually joined by three pairs (male/female). Then four. Normally bongos are hand drums, but here they were played with sticks. This results in a sharp, high-pitched sound. As the beat commenced one marveled at its intricacy and precision, particularly when it entailed shifts that require an attentive ear to calibrate. From the standpoint of musicianship this performance, as conducted by Walter Boudreau, was truly flawless. Remarkably, with riveting intensity, over an hour's duration the musicians never missed a single beat. For a very demanding composition that's an amazing accomplishment. The same may be said for the dancers, who displayed control and precision. Maintaining a flow of perpetual motion surely must be taxing. The choreography of Laurin, however, established a limited vocabulary of movements. This entailed lifts, kicks, walking/pacing, and segments on the floor. Given the stark nature and confines of the music there is no room for narrative, although one might, if only because of a horror vacui, find some humanistic aspect of relationships. Mostly, however, the dance was as deadpan and locked in as the percussion. The bongo movement shifted to marimbas. The marimba has the possibility of melody because of the range of its wooden keys, but here the emphasis is on an organic percussive tone. It is perhaps why Reich did not opt for the vibraphone and its latent potential for melody. Similarly, the change to glockenspiels truncated their potential for melody, preferring to evoke a different pitch and texture in the sustained time signatures and their internal shifts. Truth be told, it was a glut of textures, rhythms, and movements to absorb. The sameness, unless one paid strict attention, devolved into lapses, drifting away from what was on stage. It induced an ebb and flow of consciousness. You would bolt out of a dream state and, with a shake of the head, attempt to refocus. There were some points to be distracted by or to fix on. Like the shock of bright red hair of the dancers and their initial appearance in generic street clothes. Then, from the midpoint toward the end, the women appeared in what a colleague described as short shorts but to me were red satin panties. Overall, I have mixed responses to seeing that iconic music set to dance. Previously I was free to form my own mindscapes and imagery when listening to Drumming. In this instance it seemed too literal to see someone else's interpretation of what it looks like. For me the resultant dance was efficient and disciplined but ultimately uninspiring. Nothing about the dance was particularly definitive. À chacun son goût.
https://mail.berkshirefinearts.com/08-11-2013_o-vertigo-danse-at-jacob-s-pillow.htm
How To Make Mead at Home – (Easy Step-by-Step) Mead is one of the oldest alcoholic beverages consumed today, and making it at home isn't as challenging as making homemade wine or beer. But, of course, if you've never made mead at home, you might not be sure where to start. You can make mead at home by dissolving honey in warm, purified water, then adding wine yeast and additional ingredients to the mixture. The yeast will cause the honey-water mixture to ferment, generating alcohol. The fermentation process typically takes several weeks to complete. This guide will explore each step of the mead-making process, ensuring that you can make the highest-quality homemade mead all on your own. 1. Choose a Mead Recipe That Suits Your Tastes Before you prepare your brewing space or gather the necessary tools, you'll want to select a mead recipe that appeals to your preferences and tastes. For example, the most basic mead recipes consist solely of water, honey, and yeast. But others might include spices and fruits. It's also crucial to choose a recipe with a listed yield that meets your needs. Unless you're planning on bottling your mead, starting with a recipe with a one-gallon yield is best. Otherwise, you could end up with an excessive amount of mead that spoils before you can enjoy it. Remember, the smaller the listed yield for a recipe, the fewer ingredients you'll need (a worked example of scaling follows below). Choosing a one-gallon mead recipe is a fantastic way to stick to a tight budget. Here are two example recipes for one-gallon yields to help you get started, one basic and one a little more creative! Basic One-Gallon Mead Recipe If you'd like to keep things simple, you might want to opt for a basic recipe. To create a single gallon of straightforward homemade mead, you'll need:
- Half a gallon of filtered water (about 2.27 liters).
- Three pounds of honey (about 1.36 kilograms).
- Half an envelope of wine or mead yeast (about 2.5 grams).
But, of course, adding fresh fruits and spices is a surefire way to create a more flavorful concoction! One-Gallon Mead Recipe (With Spices) While basic mead is a sweet alternative to bitter alcoholic beverages, there are ways to imbue your mead with a tasty medley of spices and flavors. Dried and fresh fruits are common additions to homemade mead mixtures, including:
- Raisins
- Blueberries
- Plums
- Oranges
Spices like cinnamon, nutmeg, and cloves are also excellent ingredients to add to your pre-fermented mead. One example of a fruity, spice-filled homemade mead recipe is:
- Half a gallon of filtered water (about 2.27 liters).
- Three pounds of honey (about 1.36 kilograms).
- One whole cinnamon stick.
- One cup of fresh fruit (berries or citrus).
- Half an envelope of wine or mead yeast (about 2.5 grams).
Once you've chosen a recipe that suits your needs and personal preferences, you can move on to the next step: gathering ingredients. 2. Select Ingredients Based on Your Chosen Recipe You can't make mead without having the right ingredients on hand, the most essential of which are honey, yeast, and water. But how much honey do you need, and what about additional ingredients? The answers to these questions vary depending on your chosen recipe. If, for example, you've chosen a low-yield recipe (one gallon or less), you'll likely need far less honey than if you've chosen a high-yield one. And if you've selected a basic mead recipe, you won't need to worry about stocking up on spices or fruits to add to your mead mixture.
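As a quick worked example of how yield affects quantities (the arithmetic is ours, extrapolated from the basic recipe above, not part of the original recipe): scaling is roughly linear, so a five-gallon batch would call for about 2.5 gallons (9.5 liters) of filtered water and 15 pounds (6.8 kilograms) of honey. Yeast is the one exception to strict scaling; in practice a single full packet (about 5 grams) of wine yeast is generally sufficient for batches up to five gallons. Multiply or divide every other ingredient by the same factor and the character of the mead stays the same; only the volume changes.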
In any case, it's crucial to read through your chosen recipe and note ingredient amounts. Doing so will ensure you invest in the appropriate amount of each item, making the mead-making process a stress-free experience. Still, no matter your chosen yield or recipe, you'll need a few common ingredients, including:
- Filtered water.
- Several pounds of honey (the more you add, the sweeter the final result).
- Wine or mead yeast.
Fruits and spices aren't required to make mead, but raisins, fresh berries, citrus, and spices like cinnamon are common additions that you may want to consider adding to your mixture. Once you've purchased the ingredients you need for your brew, it's time to double-check that you have all the necessary equipment to transform your honey-water mixture into delicious mead. 3. Gather the Necessary Equipment As with brewing homemade wine or beer, you'll need the right equipment to make mead at home. The most crucial items include:
- A glass container (sized to meet the recipe yield).
- A large stainless steel cooking pot (capable of holding several gallons of liquid).
- An airlock and stopper (sized to fit onto your glass container).
- A whisk or spoon.
- A funnel (sized to fit into the container).
While it's possible to use a plastic container in lieu of a glass one, the initial honey-water mixture you'll be pouring into your chosen container will be warm. Because heat can weaken plastic, a thick glass jug or carboy is often the best choice. If you're planning on making a large quantity of mead (more than one gallon), you'll also need sanitized homebrewing bottles, bottle caps, and a bottling press. Those hoping to make several gallons of homemade mead should also consider selecting fermenter buckets instead of glass containers. These tend to be more affordable and durable. It's also essential to have a sanitizing fluid on hand, like rubbing alcohol or unscented bleach. This liquid will ensure that your mead ferments properly and doesn't develop harmful molds during the fermentation process. 4. Prepare Your Brewing Area When you've selected and acquired the necessary equipment, it's time to prepare your brewing area. This means cleaning and sanitizing your equipment and the surfaces you'll be working on (a dilute solution of unscented bleach in water, or a 1:1 mix of rubbing alcohol and water, works well; rinse thoroughly afterward). It's also wise to select a dark area within your home for storing the fermenting mead. You could choose a closet, an unused bedroom, or a basement. The UV rays emitted by the sun can kill yeast cells, so keeping your mead mixture away from sunlight is crucial to the fermentation process. Ensuring that your fermenting beverage is stored at room temperature is also essential, as excessive heat (90°F/32°C or above) or cold can deactivate yeast, halting the fermentation process. When your equipment and work surfaces are sanitized (and you've chosen a storage spot for the mead), you can begin working on dissolving the honey, one of the most vital steps of the mead-making process! 5. Dissolve Honey in Warm Purified Water Place your cooking pot onto a range or stovetop and add purified (filtered) water. Consult your chosen recipe to find out how much water you need. Though you might be tempted to use tap water for this step, it's important to remember that tap water is often laden with minerals and chemicals like chlorine. These can kill yeast or negatively impact the taste of your mead.
For these reasons, only use purified water when making mead at home. When you've filled your cooking pot with an appropriate amount of water, turn the heat to low or medium-low. Wait for the water to begin heating up, allowing several minutes to pass. When the water is above room temperature, it's time to add the honey. Again, referring to your chosen recipe is essential, as it will help you add the right amount. Still, no matter how much honey the recipe calls for, you'll want to add it to the water slowly. This shouldn't be a problem, as honey is quite viscous (thick and slow-moving), so it flows far more slowly than thinner liquids. While adding the honey to the water, mix the water in the pot using a sanitized spoon or whisk. Active stirring can accelerate the mixing process and ensure the honey fully dissolves into the water. 6. Add Any Additional Ingredients You Desire When you've finished adding the appropriate amount of honey to your water-filled cooking pot, you can begin adding any additional ingredients listed in the recipe (except yeast). You can skip this step if you're working from a basic mead recipe. But if your chosen mead recipe includes spices or fruit, now is the time to add them! For the best result, dice or chop larger fruits and spices before adding them to the pot. Doing so will help the flavors of these additional ingredients meld with the liquid. Adding sugar-rich fruits can also help with fermentation, as yeast needs sugar to create alcohol. In fact, fruit skins often carry wild yeasts of their own, so adding fruit to your mead mixture may result in a beverage with a higher alcohol volume. 7. Pour Your Mixture Into the Jug or Bucket Turn off the heat and allow the mixture to come to room temperature. Then, place your sanitized funnel onto the opening of the glass jug or carboy (you can skip the funnel if using a fermenter bucket). Carefully pour your honey-water mixture into this container. When the pot is emptied, it's time to add the yeast. 8. Add Wine Yeast to the Mixture Adding the yeast to your dissolved honey-water mixture is essential to transforming it into mead. After all, mead is an alcoholic beverage; as in wine or beer, the alcohol is a byproduct of fermentation. Yeast, a living organism that consumes sugars and generates alcohol, is one of the most common ingredients used for fermentation. But there are several types of yeast, and not all are suitable for creating mead. For that reason, it's best to use wine yeast (or specialized mead yeast) when making mead at home. Fortunately, this type of yeast is easy to find for sale online and is very similar to the packets of instant yeast commonly found in grocery stores. Red Star Premier Classique wine yeast (formerly sold as Montrachet) is a fantastic example, available on Amazon.com in multi-packs. This yeast comes in easy-to-open packets and is exceptionally affordable. It's also an active type of yeast, so it will begin converting sugar to alcohol as soon as it's exposed to warm or room-temperature liquid. Simply rip your yeast packet open, pour it into your carboy or bucket, and mix well! 9. Shake or Stir the Mixture To Fully Combine the Ingredients The dissolved honey in your mixture may solidify and settle at the bottom of the container. Because honey is rich in sugars, this separation can make it almost impossible for the yeast to access the sugar it needs to generate alcohol.
Fortunately, a thorough shake or stir can help recombine your mead mixture's elements, ensuring that the yeast has what it needs to transform the honey-water into mead. If you're using a glass jug or carboy, seal the container with a cap and give it a thorough shake. If you're using a fermenting bucket, a long-handled (and sanitized) spoon ought to do the trick. You cannot overmix your liquid at this stage, so don't be afraid to spend a few minutes ensuring that your ingredients are well combined. A great sign of healthy, activated yeast is a layer of bubbly foam at the top of your liquid mixture. This layer generally develops after the mixture is left still for several minutes. When you're confident that your mixture is well combined, it's time to seal the liquid in by attaching an airlock and stopper. 10. Place an Airlock and Stopper Onto the Container If you've ever made homemade wine or beer, you're likely familiar with airlocks and stoppers. These components plug into the openings of fermenting buckets and carboys, creating a seal that stops airflow while allowing gas to exit. This prevents containers from overpressurizing and keeps liquids fresh and safe from airborne bacteria and mold spores. Installing an airlock and stopper is as simple as pressing it into the opening of your container. Investing in an airlock with a pre-attached stopper at the bottom is the most convenient option. Also called carboy bungs, these devices are affordable and readily available online. The Twin Bubble Airlock and Carboy Bung (Pack of 2) is a great choice for those using thick glass carboy containers. The bottom stopper fits into container openings easily and forms a seal, while the plastic tubing allows gases to escape. 11. Allow the Mixture To Ferment for Several Weeks This final step is all about patience. Keeping your mead mixture sealed during fermentation allows the yeast to convert the sugars into alcohol. Unsealing it too soon will result in a yeasty, low-alcohol concoction that's essentially sugar water. The time it'll take for your mead to finish fermenting varies depending on factors like temperature and quantity. However, most mead mixtures transform from sugary, yeast-filled water into a sweet alcoholic beverage after three to six weeks. Keeping your mixture in a dark, room-temperature area and leaving it alone (aka not shaking it) is key. One of the best ways to tell whether your mead has finished fermenting is to check the airlock. If you spot bubbles rising through the airlock, it needs more time. But if the airlock is bubble-free for several days, it's likely safe to remove the stopper and airlock. After that, simply strain the liquid through cheesecloth and enjoy! If you're preparing a large batch of mead, the end of the fermentation phase signals the beginning of the bottling phase. Naturally, you'll also want to strain your mixture before bottling. Be sure to read up on homebrew bottling tips to make this process as smooth as possible. Final Thoughts Whatever your chosen recipe, following these steps is a surefire way to enjoy a tasty homemade glass of mead. Experimenting with a variety of additional ingredients is a fantastic way to come up with a mead recipe that suits your tastes, so feel free to get creative!
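A brief technical footnote (an editor's addition, not part of the original guide): the transformation described in steps 8 through 11 is ordinary alcoholic fermentation, in which yeast converts sugar into ethanol and carbon dioxide:

C6H12O6 → 2 C2H5OH + 2 CO2

The carbon dioxide is what you see bubbling through the airlock, which is why a quiet airlock is a reasonable sign that the sugars are used up. Brewers who own a hydrometer (a tool this guide doesn't cover) often double-check by taking a specific-gravity reading before pitching the yeast and again at the end, then applying the common rule-of-thumb formula ABV ≈ (OG − FG) × 131.25. For example, an original gravity of 1.100 and a final gravity of 1.010 suggest roughly (1.100 − 1.010) × 131.25 ≈ 11.8% alcohol by volume. The readings will vary with your recipe; the figures here are illustrative only.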
https://homebrewadvice.com/how-to-make-mead-at-home